Guides

Learn how to integrate Yachtsy Agent into your sailing applications with these practical guides and tutorials.

Integration Guides

Web Applications

Integrate Yachtsy Agent into web apps with React, Vue, Angular, or vanilla JavaScript.

Use Case Examples

Coming soon! We're working on comprehensive guides for various sailing use cases including yacht brokerage, sailing education, trip planning, and maintenance assistance.

Framework-Specific Guides

For detailed framework integrations, check out our Web Integration Guide, which covers:

  • JavaScript/TypeScript: React, Vue.js, Next.js, and vanilla JavaScript
  • Python: FastAPI, Django, Flask, and Streamlit (coming soon)
  • Mobile: React Native, Flutter, iOS, and Android (coming soon)

Best Practices

Prompt Engineering for Sailing

Craft effective prompts for sailing-specific queries by grounding the model with a domain-focused system message:

{
  "role": "system",
  "content": "You are an experienced sailing instructor with 20+ years of blue water cruising experience. Provide practical, safety-focused advice suitable for recreational sailors. Always consider weather conditions and seamanship best practices."
}
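
For example, this system message can be paired with a user question in a single request. A minimal Python sketch, assuming the same local endpoint and placeholder API key used elsewhere in this guide:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[
        {
            "role": "system",
            "content": "You are an experienced sailing instructor with 20+ years of blue water cruising experience. Provide practical, safety-focused advice suitable for recreational sailors. Always consider weather conditions and seamanship best practices."
        },
        # Illustrative user question
        {"role": "user", "content": "How should I prepare for a gusty 25-knot passage?"}
    ]
)

print(response.choices[0].message.content)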

Handling Streaming Responses

Implement streaming so answers render incrementally for a smoother user experience:

// Example: Streaming with proper error handling
import OpenAI from 'openai';

// Client pointing at the Yachtsy Agent endpoint
const client = new OpenAI({
  baseURL: 'http://localhost:8000/v1',
  apiKey: 'your-api-key'
});

async function streamSailingAdvice(question) {
  try {
    const stream = await client.chat.completions.create({
      model: 'yachtsy-agent',
      messages: [{ role: 'user', content: question }],
      stream: true
    });

    // Append each token to the UI as it arrives
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) {
        updateUI(delta); // updateUI: your app's render callback
      }
    }
  } catch (error) {
    handleError(error); // handleError: your app's error reporter
  }
}
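
The same pattern works with the Python client used later in this guide; a minimal synchronous sketch, assuming the client setup shown in the error-handling example below:

stream = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[{"role": "user", "content": "How do I reef a mainsail in rising wind?"}],
    stream=True
)

for chunk in stream:
    # Each chunk carries an incremental piece of the answer (or None)
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)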

Error Handling

Robust error handling for production applications:

import openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-api-key"
)

try:
    response = client.chat.completions.create(
        model="yachtsy-agent",
        messages=[{"role": "user", "content": "sailing question"}]
    )
except openai.AuthenticationError:
    # Handle authentication errors
    print("Invalid API key")
except openai.RateLimitError:
    # Handle rate limiting
    print("Rate limit exceeded")
except openai.APIError as e:
    # Handle other API errors
    print(f"API error: {e}")

Performance Tips

Caching Responses

Cache common sailing questions to improve response times:

// Simple in-memory cache keyed by the normalized question text
const cache = new Map();

async function getCachedResponse(question) {
  const cacheKey = question.toLowerCase().trim();

  // Return a previously seen answer without another API call
  if (cache.has(cacheKey)) {
    return cache.get(cacheKey);
  }

  const response = await client.chat.completions.create({
    model: 'yachtsy-agent',
    messages: [{ role: 'user', content: question }]
  });

  cache.set(cacheKey, response);
  return response;
}

Batch Processing

Process multiple questions concurrently for better throughput:

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="your-api-key"
)

async def process_questions(questions):
    # Issue all requests concurrently and wait for every response
    tasks = []
    for question in questions:
        task = client.chat.completions.create(
            model="yachtsy-agent",
            messages=[{"role": "user", "content": question}]
        )
        tasks.append(task)

    responses = await asyncio.gather(*tasks)
    return responses
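
From synchronous code, the helper above can be driven with asyncio.run; the questions here are illustrative:

questions = [
    "How do I heave to in heavy weather?",
    "What sail trim works best on a beam reach?"
]

responses = asyncio.run(process_questions(questions))
for response in responses:
    print(response.choices[0].message.content)
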
Need help with a specific integration? Check out our detailed framework guides or contact support for custom implementation assistance.