Yachtsy Models

Yachtsy Agent

A general-purpose assistant with intelligent routing and conversational AI for all your sailing questions.

Overview

Yachtsy Agent is the router-based model in the Yachtsy lineup: it analyzes each sailing question and either routes it to the most appropriate specialized tool or answers it directly in conversation.

Best For: General sailing questions, conversational assistance, mixed queries that might need different types of expertise

Key Features

  • Intelligent Routing: Automatically selects the best tool for your question
  • Conversational AI: Natural dialogue for sailing advice and guidance
  • Multi-tool Access: Can access listings, research, surveys, and general knowledge
  • Chunked Streaming: Optimized streaming for mixed content types

When to Use

Choose yachtsy-agent when you:

  • Have general sailing questions
  • Want conversational assistance
  • Aren't sure which specialized model to use
  • Need mixed responses (advice + data + recommendations)

Usage Examples

Basic Sailing Questions

from openai import OpenAI

client = OpenAI(
    base_url="https://api.yachtsy.ai/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[{
        "role": "user",
        "content": "I'm planning my first offshore passage. What should I prepare?"
    }],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Equipment Recommendations

import OpenAI from 'openai';

// Set up the client against the Yachtsy endpoint
const client = new OpenAI({
    baseURL: 'https://api.yachtsy.ai/v1',
    apiKey: process.env.YACHTSY_API_KEY,
});

const response = await client.chat.completions.create({
    model: 'yachtsy-agent',
    messages: [{
        role: 'system',
        content: 'You are helping a sailor choose equipment for their boat.'
    }, {
        role: 'user',
        content: 'What autopilot system would you recommend for a 35ft sloop doing coastal cruising?'
    }],
    temperature: 0.7
});

Route Planning

curl -X POST https://api.yachtsy.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "yachtsy-agent",
    "messages": [{
      "role": "user",
      "content": "Plan a route from San Francisco to Monterey Bay. Include timing, weather considerations, and key waypoints."
    }]
  }'

Conversation Context

Maintain context for better assistance:

messages = [
    {"role": "system", "content": "You are helping with sail trim and boat performance."}
]

# First question
messages.append({
    "role": "user", 
    "content": "My boat feels sluggish upwind. What could be the issue?"
})

response = client.chat.completions.create(
    model="yachtsy-agent",
    messages=messages
)

# Add response to conversation
messages.append({
    "role": "assistant", 
    "content": response.choices[0].message.content
})

# Follow-up with boat details
messages.append({
    "role": "user",
    "content": "It's a 1995 Catalina 34 with a 110% genoa and full-batten main. The boat has been recently antifouled."
})

response = client.chat.completions.create(
    model="yachtsy-agent",
    messages=messages
)

Router Behavior

The agent automatically routes questions to appropriate tools:

Automatic Tool Selection

  • Boat listings queries → Routes to listings database
  • Technical specifications → Routes to boat profile data
  • Market analysis → Routes to market data tools
  • General sailing → Provides conversational assistance

Example Routing

# This will automatically route to listings search
response = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[{
        "role": "user",
        "content": "Show me Catalina 34 boats for sale under $50k"
    }]
)

# This will provide conversational sailing advice
response = client.chat.completions.create(
    model="yachtsy-agent", 
    messages=[{
        "role": "user",
        "content": "How do I reef a mainsail in heavy weather?"
    }]
)

Performance Characteristics

  • Response Time: 5-15 seconds (depends on routing)
  • Streaming: Chunked streaming (10-15 words per chunk)
  • Context Length: 4096 tokens
  • Temperature Range: 0.0-1.0 (default 0.7)
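
The timing and chunking behavior above can be observed directly from the stream. The sketch below reuses the client from the earlier examples; the helper name and timing logic are illustrative, not part of the API.

import time

def timed_stream(question: str) -> None:
    """Stream a response and report time-to-first-chunk and total time."""
    start = time.monotonic()
    first_chunk_at = None

    stream = client.chat.completions.create(
        model="yachtsy-agent",
        messages=[{"role": "user", "content": question}],
        stream=True
    )

    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            if first_chunk_at is None:
                first_chunk_at = time.monotonic()
            print(delta, end="", flush=True)

    if first_chunk_at is not None:
        total = time.monotonic() - start
        print(f"\n\nFirst chunk after {first_chunk_at - start:.1f}s, total {total:.1f}s")

timed_stream("How should I prepare a 35ft sloop for a coastal passage?")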

Best Practices

System Messages

Use system messages to set context:

{
  "role": "system",
  "content": "You are helping a beginner sailor learn basic sailing techniques. Explain concepts clearly and emphasize safety."
}

Temperature Settings

  • 0.1-0.3: Factual information and specifications
  • 0.4-0.7: Balanced advice and recommendations
  • 0.8-1.0: Creative problem-solving and trip planning
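
As a rough illustration of these ranges (a sketch reusing the client from the earlier examples, not an official recommendation), the same call changes character with temperature:

# Low temperature: factual specification lookup
specs = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[{
        "role": "user",
        "content": "What are the draft and displacement of a Catalina 34?"
    }],
    temperature=0.2
)

# Higher temperature: open-ended trip planning ideas
ideas = client.chat.completions.create(
    model="yachtsy-agent",
    messages=[{
        "role": "user",
        "content": "Suggest a two-week sailing itinerary through the Sea of Cortez."
    }],
    temperature=0.9
)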

Question Formatting

Good examples:

  • "What's the best 35ft sailboat for coastal cruising with a $150k budget?"
  • "How do I troubleshoot my furling system that's jamming?"
  • "Plan a sailing route from Florida to the Bahamas in March"

Less effective:

  • "Tell me about boats" (too vague)
  • "What should I do?" (no context)
  • "Help me" (no specific question)

Error Handling

def ask_agent(question: str) -> str:
    try:
        response = client.chat.completions.create(
            model="yachtsy-agent",
            messages=[{"role": "user", "content": question}]
        )
        return response.choices[0].message.content

    except Exception as e:
        if "401" in str(e):
            return "Authentication failed. Check your API key."
        elif "429" in str(e):
            return "Rate limit exceeded. Try again later."
        else:
            return f"Error: {str(e)}"

Integration Examples

Vercel AI SDK

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Create a provider pointed at the Yachtsy endpoint
const yachtsy = createOpenAI({
  baseURL: 'https://api.yachtsy.ai/v1',
  apiKey: process.env.YACHTSY_API_KEY,
});

const { text } = await generateText({
  model: yachtsy('yachtsy-agent'),
  prompt: 'What are the key factors in choosing a cruising sailboat?',
});

LangChain

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="yachtsy-agent",
    openai_api_base="https://api.yachtsy.ai/v1",
    openai_api_key="your-api-key"
)

response = llm.invoke("Explain the different types of sailing rigs and their advantages")

Next Steps