Yachtsy Models

Deep Research Agent

Comprehensive research and analysis with real-time streaming for market studies and detailed investigations.

Overview

The Deep Research Agent is a direct LLM model that provides comprehensive, multi-faceted analysis with structured output, including executive summaries, market context, technical deep dives, and actionable recommendations.

Real-time Streaming: See text appear as the AI researches and writes - true token-by-token streaming for immediate feedback

Key Features

  • Token-by-token Streaming: Real-time text generation as the model writes
  • Structured Analysis: 6-section comprehensive reports
  • Market Intelligence: Industry trends and competitive positioning
  • Technical Deep Dives: Detailed specifications and performance data
  • Actionable Insights: Practical recommendations and next steps

When to Use

Choose yachtsy-deep-research-agent when you need:

  • Comprehensive market analysis
  • Competitive comparisons between boat models
  • Investment and purchasing decision support
  • Detailed technical evaluations
  • Industry trend analysis

Response Structure

Every response includes these sections:

  1. Executive Summary - Key findings and recommendations
  2. Detailed Analysis - Comprehensive data and insights
  3. Market Context - Industry trends and positioning
  4. Technical Deep Dive - Specifications and performance
  5. Research Sources & Methodology - Data validation and confidence levels
  6. Recommendations - Actionable next steps
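
Once a streamed response has been collected into a single string, the six sections can be pulled apart programmatically. A minimal sketch, assuming the model delimits sections with markdown `##` headings as in the example response structure shown later on this page; adjust the pattern if your responses differ:

```python
import re

def split_report_sections(report: str) -> dict:
    """Split a completed research report into its sections.

    Assumes the model delimits sections with markdown '## ' headings;
    adjust the pattern if your responses use a different format.
    """
    sections = {}
    current = None
    for line in report.splitlines():
        heading = re.match(r"^##\s+(.+)", line)
        if heading:
            current = heading.group(1).strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {name: "\n".join(body).strip() for name, body in sections.items()}

report = "## Executive Summary\nKey findings...\n\n## Recommendations\n1. Next steps..."
print(split_report_sections(report)["Recommendations"])  # → 1. Next steps...
```

This makes it easy to, say, show only the Executive Summary in a UI while archiving the full report.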

Usage Examples

Market Analysis

from openai import OpenAI

client = OpenAI(
    base_url="https://api.yachtsy.ai/v1",
    api_key="your-api-key"
)

# Comprehensive market comparison with streaming
response = client.chat.completions.create(
    model="yachtsy-deep-research-agent",
    messages=[{
        "role": "user",
        "content": "Comprehensive analysis of Catalina 34 vs Hunter 34 vs Beneteau 343. Include market positioning, resale value, and ownership costs."
    }],
    stream=True,
    max_tokens=4000
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Investment Analysis

import OpenAI from 'openai';

const client = new OpenAI({
    baseURL: 'https://api.yachtsy.ai/v1',
    apiKey: 'your-api-key'
});

// Investment analysis with streaming
const analysisStream = await client.chat.completions.create({
    model: 'yachtsy-deep-research-agent',
    messages: [{
        role: 'user',
        content: 'Investment analysis for purchasing a 40ft cruising sailboat. Budget $200k-300k. Include depreciation, maintenance costs, and best value models.'
    }],
    stream: true,
    max_tokens: 4000
});

for await (const chunk of analysisStream) {
    if (chunk.choices[0]?.delta?.content) {
        process.stdout.write(chunk.choices[0].delta.content);
    }
}

Technical Evaluation

curl -N -X POST https://api.yachtsy.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "yachtsy-deep-research-agent",
    "messages": [{
      "role": "user",
      "content": "Deep technical analysis of Bavaria Cruiser 37 vs Jeanneau Sun Odyssey 379. Focus on build quality, performance, and long-term reliability."
    }],
    "stream": true,
    "max_tokens": 4000
  }'

Streaming Performance

  • Latency: ~50ms per token
  • Experience: Text appears as the AI generates it
  • Chunk Size: Individual tokens and words
  • Response Time: 15-30 seconds total (streaming starts immediately)
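
To verify these figures in your own environment, record the arrival time of each chunk and derive time-to-first-token and throughput. A small helper, shown here with synthetic timestamps standing in for `time.time()` readings:

```python
def streaming_stats(timestamps):
    """Compute time-to-first-token and throughput from chunk arrival times.

    `timestamps` are seconds, with the request start time at index 0 and
    one entry per received chunk after that.
    """
    start, first, last = timestamps[0], timestamps[1], timestamps[-1]
    ttft = first - start
    n_chunks = len(timestamps) - 1
    throughput = n_chunks / (last - start) if last > start else float("inf")
    return ttft, throughput

# Synthetic arrival times standing in for time.time() readings:
ttft, tps = streaming_stats([0.0, 0.4, 0.45, 0.5, 0.55])
print(f"TTFT: {ttft:.2f}s, ~{tps:.0f} chunks/sec")
```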

Streaming Example

import time

start_time = time.time()
response_text = ""

stream = client.chat.completions.create(
    model="yachtsy-deep-research-agent",
    messages=[{
        "role": "user",
        "content": "Research the sailboat market trends for 2024-2025"
    }],
    stream=True
)

print("Research starting...")
for chunk in stream:
    if chunk.choices[0].delta.content:
        token = chunk.choices[0].delta.content
        response_text += token
        print(token, end="", flush=True)

print(f"\n\nCompleted in {time.time() - start_time:.1f} seconds")
print(f"Total response: {len(response_text)} characters")

Advanced Usage

Research Methodology

# Request specific research methodology
response = client.chat.completions.create(
    model="yachtsy-deep-research-agent",
    messages=[{
        "role": "system",
        "content": "Conduct thorough research using multiple data sources. Include confidence levels for each claim and cite specific sources where possible."
    }, {
        "role": "user",
        "content": "Research the reliability and common issues of Catalina Yachts from 1985-2000. Include owner feedback and surveyor reports."
    }],
    stream=True,
    temperature=0.1  # Low temperature for factual research
)

Competitive Analysis

# Multi-model comparison with market context
response = client.chat.completions.create(
    model="yachtsy-deep-research-agent",
    messages=[{
        "role": "user",
        "content": """
        Comprehensive competitive analysis:
        
        Models to compare:
        - Catalina 34 (1985-2001)
        - Hunter 34 (1985-1995) 
        - Beneteau 343 (2005-2012)
        
        Analysis areas:
        - Build quality and construction
        - Sailing performance and handling
        - Interior layout and comfort
        - Market pricing and availability
        - Owner satisfaction and community
        - Resale value trends
        
        Provide quantitative data where available and confidence levels for each assessment.
        """
    }],
    stream=True,
    max_tokens=4000
)

Response Quality

High-Quality Outputs

The Deep Research Agent provides:

  • Quantitative Data: Specific numbers, measurements, and statistics
  • Comparative Analysis: Side-by-side comparisons with tables
  • Confidence Levels: High/Medium/Low confidence for different claims
  • Source References: Multiple data sources and cross-validation
  • Market Context: Industry trends and future outlook
  • Actionable Insights: Practical recommendations and next steps
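
The confidence annotations can also be harvested for downstream filtering or auditing. A sketch assuming claims are annotated inline as `(High confidence - source)`, which matches the example report format below; real responses may vary:

```python
import re

def extract_confidence_claims(text: str):
    """Collect (claim, level, source) tuples from inline annotations.

    Assumes claims are annotated as '(High confidence - source)';
    real responses may use a different format.
    """
    pattern = re.compile(r"^(.*)\((High|Medium|Low) confidence\s*-\s*([^)]*)\)\s*$")
    claims = []
    for line in text.splitlines():
        match = pattern.match(line.strip())
        if match:
            claims.append((match.group(1).strip(), match.group(2), match.group(3).strip()))
    return claims

sample = "**Price Range**: $35,000-$65,000 (Medium confidence - YachtWorld analysis)"
print(extract_confidence_claims(sample))
```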

Example Response Structure

## Executive Summary
The Catalina 34 represents excellent value in the 34-foot cruising market...

## Detailed Analysis
**Production Data**: 1,800+ units built (High confidence - Catalina records)
**Price Range**: $35,000-$65,000 (Medium confidence - YachtWorld analysis)
...

## Market Context
Mid-size cruisers (32-36ft) remain in high demand...

## Technical Deep Dive
| Specification | Catalina 34 | Hunter 34 | Beneteau 343 |
|---------------|-------------|-----------|--------------|
| LOA | 34'6" | 34'5" | 34'9" |
...

## Research Sources & Methodology
- Catalina Yachts production records (High confidence)
- SailboatData.com specifications (High confidence)  
- YachtWorld pricing analysis (Medium confidence)
...

## Recommendations
1. For budget-conscious buyers: Prioritize 1990+ models...
2. For performance: Consider wing keel option...

Integration with Other Models

The Deep Research Agent works well in combination:

# Use Deep Research for analysis, then Listings for current inventory
research_response = client.chat.completions.create(
    model="yachtsy-deep-research-agent",
    messages=[{"role": "user", "content": "Research best 35ft cruising sailboats under $100k"}]
)
research_text = research_response.choices[0].message.content

# Then search current inventory for the models identified in the research
listings_response = client.chat.completions.create(
    model="yachtsy-listings-agent",
    messages=[{"role": "user", "content": f"Find boats under $100k matching the models recommended here:\n{research_text}"}]
)

Performance Tips

Optimize for Speed

  • Use stream=True for immediate feedback
  • Set max_tokens=4000 for comprehensive analysis
  • Use temperature=0.1 for factual research

Query Optimization

  • Be specific about analysis scope
  • Include relevant criteria and constraints
  • Ask for confidence levels and sources
  • Request structured output formats
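
These tips can be wrapped in a small prompt builder. A hypothetical convenience helper, not part of the API (the endpoint accepts any free-form prompt):

```python
def build_research_query(topic, models=None, criteria=None, constraints=None):
    """Assemble a structured research prompt from the tips above.

    A hypothetical convenience helper, not part of the API -- the
    endpoint accepts any free-form prompt.
    """
    parts = [f"Comprehensive analysis: {topic}."]
    if models:
        parts.append("Models to compare: " + ", ".join(models) + ".")
    if criteria:
        parts.append("Analysis areas: " + "; ".join(criteria) + ".")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    parts.append("Include confidence levels and cite sources for each claim.")
    return " ".join(parts)

query = build_research_query(
    "35ft cruising sailboats",
    models=["Catalina 34", "Hunter 34"],
    constraints=["budget under $100k"],
)
print(query)
```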

Common Use Cases

  1. Pre-purchase Research: "Research Catalina 34 reliability and common issues"
  2. Market Analysis: "Analyze the 35-40ft cruising sailboat market trends"
  3. Investment Decisions: "Compare Bavaria 37 vs Jeanneau 379 for charter investment"
  4. Technical Evaluation: "Deep dive into Oyster 485 build quality and performance"

Error Handling

import openai

try:
    response = client.chat.completions.create(
        model="yachtsy-deep-research-agent",
        messages=[{"role": "user", "content": "research query"}],
        stream=True
    )

    for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")

except openai.APITimeoutError:
    print("Research query timed out. Try breaking it into smaller parts.")
except openai.APIError as e:
    print(f"Research failed: {e}")
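
For transient failures such as rate limits or network blips, a simple retry with exponential backoff often helps. A generic sketch, demonstrated with a stand-in callable rather than a live API call:

```python
import time

def with_retries(make_request, attempts=3, base_delay=1.0):
    """Call `make_request`, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return make_request()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Stand-in callable that fails once before succeeding:
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return "ok"

print(with_retries(flaky_request, base_delay=0.01))  # → ok
```

In production you would pass a closure over `client.chat.completions.create(...)` instead of the stand-in.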

Pro Tip: The more specific your research criteria, the more detailed and actionable the analysis will be. Include budget ranges, intended use, and specific concerns for best results.