API Reference

Responses

Understanding API response formats, status codes, and error handling for Yachtsy Agent.

Overview

All Yachtsy Agent API endpoints return responses in JSON format, following OpenAI's response structure for maximum compatibility.

Response Format

Successful Responses

All successful API responses follow this general structure:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "yachtsy-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Your sailing question response here..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 56,
    "completion_tokens": 31,
    "total_tokens": 87
  }
}
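
For reference, a minimal sketch of pulling the useful fields out of this structure in Python, assuming the body has already been parsed into a dict (for example with response.json()):

completion = response.json()  # parsed JSON body from a successful call

# The generated text lives in the first choice's message
answer = completion["choices"][0]["message"]["content"]

# Token accounting, e.g. for budget tracking
used = completion["usage"]["total_tokens"]
print(f"{used} tokens used: {answer[:80]}")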

Response Fields

Core Fields

Field | Type | Description
id | string | Unique identifier for the completion
object | string | The object type (e.g., "chat.completion")
created | integer | Unix timestamp of when the completion was created
model | string | The model used for the completion

Choices Array

Field | Type | Description
index | integer | The index of the choice in the list
message | object | The generated message
message.role | string | Always "assistant" for completions
message.content | string | The actual response content
finish_reason | string | Reason the model stopped generating

Usage Object

Field | Type | Description
prompt_tokens | integer | Number of tokens in the prompt
completion_tokens | integer | Number of tokens in the completion
total_tokens | integer | Total tokens used (prompt + completion)

Finish Reasons

Reason | Description
stop | Natural stopping point or provided stop sequence
length | Maximum token limit reached
content_filter | Content filtered due to policy violations
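
Checking finish_reason tells you whether the reply was cut short. A small sketch, given a parsed completion as above (raising max_tokens on truncation is one possible strategy, not something the API prescribes):

choice = completion["choices"][0]

if choice["finish_reason"] == "length":
    # Hit the token limit - the reply is likely truncated; consider a larger max_tokens
    print("Warning: response was cut off at the token limit")
elif choice["finish_reason"] == "content_filter":
    # Content was withheld by policy filtering
    print("Warning: response was filtered")
else:
    print(choice["message"]["content"])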

Streaming Responses

When using stream: true, responses are sent as Server-Sent Events (SSE):

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"yachtsy-agent","choices":[{"index":0,"delta":{"role":"assistant","content":"For"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"yachtsy-agent","choices":[{"index":0,"delta":{"content":" Caribbean"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1677652288,"model":"yachtsy-agent","choices":[{"index":0,"delta":{"content":" sailing"},"finish_reason":null}]}

data: [DONE]

Streaming Format

  • Each chunk contains a delta object with incremental content
  • The final chunk has finish_reason set and an empty delta
  • The stream ends with data: [DONE] (see the parsing sketch below)
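
A minimal client-side sketch for consuming this stream with the requests library. The endpoint URL is taken from the example later on this page, and api_key and payload are assumed to be defined elsewhere; error handling is omitted for brevity:

import json
import requests

with requests.post(
    "http://api.yachtsy.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},  # api_key defined elsewhere
    json={**payload, "stream": True},                # payload defined elsewhere
    stream=True,
) as response:
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip keep-alive blank lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # end of stream
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)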

HTTP Status Codes

Success Codes

Code | Status | Description
200 | OK | Request successful
201 | Created | Resource created successfully

Client Error Codes

Code | Status | Description
400 | Bad Request | Invalid request format or parameters
401 | Unauthorized | Invalid or missing API key
403 | Forbidden | Access denied for this resource
404 | Not Found | Endpoint or resource not found
422 | Unprocessable Entity | Request valid but cannot be processed
429 | Too Many Requests | Rate limit exceeded

Server Error Codes

Code | Status | Description
500 | Internal Server Error | Unexpected server error
502 | Bad Gateway | Upstream service error
503 | Service Unavailable | Service temporarily unavailable
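
As a rule of thumb, 429 and 5xx responses are worth retrying while other 4xx responses are not. A small helper sketch:

def is_retryable(status_code: int) -> bool:
    # 429 (rate limit) and 5xx (server-side) errors are usually transient
    return status_code == 429 or 500 <= status_code <= 599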

Error Response Format

Error responses follow OpenAI's error format:

{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}
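
A short sketch of unpacking this error envelope from a failed response, assuming the body parses as JSON:

error = response.json()["error"]
print(f"{error['type']} ({error['code']}): {error['message']}")
# error["param"] names the offending request parameter, when applicable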

Error Types

Type | Description
invalid_request_error | Invalid request parameters
authentication_error | Authentication failed
permission_error | Insufficient permissions
not_found_error | Resource not found
rate_limit_error | Rate limit exceeded
api_error | Server-side error

Common Error Codes

Code | Description
invalid_api_key | The provided API key is invalid
missing_api_key | No API key provided
model_not_found | Requested model doesn't exist
context_length_exceeded | Request exceeds maximum context length
rate_limit_exceeded | Too many requests in time window
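
Which of these codes to retry and which to surface to the user is an application decision; one possible mapping, shown only as a sketch:

RETRYABLE_CODES = {"rate_limit_exceeded"}
USER_FIXABLE_CODES = {"invalid_api_key", "missing_api_key", "context_length_exceeded"}

def classify_error(code: str) -> str:
    if code in RETRYABLE_CODES:
        return "retry"        # back off and try again
    if code in USER_FIXABLE_CODES:
        return "fix_request"  # correct credentials or shrink the prompt
    return "report"           # e.g. model_not_found - surface to the caller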

Response Examples

Chat Completion Success

{
  "id": "chatcmpl-yachtsy123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "yachtsy-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "For Caribbean sailing, I'd recommend the Jeanneau Sun Odyssey 409 or Beneteau Oceanis 40.1. Both offer excellent blue water capabilities with shallow drafts perfect for Caribbean anchorages. The Sun Odyssey 409 has superior upwind performance, while the Oceanis 40.1 offers more interior space."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 52,
    "total_tokens": 80
  }
}

Models List Success

{
  "object": "list",
  "data": [
    {
      "id": "yachtsy-agent",
      "object": "model",
      "created": 1677610602,
      "owned_by": "yachtsy"
    },
    {
      "id": "yachtsy-deep-research-agent",
      "object": "model", 
      "created": 1677610602,
      "owned_by": "yachtsy"
    }
  ]
}
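
A list response like this would typically come from a GET request to the models endpoint. The path below assumes the OpenAI-compatible /v1/models route, which is not confirmed elsewhere on this page:

import requests

response = requests.get(
    "http://api.yachtsy.ai/v1/models",  # assumed OpenAI-compatible path
    headers={"Authorization": f"Bearer {api_key}"},  # api_key defined elsewhere
)
for model in response.json()["data"]:
    print(model["id"], "-", model["owned_by"])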

Authentication Error

{
  "error": {
    "message": "Invalid API key provided. Please check your API key and try again.",
    "type": "authentication_error",
    "param": null,
    "code": "invalid_api_key"
  }
}

Rate Limit Error

{
  "error": {
    "message": "Rate limit exceeded. Please wait before making another request.",
    "type": "rate_limit_error",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}

Best Practices

Error Handling

Always check the HTTP status code and handle errors appropriately:

import requests

try:
    response = requests.post(
        "http://api.yachtsy.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload
    )
    response.raise_for_status()  # Raises HTTPError for bad responses
    data = response.json()
except requests.exceptions.HTTPError as e:
    if response.status_code == 401:
        print("Invalid API key")
    elif response.status_code == 429:
        print("Rate limit exceeded - please wait")
    else:
        error_data = response.json()
        print(f"API Error: {error_data['error']['message']}")

Retry Logic

Implement exponential backoff for rate limits and server errors:

import time
import random
import requests

def make_request_with_retry(url, payload, headers=None, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.post(url, json=payload, headers=headers)
            if response.status_code == 429 or response.status_code >= 500:
                # Rate limit or transient server error - back off and retry
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                time.sleep(wait_time)
                continue
            return response
        except requests.exceptions.RequestException:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
    raise RuntimeError(f"Request failed after {max_retries} retries")

Response Validation

Always validate the response structure before using it:

def validate_completion_response(response_data):
    required_fields = ['id', 'object', 'choices']
    for field in required_fields:
        if field not in response_data:
            raise ValueError(f"Missing required field: {field}")
    
    if not response_data['choices']:
        raise ValueError("No choices in response")
    
    choice = response_data['choices'][0]
    if 'message' not in choice or 'content' not in choice['message']:
        raise ValueError("Invalid choice format")
    
    return choice['message']['content']
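
Putting the pieces together, a possible end-to-end flow using the two helpers above (payload and api_key as in the earlier examples; illustrative only):

response = make_request_with_retry(
    "http://api.yachtsy.ai/v1/chat/completions",
    payload,
    headers={"Authorization": f"Bearer {api_key}"},
)
response.raise_for_status()  # surface any remaining non-retryable error
content = validate_completion_response(response.json())
print(content)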