Notice: Syft Router is in ALPHA. A major update is planned for mid-November; stay tuned for updates.

Custom Chat Router Guide

Connect your existing language models to your router and make powerful chat services available to users of the SyftBox network. Maintain full control over conversation flows and model behavior.

What is a Custom Chat Router?

A Custom Chat Router is perfect for users who already have language models deployed and want to keep using their existing infrastructure. Instead of starting fresh, you're bringing your own AI capabilities to the SyftBox network while maintaining complete control over how conversations happen.

This approach is a great fit for any of the providers below.

Supported LLM Providers

🎯 Remotely-Hosted LLMs

GPT-3.5, Claude, and pretty much anything accessible via a REST API.

🏠 Local LLMs

Self-hosted models via Ollama, vLLM, or custom inference servers.

Creating a Custom Chat Router

Step 1: Generate Template

First, create the router template through the SyftBox dashboard:

  1. Dashboard → "Create Router"
  2. Name: my-custom-chat (choose a descriptive name)
  3. Type: Custom ✓ (for full control)
  4. Services: Chat Service
  5. Click "Create"

Step 2: Project Structure

Once created, you'll find this structure in your SyftBox apps directory:

my-custom-chat/
├── server.py           # Main FastAPI server (handles routing)
├── chat_service.py     # Template for your custom chat implementation
├── spawn_services.py   # Monitors service health and status
├── pyproject.toml      # Where you'll add your LLM provider dependencies
└── run.sh              # Script that starts everything up

Step 3: Open Your Project

Navigate to your router directory and open it in your IDE:

cd ~/SyftBox/apps/my-custom-chat
cursor .  # or code . for VS Code

Implementation Example: OpenAI

Here's a complete example showing how to integrate OpenAI's GPT models:

1. Update chat_service.py

from typing import List, Optional
from uuid import uuid4

from openai import OpenAI

# ChatService, RouterConfig, Message, ChatResponse, ChatUsage,
# GenerationOptions, EmailStr, UserClient, and logger are provided by the
# generated router template; keep the template's existing imports for them.

class CustomChatService(ChatService):
    def __init__(self, config: RouterConfig):
        super().__init__(config)
        self.accounting_client: UserClient = self.config.accounting_client()
        logger.info(f"Initialized accounting client: {self.accounting_client}")
        logger.info("Initialized custom chat service")
        self.app_name = self.config.project.name
        
        # Initialize OpenAI client - this connects to your language model
        self.client = OpenAI(
            api_key=config.get('openai_api_key'),
            base_url=config.get('base_url', 'https://api.openai.com/v1')
        )
        # Set a default model in case users don't specify one
        self.default_model = config.get('model', 'gpt-3.5-turbo')
    
    def generate_chat(
        self,
        model: str,
        messages: List[Message],
        user_email: EmailStr,
        transaction_token: Optional[str] = None,
        options: Optional[GenerationOptions] = None,
    ) -> ChatResponse:
        # 1. Prepare the conversation for your language model
        payload = {
            "model": model or self.default_model,
            "messages": [{"role": msg.role, "content": msg.content} for msg in messages],
            "stream": False,
        }
        
        # Add any generation options the user specified
        if options:
            if options.temperature is not None:
                payload["temperature"] = options.temperature
            if options.top_p is not None:
                payload["top_p"] = options.top_p
            if options.max_tokens is not None:
                payload["max_tokens"] = options.max_tokens
            if options.stop_sequences:
                payload["stop"] = options.stop_sequences

        # 2. Handle the payment transaction if you've set pricing
        #    (self.pricing comes from your router's pricing configuration)
        query_cost = 0.0
        if self.pricing > 0 and transaction_token:
            with self.accounting_client.delegated_transfer(
                user_email,
                amount=self.pricing,
                token=transaction_token,
                app_name=self.app_name,
                app_ep_path="/chat",
            ) as payment_txn:
                response = self.client.chat.completions.create(**payload)
                # Only confirm payment if we got a valid response
                if response.choices:
                    payment_txn.confirm()
                query_cost = self.pricing
        else:
            # Free service, just make the request
            response = self.client.chat.completions.create(**payload)

        # 3. Convert response to SyftBox format
        choice = response.choices[0]
        assistant_message = Message(
            role="assistant",
            content=choice.message.content,
        )

        # 4. Track token usage
        usage_data = response.usage
        usage = ChatUsage(
            prompt_tokens=usage_data.prompt_tokens,
            completion_tokens=usage_data.completion_tokens,
            total_tokens=usage_data.total_tokens,
        )

        # 5. Return ChatResponse (OpenAI ids like "chatcmpl-..." aren't valid
        #    UUIDs, so generate a fresh one for SyftBox)
        return ChatResponse(
            id=uuid4(),
            model=response.model,
            message=assistant_message,
            usage=usage,
            provider_info={"provider": "openai", "finish_reason": choice.finish_reason},
            cost=query_cost,
        )
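
One detail worth keeping in your own implementation: the delegated transfer is opened before the LLM call and confirmed only after a non-empty response arrives, so users are never charged for failed requests.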

2. Add Dependencies

Update your pyproject.toml:

[project]
dependencies = [
    "openai>=1.0.0",      # For OpenAI
    "requests>=2.28.0",    # For HTTP requests
    # Or for other providers:
    # "anthropic>=0.25.0",        # Anthropic Claude
    # "google-generativeai>=0.3.0", # Google Gemini
]
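
After editing pyproject.toml, make sure the new dependencies are actually installed. run.sh typically takes care of this when the router starts, but depending on your setup you may need to run your package manager manually (for example pip install -e . or uv sync).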

3. Configure Your Service

Set up environment variables in a .env file:

# OpenAI Configuration
OPENAI_API_KEY=sk-your-key-here
MODEL_NAME=gpt-4
BASE_URL=https://api.openai.com/v1

# Router Settings
ROUTER_NAME=my-custom-chat
ROUTER_PORT=8001
LOG_LEVEL=INFO

🔒 Security Note

Never commit your .env file to version control. Use .env.example as a template for others.
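
If your template doesn't already load these variables for you, here is a minimal sketch of reading the .env file from Python (this assumes the python-dotenv package, which you'd add to your dependencies):

import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

openai_api_key = os.getenv("OPENAI_API_KEY")
model_name = os.getenv("MODEL_NAME", "gpt-3.5-turbo")
base_url = os.getenv("BASE_URL", "https://api.openai.com/v1")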

4. Implement Service Monitoring

Update spawn_services.py to monitor your language model connection:

def spawn_custom_chat(self):
    """Monitor external LLM service health"""
    logger.info("💬 Setting up custom chat service...")
    try:
        # Add health checks for your language model
        # For example: test the API connection
        import os

        import openai

        client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
        
        # Test the connection with a simple request
        test_response = client.models.list()
        
        if test_response:
            # Placeholder URL; point this at your actual local service if you run one
            self.custom_chat_url = "http://localhost:12345"
            self.config.state.update_service_state(
                "chat",
                status=RunStatus.RUNNING,
                started_at=datetime.now().isoformat(),
                url=self.custom_chat_url,
            )
            return True

        # Health check returned nothing; treat setup as failed
        return False

    except Exception as e:
        logger.error(f"❌ Custom chat service setup failed: {e}")
        self.config.state.update_service_state(
            "chat", status=RunStatus.FAILED, error=str(e)
        )
        return False
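
client.models.list() is a cheap authenticated request, so it doubles as an API-key check. Swap in whatever ping your provider supports; for a local Ollama server, for instance, a GET to its /api/tags endpoint serves the same purpose.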

Alternative Implementations

Anthropic Claude

from anthropic import Anthropic

class CustomChatService(ChatService):
    def __init__(self, config: RouterConfig):
        super().__init__(config)
        self.client = Anthropic(
            api_key=config.get('anthropic_api_key')
        )
        self.default_model = config.get('model', 'claude-3-5-sonnet-latest')

    def generate_chat(self, model, messages, user_email, transaction_token=None, options=None):
        # Anthropic requires max_tokens, and system messages go in a separate
        # `system` parameter, so pass only user/assistant turns here
        response = self.client.messages.create(
            model=model or self.default_model,
            messages=[
                {"role": m.role, "content": m.content}
                for m in messages
                if m.role != "system"
            ],
            max_tokens=1024,
        )
        # Build Message/ChatUsage/ChatResponse as in the OpenAI example;
        # the reply text is in response.content[0].text

Local Models (Ollama)

import ollama

class CustomChatService(ChatService):
    def __init__(self, config: RouterConfig):
        super().__init__(config)
        self.client = ollama.Client(
            host=config.get('ollama_host', 'http://localhost:11434')
        )
        self.default_model = config.get('model', 'llama2')
    
    def generate_chat(self, model, messages, user_email, transaction_token=None, options=None):
        response = self.client.chat(
            model=model or self.default_model,
            messages=[{"role": m.role, "content": m.content} for m in messages],
        )
        # Build Message/ChatUsage/ChatResponse as in the OpenAI example;
        # the reply text is in response["message"]["content"]
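
In both cases only __init__ and generate_chat change; the options handling, payment flow, and ChatResponse construction from the OpenAI example carry over unchanged.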

Testing Your Router

Via Dashboard

  1. Go to the router list in your dashboard
  2. Select your router from the dropdown
  3. Send test messages:
    • "Hello, how are you?"
    • "Tell me a joke"
    • "Explain quantum computing"
  4. Verify responses from your language model

Via API

curl -X POST https://syftbox.net/api/v1/send/ \
  -H "Content-Type: application/json" \
  -H "x-syft-from: user@example.com" \
  -d '{
    "message": "What is machine learning?",
    "model": "gpt-4",
    "temperature": 0.7,
    "suffix-sender": "true",
    "x-syft-url": "syft://<your_email>/app_data/my_custom_chat/rpc/chat"
  }'
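
The same request from Python, using the requests package already listed in pyproject.toml (the endpoint, headers, and payload mirror the curl call above; substitute your own email in the syft URL):

import requests

payload = {
    "message": "What is machine learning?",
    "model": "gpt-4",
    "temperature": 0.7,
    "suffix-sender": "true",
    "x-syft-url": "syft://<your_email>/app_data/my_custom_chat/rpc/chat",
}

response = requests.post(
    "https://syftbox.net/api/v1/send/",
    headers={"x-syft-from": "user@example.com"},
    json=payload,  # json= also sets the Content-Type header for us
)
print(response.status_code, response.text)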

Publishing Your Router

Once your router is working perfectly:

  1. Test thoroughly: Ensure reliable responses across different queries
  2. Add metadata: Go to router details → "Publish"
  3. Set pricing: Configure per-conversation pricing
  4. Publish: Make available to network users

Example metadata:

Summary: "Advanced AI chat with GPT-4"
Description: "Powered by GPT-4 with custom prompts and RAG"
Tags: ["chat", "ai", "gpt-4", "custom"]
Pricing:
  Chat: $0.02 per request

Monitoring & Troubleshooting

View Logs

tail -f ~/SyftBox/apps/my-custom-chat/logs/app.log

Common Issues