Data sources, model endpoints, and service management APIs
Object-oriented interface for interacting with SyftBox services (both data and model services).
name: str
The service's display name without the datasite prefix.
datasite: str
The email address of the datasite (data node) that hosts and owns this service. This identifies who provides the service on the network, which matters for provenance, trust relationships, and payment routing.
full_name: str
The complete service identifier in the format datasite/name.
supports_chat: bool
Boolean flag indicating whether this service provides chat/conversation capabilities.
supports_search: bool
Boolean flag indicating whether this service provides search/retrieval capabilities.
cost: float
The cost per request for using this service, expressed in USD as a floating-point number.
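Taken together, these attributes describe a service's identity and capabilities. As a rough sketch (the `ServiceInfo` dataclass below is an illustrative stand-in, not the real class returned by `client.load_service`), `full_name` is simply the datasite and name joined by a slash:

```python
from dataclasses import dataclass

@dataclass
class ServiceInfo:
    # Illustrative mirror of the documented attributes.
    name: str
    datasite: str
    supports_chat: bool
    supports_search: bool
    cost: float

    @property
    def full_name(self) -> str:
        # full_name is the "datasite/name" identifier.
        return f"{self.datasite}/{self.name}"

svc = ServiceInfo(
    name="openmined-about",
    datasite="irina@openmined.org",
    supports_chat=False,
    supports_search=True,
    cost=0.0,
)
print(svc.full_name)  # irina@openmined.org/openmined-about
```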
search(message: str, **kwargs) -> SearchResponse
Perform a search against this service's indexed data synchronously. The call blocks until the search completes, making it ideal for interactive applications that need immediate access to results. Only available on services with search capabilities.
# Load a data service
openmined_data = client.load_service("irina@openmined.org/openmined-about")
# Basic search
results = openmined_data.search("attribution-based control")
# Search with parameters
results = openmined_data.search(
    message="machine learning",
    limit=10,
    similarity_threshold=0.8
)
# Access results
for result in results.results:
    print(result.content, result.score)
chat(messages: List[Dict[str, str]], **kwargs) -> ChatResponse
Send conversational messages to this AI service and receive a response synchronously, blocking until the model finishes generating. This method suits real-time chat applications and interactive assistants that require immediate responses. Only available on services with chat capabilities.
Each message is a dict with role and content keys.
# Load a chat service
claude_llm = client.load_service("aggregator@openmined.org/claude-sonnet-3.5")
# Basic chat
response = claude_llm.chat([
    {"role": "user", "content": "What's up bro?"}
])
# Chat with parameters
response = claude_llm.chat(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Explain quantum computing"}
    ],
    temperature=0.7,
    max_tokens=200
)
print(response.content)
async search_async(message: str, **kwargs) -> SearchResponse
Perform a semantic search against this service's data asynchronously, allowing your application to continue processing while the search executes in the background. This method is ideal for applications that need to handle multiple concurrent operations or when building responsive UIs that shouldn't block on search operations.
# Async search
results = await openmined_data.search_async(
    message="attribution-based control",
    limit=10
)
async chat_async(messages: List[Dict[str, str]], **kwargs) -> ChatResponse
Send conversational messages to this AI service asynchronously, enabling non-blocking communication that allows your application to handle other tasks while waiting for the response. This approach is essential for scalable applications that need to manage multiple concurrent conversations or long-running model interactions.
# Async chat
response = await claude_llm.chat_async([
    {"role": "user", "content": "Hello!"}
])
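Because both async variants return awaitables, several requests can be issued concurrently with asyncio.gather. The helper below is a minimal sketch; it assumes chat_service and data_service are loaded service objects like those in the earlier examples:

```python
import asyncio

async def run_queries(chat_service, data_service):
    # Start both requests at once; neither blocks the other.
    chat_task = chat_service.chat_async([{"role": "user", "content": "Hello!"}])
    search_task = data_service.search_async(message="attribution", limit=5)
    # Wait for both to finish and return the pair of responses.
    chat_response, search_response = await asyncio.gather(chat_task, search_task)
    return chat_response, search_response
```

Total wall time is roughly that of the slower request, rather than the sum of both.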
show() -> None
Display detailed service information in an interactive widget optimized for Jupyter notebooks, showing capabilities, costs, endpoints, and usage examples.
# Display service details
service.show()
show_example() -> None
Display practical, runnable code examples tailored to this service's capabilities, so you can get started quickly.
# Show usage examples
service.show_example()
Response objects returned by service operations.
ChatResponse
content: str
The actual text content generated by the AI model in response to your chat messages, containing the synthesized answer or completion.
cost: float
The total cost incurred for this chat request.
usage: ChatUsage
Detailed token consumption metrics including input tokens, output tokens, and total tokens used during the chat interaction.
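As an illustration, a ChatUsage-like record can be inspected as below. The field names (input_tokens, output_tokens, total_tokens) are assumptions based on the description above and may differ in the real class:

```python
from collections import namedtuple

# Hypothetical shape of ChatUsage; check the real class for exact field names.
ChatUsage = namedtuple("ChatUsage", ["input_tokens", "output_tokens", "total_tokens"])

usage = ChatUsage(input_tokens=120, output_tokens=80, total_tokens=200)
# The total should account for both prompt and completion tokens.
assert usage.total_tokens == usage.input_tokens + usage.output_tokens
print(f"{usage.total_tokens} tokens used")
```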
SearchResponse
results: List[DocumentResult]
An ordered list of search results containing document content and relevance scores, ranked by similarity to your query. Each result includes the matched text, similarity score, and metadata.
query: str
The exact search query string that was submitted to the service.
cost: float
The total cost charged for performing this search operation.
DocumentResult
content: str
The actual text content from the matched document, which may be a snippet or the full document depending on the service configuration.
score: float
A normalized similarity score between 0.0 and 1.0 indicating how closely this result matches your search query, with 1.0 being a perfect match. Use this score to filter results, implement relevance thresholds, or rank results for display to users.
Note that distance metrics may differ across nodes on the network, and services may use different embedding models. As a result, scores are generally not comparable across services.
metadata: Dict[str, Any]
A dictionary containing additional metadata about the source document, such as filename, creation date, datasite origin, document type, and other contextual information.
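The score field makes it easy to apply a per-service relevance threshold. A minimal sketch, assuming results are dicts with content and score keys as documented above (and keeping in mind that scores from different services are not directly comparable):

```python
def filter_results(results, threshold=0.8):
    """Keep only results whose similarity score meets the threshold,
    highest-scoring first. Apply per service, since score scales vary."""
    kept = [r for r in results if r["score"] >= threshold]
    return sorted(kept, key=lambda r: r["score"], reverse=True)

hits = [
    {"content": "doc a", "score": 0.91},
    {"content": "doc b", "score": 0.55},
    {"content": "doc c", "score": 0.84},
]
print(filter_results(hits))  # keeps "doc a" then "doc c"
```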