# Providers
Providers are the model backends Syndicate Code routes inference requests to. Every provider is registered, governed by policy, and assigned to a trust class.
## Provider classes

| Class | Description |
|---|---|
| `local` | Models running in the local process or on localhost |
| `approved_hosted` | Hosted providers approved by policy for the active deployment |
| `restricted_hosted` | Hosted providers available only under specific policy conditions |
| `blocked` | Providers blocked by policy — invocations are denied |
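The deny/allow semantics of the four trust classes can be sketched in a few lines. This is an illustrative model only; the enum and function names are hypothetical and not part of Syndicate Code's actual API.

```python
from enum import Enum

class ProviderClass(Enum):
    # Hypothetical names mirroring the trust classes in the table above.
    LOCAL = "local"
    APPROVED_HOSTED = "approved_hosted"
    RESTRICTED_HOSTED = "restricted_hosted"
    BLOCKED = "blocked"

def may_invoke(provider_class: ProviderClass, conditions_met: bool = False) -> bool:
    """Return whether an invocation is permitted for a given trust class."""
    if provider_class is ProviderClass.BLOCKED:
        return False              # blocked providers are always denied
    if provider_class is ProviderClass.RESTRICTED_HOSTED:
        return conditions_met     # allowed only under specific policy conditions
    return True                   # local and approved_hosted are eligible
```

The key asymmetry: `blocked` is an unconditional deny, while `restricted_hosted` is a conditional allow that depends on the active policy's conditions.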
## Configured vs. catalog providers
The catalog lists all known providers by identity. The configured list shows providers registered in the active workspace with their current state.
```shell
syndicate provider list --output table
syndicate provider show anthropic
syndicate provider test local-ollama
```
## Trust and routing
Provider routing is governed by policy. The active policy determines which providers are eligible for each execution envelope. Provider selection is not left to the model — the control plane enforces routing rules.
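The control-plane enforcement described above amounts to a filter: for a given execution envelope, the policy determines the set of eligible providers before any model sees the request. A minimal sketch, assuming a hypothetical policy structure (none of these types or field names come from Syndicate Code itself):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Provider:
    name: str
    trust_class: str  # e.g. "local", "approved_hosted", "restricted_hosted", "blocked"

@dataclass(frozen=True)
class Policy:
    # Hypothetical shape: allowed trust classes per envelope name.
    allowed: dict = field(default_factory=dict)

def eligible_providers(policy: Policy, envelope: str, registered: list) -> list:
    """Control-plane filter: only policy-eligible providers may serve the envelope."""
    classes = policy.allowed.get(envelope, set())
    return [p for p in registered if p.trust_class in classes]

# Usage: a policy that permits only local and approved hosted providers
# for the "default" envelope.
registered = [
    Provider("local-ollama", "local"),
    Provider("anthropic", "approved_hosted"),
    Provider("unvetted-api", "blocked"),
]
policy = Policy(allowed={"default": {"local", "approved_hosted"}})
names = [p.name for p in eligible_providers(policy, "default", registered)]
# names == ["local-ollama", "anthropic"]
```

The point of the sketch is the direction of control: the model picks among providers the control plane has already filtered, never the other way around.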
## In this section
- Overview — provider identity, routing, trust class assignment
- Catalog — built-in provider catalog, supported models
- Custom Endpoints — registering custom OpenAI-compatible endpoints
- Local Inference — Ollama, llama.cpp, and other local backends