Providers

Providers are the model backends Syndicate Code routes inference requests to. Every provider is registered, governed by policy, and assigned to a trust class.

Provider classes

| Class             | Description                                                          |
|-------------------|----------------------------------------------------------------------|
| local             | Models running in the local process or on localhost                  |
| approved_hosted   | Hosted providers approved by policy for the active deployment        |
| restricted_hosted | Hosted providers available only under specific policy conditions     |
| blocked           | Providers blocked by policy — invocations are denied                 |
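
Trust classes are assigned by policy rather than hard-coded per provider. As a minimal sketch — the field names and provider identifiers below are illustrative, not the actual Syndicate Code policy schema — a policy might map providers to classes like this:

```yaml
# Hypothetical policy fragment: provider trust-class assignments.
# All keys and provider names here are illustrative assumptions;
# consult the policy reference for the real schema.
providers:
  local-ollama:
    class: local              # runs on localhost; no network egress
  anthropic:
    class: approved_hosted    # approved for the active deployment
  community-proxy:
    class: restricted_hosted  # eligible only when conditions hold
  unknown-endpoint:
    class: blocked            # invocations are denied outright
```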

Configured vs. catalog providers

The catalog lists all known providers by identity. The configured list shows providers registered in the active workspace with their current state.

syndicate provider list --output table   # list configured providers and their state
syndicate provider show anthropic        # show one provider's identity and configuration
syndicate provider test local-ollama     # check that a provider is reachable

Trust and routing

Provider routing is governed by policy. The active policy determines which providers are eligible for each execution envelope. Provider selection is not left to the model — the control plane enforces routing rules.
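
To make the envelope/eligibility relationship concrete, here is a hedged sketch of what routing rules could look like. The envelope names and keys are assumptions for illustration, not the shipped configuration format:

```yaml
# Hypothetical routing rules: which provider classes each execution
# envelope may route to. Names and structure are illustrative only.
routing:
  default:
    allow: [local, approved_hosted]
  sandbox:
    allow: [local, approved_hosted, restricted_hosted]
  airgapped:
    allow: [local]   # hosted providers are never eligible here
```

The key point the sketch illustrates is from the text above: the control plane evaluates rules like these at request time, so the model itself never chooses a provider.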

In this section

  • Overview — provider identity, routing, trust class assignment
  • Catalog — built-in provider catalog, supported models
  • Custom Endpoints — registering custom OpenAI-compatible endpoints
  • Local Inference — Ollama, llama.cpp, and other local backends