About

Governance for AI code execution.

AI Syndicate Inc.

17232063 CANADA INC. — Toronto, ON, Canada

aisyndicate.io →

The moment that started it

We were reviewing an incident where an AI coding tool had made unintended changes to a production database. The tool had logged everything—but the logs showed a prompt, not what actually executed. The approval was buried in a chat thread. No one could say with certainty what the tool had actually done.

This wasn't a failure of the tool. It was a failure of attribution. The tool didn't know what it was supposed to record. The humans didn't know what they were supposed to verify. The gap between “what was asked” and “what happened” was unbridgeable with the existing system.

What we learned

Prevention is hard. You can't anticipate every dangerous action an AI might take. But attribution is tractable: record the exact action, who approved it, and what executed.

The key insight was that approval needs to bind to the specific action, not the natural language request. “Run a SQL query” is not the same as “SELECT * FROM users”—but current systems approve the former and hope for the latter.

So we built Syndicate Code to make approvals precise: bind to the exact normalized action arguments, verify before execution, record what actually happened.
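The binding idea above can be sketched in a few lines. This is a minimal illustration, not Syndicate Code's actual implementation: the function name and argument shape are hypothetical, but the technique — canonicalize the action's arguments and hash them so the approval refers to exactly one action — is the one described.

```python
import hashlib
import json

def action_fingerprint(action_type: str, args: dict) -> str:
    """Normalize an action's arguments and hash them, so an approval
    binds to this exact action rather than the prose that requested it."""
    # Canonical JSON: sorted keys, fixed separators, no whitespace variance.
    canonical = json.dumps({"type": action_type, "args": args},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# "Run a SQL query" is never approved; a specific statement is:
fp = action_fingerprint("sql.query", {"statement": "SELECT * FROM users"})
```

Because the fingerprint is computed over normalized arguments, two requests that differ only in formatting bind to the same approval, while any semantic change produces a different fingerprint and requires a fresh approval.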

What we believe

Governance is a design problem. You can't bolt on compliance after the fact. The system has to be designed from the ground up to make the right thing the easy thing.

Claims need boundaries. “Our system is secure” is not a claim—it's a hope. Real claims have defined scope, explicit exclusions, and stated failure modes.

Evidence matters more than assertions. Anyone can say their system is safe. What matters is whether the evidence backs it up—and whether that evidence is verifiable.

The product

Syndicate Code is a control plane that sits between AI coding tools and code execution. It requires human approval for actions and maintains an attributable event record.

It's not a security product. It's an audit and governance layer. It records what happened and who approved it—whether you act on that information depends on your processes.

The system makes explicit claims with defined boundaries. Each claim has:

  • What it guarantees
  • What it applies to (scope)
  • What it doesn't cover (exclusions)
  • How it can fail (failure mode)
  • Where to find the proof
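The five fields above map naturally onto a record type. A minimal sketch — the class and field names are illustrative, and the example claim paraphrases statements made elsewhere on this page rather than quoting the product's real claim registry:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """One bounded claim: what it guarantees, and where that ends."""
    guarantee: str     # what it guarantees
    scope: str         # what it applies to
    exclusions: str    # what it doesn't cover
    failure_mode: str  # how it can fail
    evidence: str      # where to find the proof

example = Claim(
    guarantee="Execution is denied unless the action matches an approval",
    scope="actions routed through the control plane",
    exclusions="actions taken outside the control plane",
    failure_mode="misconfiguration that bypasses the control plane",
    evidence="the event record for the run in question",
)
```

Making the record `frozen` reflects the underlying stance: a claim's boundaries are part of the claim, not something to amend after the fact.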

What it looks like in practice

When a developer runs syndicate dev and makes a request like “refactor the authentication module,” the system proposes a specific action: edit files X, Y, Z. Before any code changes, a human approves.

If the approved action is a file edit, the approval binds to the exact diff—not the natural language request. If the model later tries to modify different files or the same files differently, execution is denied.
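The deny path can be illustrated with a small gate, assuming (hypothetically) that each approval stores a digest of the exact diff it covered:

```python
import hashlib

def diff_digest(diff: str) -> str:
    """Hash the full diff text; any changed byte yields a new digest."""
    return hashlib.sha256(diff.encode()).hexdigest()

def execute_if_approved(proposed_diff: str, approved_digest: str) -> bool:
    """Allow execution only when the proposed diff is byte-identical to
    the one the human approved; anything else is denied."""
    return diff_digest(proposed_diff) == approved_digest

# The approval binds to this exact diff, not to "refactor the auth module":
approved = diff_digest("--- a/auth.py\n+++ b/auth.py\n+# refactored\n")
```

A model that later proposes a different edit — even to the same files — produces a different digest, so the gate returns false and execution is denied rather than silently drifting from what was approved.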

The resulting event record shows: the actor, the approved diff, what actually executed, and who approved it. This record is queryable, exportable, and auditable.
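The four fields named above suggest the shape of one event. A sketch with hypothetical names — the real record format is not shown here — using plain JSON export to stand in for "exportable":

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EventRecord:
    """One attributable event: who proposed it, what was approved,
    what actually executed, and who signed off."""
    actor: str          # the tool or agent that proposed the action
    approved_diff: str  # the exact diff the approval bound to
    executed_diff: str  # what actually ran
    approver: str       # the human who approved it

record = EventRecord(
    actor="coding-agent",
    approved_diff="--- a/auth.py\n+++ b/auth.py\n+# refactored\n",
    executed_diff="--- a/auth.py\n+++ b/auth.py\n+# refactored\n",
    approver="alice",
)

# Exportable as JSON for downstream audit tooling:
exported = json.dumps(asdict(record))
```

Keeping approved and executed diffs as separate fields is the point: an auditor can compare them directly instead of trusting that approval implied execution.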

Teams use this to verify policy enforcement in CI pipelines, demonstrate approval workflows during security reviews, and reconstruct incident timelines from immutable logs.
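A CI-style policy check over such records might look like the following. This is a hypothetical sketch of the idea, not the product's CI integration; it treats events as plain dicts with the fields described above:

```python
def verify_run(events: list[dict]) -> bool:
    """CI-style check: every executed action must match its approved
    diff exactly and must name a human approver."""
    return all(
        e["executed_diff"] == e["approved_diff"] and bool(e.get("approver"))
        for e in events
    )
```

A pipeline gate like this turns the event record into an enforcement point: a run with any unapproved or drifted action fails the build instead of surfacing later in an incident review.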

Work with us

For partnership inquiries, pilot programs, or technical discussions.

Get in touch →