The Lethal Trifecta: Why AI Coding Tools Without Governance Are a Risk
Three capabilities that individually seem reasonable but together create an attack surface. Understanding the compound risk of AI coding tools with broad access.
Published: 2026-03-21
Simon Willison, the researcher who coined the term "prompt injection," identified what he calls the Lethal Trifecta: three capabilities that, when combined in a single AI agent, create an attack surface that is difficult to secure through conventional means.
The three capabilities are:
- Access to private data
- Exposure to untrusted content
- The ability to take actions
Each capability is manageable on its own. Combined without proper governance, they create conditions where a single vulnerability can cascade into a full incident.
Why each capability matters
Access to private data
AI coding tools that operate on codebases have access to source code, configuration files, credentials, and potentially proprietary business logic. This access is necessary for the tools to function—the value comes from understanding the full context of a codebase.
The risk is that this access can be leveraged for exfiltration if the tool is compromised or manipulated.
Exposure to untrusted content
AI coding tools can ingest content from multiple sources: repository files, documentation, web search results, emails, messages, and support tickets. Some of this content comes from outside the organization's trust boundary.
When an AI agent processes untrusted content, any instructions embedded in that content become part of the agent's context. This is the fundamental vulnerability that prompt injection exploits.
The ability to take actions
AI coding tools can execute shell commands, write files, make API calls, and interact with external services. These actions have real-world consequences in production systems.
The combination of the first two capabilities with execution authority means an agent can potentially use legitimate access to exfiltrate data or perform unauthorized actions, triggered by injected instructions embedded in content it processes.
The compound effect
The risk compounds in agentic architectures. When an agent retrieves external data, processes it, and takes action based on that data, a single injected instruction can cascade through an entire workflow.
The Lethal Trifecta is not an argument against AI coding tools. It is a framework for understanding what governance controls need to address.
What Syndicate Code does
Syndicate Code is a governed local execution platform with its own AI planner. The control plane is the authoritative component—every action must flow through it before execution is authorized.
Syndicate Code addresses the third capability—action enforcement—through:
- Human approval required before execution: No AI-initiated action executes without explicit authorization
- Approval binding to exact arguments: Approvals are tied to SHA-256 digests of action parameters, not natural language summaries
- Immutable audit record: Every approval, denial, and execution is recorded with attribution
- Policy enforcement at the control plane layer: Governance is enforced outside the model runtime
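The approval-binding mechanism can be sketched in a few lines. This is an illustrative sketch, not Syndicate Code's actual API: the `ApprovalRegistry` class and its method names are hypothetical, but the core idea is real—approvals key on a SHA-256 digest of the exact action arguments, so a changed argument invalidates the approval.

```python
import hashlib
import json

def action_digest(tool: str, args: dict) -> str:
    """Canonicalize the exact action arguments and hash them.

    Sorting keys makes the serialization deterministic, so the
    digest changes whenever any argument changes.
    """
    canonical = json.dumps({"tool": tool, "args": args}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class ApprovalRegistry:
    """Holds approvals keyed by digest, not by a natural-language summary."""

    def __init__(self):
        self._approved: dict[str, str] = {}  # digest -> approver

    def approve(self, tool: str, args: dict, approver: str) -> str:
        digest = action_digest(tool, args)
        self._approved[digest] = approver
        return digest

    def authorize(self, tool: str, args: dict) -> bool:
        """Execution proceeds only if this exact digest was approved."""
        return action_digest(tool, args) in self._approved

registry = ApprovalRegistry()
registry.approve("shell", {"cmd": "ls -la"}, approver="alice")

assert registry.authorize("shell", {"cmd": "ls -la"})        # exact match: allowed
assert not registry.authorize("shell", {"cmd": "rm -rf /"})  # different args: blocked
```

Binding to a digest rather than a summary matters because an injected instruction could swap arguments after approval; under digest binding, the swapped action simply has no matching approval.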
Syndicate Code does not address data access or content ingestion directly. Those are concerns for tool configuration, network policy, and input validation. Syndicate Code addresses what happens after an AI proposes an action: ensuring that action is approved and recorded.
What Syndicate Code does not do
Syndicate Code is not a prompt injection detector. It does not analyze or filter content that AI tools ingest.
Syndicate Code is not a data loss prevention system. It does not monitor or restrict data access by AI tools.
Syndicate Code is not a runtime security monitor. It does not provide intrusion detection or threat response capabilities.
Syndicate Code does not integrate with external AI coding tools such as Cursor, Windsurf, or GitHub Copilot. It is a standalone tool with its own AI planner that connects directly to AI model providers.
These are separate concerns that require separate tooling. Syndicate Code addresses action governance—the layer where decisions become events.
The governance gap in typical AI tool deployments
Most AI coding tools are deployed with broad access by default. The tools need context to be useful, and restricting context limits capability. This creates the compound risk: tools with broad data access, exposed to untrusted content, with the ability to take actions—all without governance infrastructure.
The Lethal Trifecta is not inherent to AI coding tools. It is a consequence of deploying capable tools without governance infrastructure.
FAQ
Does Syndicate Code prevent prompt injection?
No. Prompt injection occurs at the input processing layer, before Syndicate Code's control plane is involved. Syndicate Code addresses what happens after an AI tool decides to take an action: ensuring that action is approved and recorded.
Can Syndicate Code work with tools that process untrusted content?
Syndicate Code is a standalone tool with its own AI planner. If an AI model connected to Syndicate Code's control plane ingests untrusted content and proposes an action, that action requires approval through Syndicate Code before execution. Syndicate Code does not integrate with external AI coding tools.
What is the minimum governance setup for AI coding tools?
The minimum effective governance setup includes: action approval before execution, immutable audit logging, and policy enforcement at a layer independent of the AI tool's runtime. Syndicate Code provides these controls through its control plane architecture. Additional controls—such as input validation, network segmentation, and least-privilege tool access—are context-dependent.
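One common way to make an audit log immutable in the sense used above is a hash chain, where each entry commits to the hash of the previous one, so any retroactive edit is detectable. A minimal sketch under that assumption (the class, field names, and event vocabulary are illustrative, not Syndicate Code's actual log format):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry's hash covers the previous
    entry's hash, so any retroactive edit breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._prev_hash = self.GENESIS

    def record(self, event: str, actor: str, detail: dict) -> dict:
        entry = {
            "event": event,        # e.g. "approval", "denial", "execution"
            "actor": actor,        # attribution: who made the decision
            "detail": detail,
            "ts": time.time(),
            "prev": self._prev_hash,
        }
        # Hash the entry body (sorted keys => deterministic serialization).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode("utf-8")
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; returns False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode("utf-8")
            ).hexdigest()
            if body["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

For example, after recording an approval and an execution, `verify()` passes; if anyone edits an earlier entry's `actor` field, the recomputed hash no longer matches and `verify()` returns `False`.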
Does Syndicate Code work with existing CI/CD pipelines?
Syndicate Code operates at the tool execution layer. It can be integrated into CI/CD workflows through its API. Specific integration patterns depend on the workflow architecture and the deployment's requirements.
How does Syndicate Code handle the Lethal Trifecta?
Syndicate Code addresses one leg of the trifecta: the ability to take actions. By enforcing human approval and immutable audit logging at the execution layer, Syndicate Code ensures that even if an AI tool with broad access is manipulated through prompt injection, the resulting actions are attributed and subject to human review before execution proceeds.
The other two legs—data access and content ingestion—remain the responsibility of the deployment configuration and the AI tool's own security controls.