Writing
Long-form writing on architecture decisions, constraints, and trade-offs.
2026-03-21
Why 'I Trusted the AI' Isn't an Audit Trail
Audit trails that record AI actions without human attribution don't satisfy governance requirements. Understanding what audit trails actually need to contain.
2026-03-21
What Syndicate Code Doesn't Claim (And Why That Matters)
Bounded honesty about Syndicate Code's governance guarantees: what the system does not do, and why explicit exclusions are part of the governance model.
2026-03-21
Understanding Trust Tiers in Syndicate Code
How Syndicate Code's four-tier trust system calibrates AI autonomy against governance requirements, and what each tier actually permits.
2026-03-21
For Regulated Industries: AI Coding Tools Need the Same Audit Trails as Financial Transactions
Why SOC2, HIPAA, and compliance frameworks demand attribution for AI-initiated actions, and what audit trails need to contain to satisfy governance requirements.
2026-03-21
The Lethal Trifecta: Why AI Coding Tools Without Governance Are a Risk
Three capabilities that individually seem reasonable but together create an attack surface. Understanding the compound risk of AI coding tools with broad access.
2026-03-21
Human-in-the-Loop Isn't a Bottleneck—It's Your Safety Net
The case for approval workflows in AI coding tools: why requiring human authorization before AI actions execute is a feature, not a friction point.
2026-03-21
The Executive Confidence Gap: 82% Think They're Protected, 14% Have Actual Controls
Survey data reveals a disconnect between executive confidence in AI governance and the actual security controls organizations have in place.
2026-03-21
Why Control-Plane Boundaries Matter More Than Agent Personality
A boundary-first argument for governable AI code execution. The distinction between a governed platform and an AI agent with governance wrappers is architectural, not cosmetic.
2026-03-21
What Happens When Your AI Coding Assistant Does Something You Didn't Approve
AI coding tools can modify their own instructions and arguments after approval is granted. Understanding how approval drift works and why binding approvals to prompts is insufficient.
2026-03-21
Approval Binding: Why Binding to Arguments Matters More Than Binding to Prompts
How Syndicate Code's argument-bound approval mechanism works and why cryptographic digest comparison closes the gap between approved intent and executed action.
2026-03-21
Cursor Has 5 CVEs. Windsurf Has FedRAMP. What Does Your AI Coding Tool Actually Do With Your Code?
Security audits of AI coding tools reveal a pattern: closed-source tools with known vulnerabilities, broad data access, and subprocessors you may not be aware of.
2026-03-21
AI Coding Tools Are Moving Faster Than Your Security Reviews
The gap between AI adoption speed and governance maturity is creating measurable risk. Here is what the data shows and why it matters for teams using AI coding tools.