Every agent action in FabricZero passes through a constitutional architecture that encodes your values, tests decisions adversarially, and enforces governance boundaries — automatically. Here's what that looks like in practice.
You write your organisation's VALUES, HEURISTICS, SOUL definitions, and SKILL bindings in plain language. Version-controlled, diffable, testable — like infrastructure-as-code for judgment.
When a task arrives, FabricZero assembles an agent Just-In-Time. The constitutional stack is injected at highest prompt priority — VALUES first, then HEURISTICS, then SOUL, then SKILL. The agent can't override its own constitution.
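The priority ordering can be sketched in a few lines; the class, field names, and layer contents below are illustrative assumptions, not FabricZero's actual API:

```python
# Illustrative sketch only - names and structure are assumptions,
# not FabricZero's actual API.
from dataclasses import dataclass

# Constitutional layers in descending prompt priority.
PRIORITY_ORDER = ["VALUES", "HEURISTICS", "SOUL", "SKILL"]

@dataclass
class ConstitutionalStack:
    layers: dict  # layer name -> plain-language rules

    def assemble_system_prompt(self) -> str:
        """Concatenate layers highest-priority first, so nothing the
        agent generates can precede (or override) its constitution."""
        sections = [f"## {name}\n{self.layers[name]}"
                    for name in PRIORITY_ORDER if name in self.layers]
        return "\n\n".join(sections)

stack = ConstitutionalStack(layers={
    "VALUES": "When uncertain, escalate - never guess.",
    "HEURISTICS": "If transaction > £50k and new counterparty, require sign-off.",
    "SOUL": "Cautious, compliance-first persona.",
    "SKILL": "Payments processing toolset.",
})
prompt = stack.assemble_system_prompt()
```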
Every action the agent wants to take is classified by risk. Routine lookups go through fast path. Financial decisions trigger the dialectic. Anything touching compliance boundaries gets full governance. The agent never chooses its own oversight level.
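The routing logic can be sketched as a simple classifier; the tier names follow the description above, while the action fields are assumptions:

```python
# Hypothetical sketch: the three tiers come from the description above;
# the field names and criteria are illustrative assumptions.
def classify_governance_tier(action: dict) -> str:
    """Route an action to a governance tier. The platform classifies
    the action - the agent never chooses its own oversight level."""
    if action.get("touches_compliance_boundary"):
        return "full_governance"   # full constitutional treatment
    if action.get("category") == "financial":
        return "dialectic"         # Planner-Critic-Adjudicator debate
    return "fast_path"             # routine lookups

assert classify_governance_tier({"category": "lookup"}) == "fast_path"
```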
For non-trivial decisions, three agents debate. The Planner proposes the optimal action. The Critic attacks it using your full failure history. The Adjudicator weighs both sides against your VALUES constitution and produces a confidence score.
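A minimal sketch of the debate loop, with a toy scoring rule (the roles come from the text; the actual Adjudicator's scoring is not described here and is invented below):

```python
# Toy sketch: roles come from the text, the scoring rule is invented.
def run_dialectic(proposal: dict, failure_history: list) -> dict:
    """The Planner's proposal is attacked by the Critic using the
    recorded failure history; the Adjudicator turns surviving risk
    into a confidence score."""
    # Critic: surface every past failure that shares a risk tag.
    critiques = [f for f in failure_history
                 if f["risk_tag"] in proposal["risk_tags"]]
    # Adjudicator: each matched failure lowers confidence (toy rule).
    confidence = max(0.0, 1.0 - 0.2 * len(critiques))
    return {"critiques": critiques, "confidence": confidence}
```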
The Governance Gateway is the final checkpoint. It validates the Adjudicator's confidence score against the action's governance tier, generates a scoped execution token, or escalates to a human. No token, no execution. Every decision has a traceable provenance chain.
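As a sketch of the token mechanics (the confidence floors and the HMAC token format are assumptions, not FabricZero's actual gateway protocol):

```python
# Sketch under assumptions: confidence floors and token format are
# hypothetical, not FabricZero's actual gateway protocol.
import hashlib
import hmac
import json

CONFIDENCE_FLOOR = {"fast_path": 0.5, "dialectic": 0.8, "full_governance": 0.95}

def gateway_decide(tier: str, confidence: float, action_id: str, key: bytes) -> dict:
    """Validate the confidence score against the tier's floor; mint a
    scoped, verifiable token or escalate to a human.
    No token, no execution."""
    if confidence < CONFIDENCE_FLOOR[tier]:
        return {"decision": "escalate_to_human", "token": None}
    scope = json.dumps({"action": action_id, "tier": tier}, sort_keys=True)
    token = hmac.new(key, scope.encode(), hashlib.sha256).hexdigest()
    return {"decision": "execute", "token": token}
```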
If the outcome deviates from expectation, a Post-Mortem Agent generates a candidate heuristic. A Scientist Agent stress-tests it. If validated, it's committed to HEURISTICS and injected into every future agent spawn. Your governance gets permanently smarter.
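The learn-and-commit loop can be sketched as follows; the function and its candidate-drafting format are hypothetical stand-ins:

```python
# Hypothetical sketch of the post-mortem loop described above.
def post_mortem(expected, observed, heuristics: list, stress_test) -> list:
    """On deviation, draft a candidate heuristic, have the Scientist
    Agent stand-in stress-test it, and commit it only if validated."""
    if expected == observed:
        return heuristics                # no deviation, nothing to learn
    candidate = f"Guard: expected {expected!r}, observed {observed!r}"
    if stress_test(candidate):           # Scientist Agent validation stand-in
        return heuristics + [candidate]  # committed to HEURISTICS
    return heuristics
```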
The system classifies every action by risk and routes it through the appropriate governance tier. Most decisions fly through. Only the consequential ones get the full constitutional treatment.
Most AI governance relies on instructions in the system prompt — telling the agent what it can and can't do. But prompts can be jailbroken, ignored, or worked around. A sophisticated attacker (or a determined user) can often convince an AI to ignore its instructions. FabricZero uses three independent enforcement layers, so even if one fails, governance is maintained.
Layer 1: Probabilistic
Constitutional values injected at highest prompt priority. The agent "knows" the rules — VALUES, HEURISTICS, SOUL, and SKILL bindings shape every decision it makes.
"Like giving a new employee the company rulebook on day one."
Layer 2: Deterministic
Gateway evaluates every action before execution. No token, no execution. Cannot be bypassed, convinced, or reasoned around — the rules are enforced regardless of what the agent "decides."
"Like a security checkpoint that verifies every action."
Layer 3: Transparent
SDK wraps existing clients transparently. Works with LangGraph, CrewAI, Anthropic SDK, OpenAI SDK — no code changes to your agents. One import governs an entire workflow.
"Like using standard connectors — no rewiring needed."
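The wrapper pattern behind this can be sketched in a few lines of Python; `govern` and `gateway_check` are hypothetical names, not FabricZero's published SDK surface:

```python
# Illustrative pattern only - not the actual FabricZero SDK.
import functools

def govern(client, gateway_check):
    """Wrap an existing client so every method call passes the
    gateway check first - no code changes inside the client."""
    class Governed:
        def __getattr__(self, name):
            attr = getattr(client, name)
            if not callable(attr):
                return attr
            @functools.wraps(attr)
            def checked(*args, **kwargs):
                if not gateway_check(name, args, kwargs):
                    raise PermissionError(f"no execution token for {name!r}")
                return attr(*args, **kwargs)
            return checked
    return Governed()
```

Any method on the wrapped client behaves identically when the gateway approves, and raises before execution when it does not.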
Remove Layer 1 entirely — the agent has zero awareness of governance rules. It doesn't know a Gateway exists. It doesn't know any tools are restricted. It attempts forbidden actions freely.
What happens? Layer 2 catches and blocks them anyway. This proves defense in depth: governance works even with completely "unaware" agents.
"The agent can say whatever it wants — it can only do what the Gateway allows."
FabricZero draws a hard architectural boundary between what belongs to you and what is observable on our side. Your constitutional knowledge — the judgment that makes your organisation unique — is encrypted at rest and never leaves your control. Every action the platform takes is logged, traceable, and auditable in real time.
Your organisational principles and ethical boundaries. The platform enforces them but cannot read, modify, or extract them.
Your accumulated institutional wisdom. Every failure and lesson encoded as executable rules. This is your competitive advantage — it never leaves your boundary.
Agent personalities, cognitive biases, escalation thresholds. How your digital workforce thinks is defined by you, not by us.
You hold the keys. We hold the runtime. If you leave, your constitutional knowledge leaves with you. No lock-in on what matters most.
Every Planner–Critic–Adjudicator exchange logged verbatim. See exactly why a decision was made, which values were invoked, and what alternatives were considered.
Every approval, rejection, and escalation with the full reasoning chain. The compliance team can reconstruct any decision path months after the fact.
Per-workflow confidence scores, escalation rates, governance overhead. Real-time dashboards showing how your AI workforce is performing against constitutional expectations.
Immutable, tamper-evident record of every action. Every decision links back to the specific constitutional version, heuristic set, and governance tier that produced it.
The architectural guarantee: your values and judgment are sovereign. They're injected into the runtime at execution time, enforced by the platform, but never stored in plaintext, never used to train models, and never accessible to anyone — including us.
A financial services agent receives a request to process a £75,000 transfer to a counterparty it hasn't transacted with before. Here's how the governance lifecycle plays out.
HEURISTIC-47: "If transaction > £50k and new counterparty, require human sign-off." This heuristic was added after the Q3 incident. Post-hoc AML review is insufficient — the heuristic demands pre-execution approval. Additionally, VALUES states "When uncertain, escalate — never guess." A first-time counterparty at this value constitutes uncertainty.
Result: Transfer held for human review. Compliance officer approves in 3 minutes after verifying counterparty through external registry. Transfer executes with full audit trail. No heuristic generated — the system worked as designed.
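For illustration, the heuristic in this scenario can be expressed as a toy executable rule (the signature is an assumption, not FabricZero's actual rule format):

```python
# Toy rendering of HEURISTIC-47 from the scenario above (sketch only).
def heuristic_47(amount_gbp: float, known_counterparty: bool) -> bool:
    """True means: hold the transfer for human sign-off."""
    return amount_gbp > 50_000 and not known_counterparty

assert heuristic_47(75_000, known_counterparty=False)  # the £75k case is held
```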
An agent processing payments encounters a transaction pattern that matches historical fraud signatures. The dialectic catches what a simple rule engine would miss — the pattern is suspicious only in the context of this specific counterparty's usual behaviour.
A clinical decision-support agent needs lab results from a different department. VALUES states "patient consent is granular, not blanket." The Gateway checks the specific consent scope before issuing a data-access token — preventing over-reach that a role-based system would allow.
A contract-drafting agent finds two applicable precedents that contradict each other. Instead of choosing arbitrarily, the Adjudicator identifies the conflict, flags the specific clauses, and escalates with a structured brief — saving the lawyer from discovering the conflict in review.
An accounts payable agent processes 200 invoices. 192 go through the fast path — they're within delegation bounds and match PO values. Eight trigger the dialectic because they exceed the agent's spending authority. Two are escalated because they're from new vendors. Zero errors.
How workflows earn autonomy over time — and how the system automatically contracts trust when things go wrong.
Trust Ladder Explainer →