Architecture: Governance Pipeline
Deep dive into the CAR > INTENT > ENFORCE > PROOF governance pipeline and the two-engine trust architecture.
Governance Pipeline Architecture
The Vorion governance stack processes every AI agent action through a four-stage pipeline: CAR > INTENT > ENFORCE > PROOF. This page explains each stage, the two-engine trust architecture that powers enforcement, and the 8-tier trust model (T0--T7) that determines what agents are allowed to do.
The Four-Stage Pipeline
Agent Request
|
v
+----------+ +-----------+ +-----------+ +-----------+
| CAR | --> | INTENT | --> | ENFORCE | --> | PROOF |
| Identity | | Parsing | | Decision | | Record |
+----------+ +-----------+ +-----------+ +-----------+
| | | |
Agent ID Structured ALLOW/DENY Immutable
+ Trust Intent with ESCALATE/ Hash-chained
Profile Risk Level DEGRADE Audit Entry
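End to end, the four stages compose into a single function. The sketch below stubs each stage with fixed values to show the data flow; the type shapes and function names are illustrative assumptions, not the SDK's API:

```typescript
// Illustrative types for the four pipeline stages (assumed shapes, not the SDK's).
type TrustProfile = { agentId: string; trustScore: number; trustTier: number };
type Intent = { agentId: string; parsedAction: string; riskLevel: 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL' };
type Decision = 'ALLOW' | 'DENY' | 'ESCALATE' | 'DEGRADE';
type ProofRecord = { intent: Intent; decision: Decision; timestamp: string };

// Stage stubs: each stage consumes the previous stage's output.
const car = (agentId: string): TrustProfile =>
  ({ agentId, trustScore: 580, trustTier: 3 }); // stubbed CAR lookup
const parseIntent = (p: TrustProfile, goal: string): Intent =>
  ({ agentId: p.agentId, parsedAction: goal, riskLevel: 'MEDIUM' }); // stubbed parser
const enforce = (p: TrustProfile, i: Intent): Decision =>
  p.trustTier >= 3 ? 'ALLOW' : 'DENY'; // stubbed policy check
const prove = (i: Intent, d: Decision): ProofRecord =>
  ({ intent: i, decision: d, timestamp: new Date().toISOString() });

// CAR > INTENT > ENFORCE > PROOF
function govern(agentId: string, goal: string): ProofRecord {
  const profile = car(agentId);
  const intent = parseIntent(profile, goal);
  const decision = enforce(profile, intent);
  return prove(intent, decision);
}
```

The key property is that every stage's output feeds the next, and every request ends in a proof record regardless of the decision.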
Stage 1: CAR (Categorical Agentic Registry)
The CAR specification provides machine-readable identity for AI agents. Before any governance decision is made, the system resolves who is asking.
- Agent identity -- Unique ID, name, description, version
- Trust profile -- Current trust score, tier, capability set
- Provenance -- Creation date, owner, organizational context
- Configuration -- Registered capabilities, tier overrides, policy bindings
Every agent in the Vorion ecosystem has a CAR record. The CAR is the source of truth for agent identity across all services.
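A CAR record can be modeled roughly as the following shape. The field names are assumptions drawn from the bullets above, not the published schema:

```typescript
// Hypothetical shape of a CAR record (field names are illustrative).
interface CarRecord {
  agentId: string;        // unique ID
  name: string;
  version: string;
  trustScore: number;     // 0-1000 scale
  trustTier: number;      // 0-7, i.e. T0-T7
  capabilities: string[]; // registered capability set
  owner: string;          // organizational context
  createdAt: string;      // provenance
}

const record: CarRecord = {
  agentId: 'agent-001',
  name: 'Example Agent',
  version: '1.0.0',
  trustScore: 580,
  trustTier: 3,
  capabilities: ['read_database'],
  owner: 'example-org',
  createdAt: '2025-01-01T00:00:00Z',
};
```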
import { Cognigate } from '@vorionsys/cognigate';
const client = new Cognigate({ apiKey: process.env.COGNIGATE_API_KEY! });
// CAR lookup: resolve agent identity and trust profile
const status = await client.trust.getStatus('agent-001');
// Returns: trustScore, trustTier, tierName, capabilities, factorScores
Stage 2: INTENT
Intent parsing converts a natural-language or structured action request into a governance-evaluable intent object. The intent captures:
- Parsed action -- What the agent wants to do (e.g., "read_database", "send_email")
- Risk level -- LOW, MEDIUM, HIGH, CRITICAL
- Required capabilities -- What permissions are needed
- Context -- Environmental factors that affect the decision
- Confidence -- How confident the parser is in its interpretation
// Parse intent from natural language
const parsed = await client.governance.parseIntent(
'agent-001',
'Read customer data from the sales database'
);
console.log(parsed.intent.parsedAction); // "read_database"
console.log(parsed.intent.riskLevel); // "MEDIUM"
console.log(parsed.confidence); // 0.95
You can also use the local ATSF intent service for offline evaluation:
import { createIntentService } from '@vorionsys/atsf-core';
const intentService = createIntentService();
const intent = await intentService.submit({
entityId: 'agent-001',
goal: 'Send email to user',
context: { recipient: 'user@example.com' },
});
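For intuition, a toy rule-based classifier shows the shape of the INTENT stage's output. The real parser is NLP-based; the keyword rules and risk assignments below are illustrative assumptions:

```typescript
// A naive, illustrative intent classifier -- the production parser is NLP-based.
type RiskLevel = 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL';

const ACTION_RULES: Array<{ pattern: RegExp; action: string; risk: RiskLevel }> = [
  { pattern: /\bdelete\b/i, action: 'delete_records', risk: 'HIGH' },   // destructive first
  { pattern: /\bread\b/i,   action: 'read_database',  risk: 'MEDIUM' },
  { pattern: /\bemail\b/i,  action: 'send_email',     risk: 'MEDIUM' },
];

function classify(goal: string): { parsedAction: string; riskLevel: RiskLevel } {
  for (const rule of ACTION_RULES) {
    if (rule.pattern.test(goal)) {
      return { parsedAction: rule.action, riskLevel: rule.risk };
    }
  }
  // Fail closed: unrecognized goals get the highest risk level.
  return { parsedAction: 'unknown', riskLevel: 'CRITICAL' };
}
```

Note the fail-closed default: anything the parser cannot classify is treated as maximum risk, which downstream enforcement will deny or escalate.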
Stage 3: ENFORCE
The enforcement engine evaluates the parsed intent against the agent's trust profile, active policies, and contextual constraints. It produces one of four decisions:
| Decision | Meaning |
|---|---|
| ALLOW | Action permitted -- agent has sufficient trust and capabilities |
| DENY | Action blocked -- insufficient trust, missing capabilities, or policy violation |
| ESCALATE | Action requires human approval before proceeding |
| DEGRADE | Partial access granted -- some capabilities allowed, others restricted |
// Enforce governance policies
const result = await client.governance.enforce(parsed.intent);
switch (result.decision) {
case 'ALLOW':
// Proceed with full capabilities
break;
case 'DEGRADE':
// Proceed with restricted capabilities
console.log('Granted:', result.grantedCapabilities);
break;
case 'ESCALATE':
// Route to human reviewer
console.log('Reason:', result.reasoning);
break;
case 'DENY':
// Block the action
console.log('Blocked:', result.reasoning);
break;
}
The local ATSF enforcement service provides the same decision logic for offline or edge deployments:
import { createEnforcementService } from '@vorionsys/atsf-core';
const enforcer = createEnforcementService({
defaultAction: 'deny',
requireMinTrustLevel: 2,
});
const decision = await enforcer.decide(context);
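The core decision logic can be approximated as a tier-versus-risk matrix. The minimum-tier thresholds below are assumptions for illustration (chosen so that a T3 agent attempting a HIGH-risk action escalates, matching the worked example at the end of this page), not the shipped policy:

```typescript
// Illustrative enforcement matrix: minimum trust tier required per risk level.
// The thresholds are assumptions for this sketch, not the published policy.
type Decision = 'ALLOW' | 'DEGRADE' | 'ESCALATE' | 'DENY';

const MIN_TIER: Record<string, number> = { LOW: 1, MEDIUM: 2, HIGH: 4, CRITICAL: 6 };

function decide(tier: number, risk: keyof typeof MIN_TIER): Decision {
  const required = MIN_TIER[risk];
  if (tier >= required) return 'ALLOW';
  if (tier === required - 1) return 'ESCALATE'; // one tier short: human review
  if (tier === required - 2) return 'DEGRADE';  // two tiers short: restricted access
  return 'DENY';
}
```

The graduated fall-through (ALLOW, then ESCALATE, then DEGRADE, then DENY) mirrors the circuit-breaker behavior described under Trust Score Dynamics.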
Stage 4: PROOF
Every governance decision is recorded as an immutable proof record. Proof records form a hash-chained audit trail (similar to a blockchain) where each record references the hash of the previous one.
- Hash chain -- SHA-256 linked records prevent tampering
- Merkle proofs -- Efficient verification of individual records
- Optional blockchain anchoring -- Polygon network for cross-system trust
- Proof Bridge -- Forwards Cognigate decisions to the Vorion Proof Plane
// Record a proof
import { createProofService } from '@vorionsys/atsf-core';
const proofService = createProofService();
const proof = await proofService.create({
intent,
decision,
inputs: {},
outputs: {},
});
// Verify chain integrity
const verification = await proofService.verify(proof.id);
console.log('Valid:', verification.valid);
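The hash-chain mechanics can be demonstrated in a few lines with Node's crypto module. This is a minimal sketch of the chaining idea only, not the ATSF proof format:

```typescript
import { createHash } from 'node:crypto';

// Minimal sketch of a SHA-256 hash-chained audit log (not the ATSF wire format).
interface Proof { index: number; payload: string; prevHash: string; hash: string }

function sha256(s: string): string {
  return createHash('sha256').update(s).digest('hex');
}

// Each new record commits to the previous record's hash.
function append(chain: Proof[], payload: string): Proof {
  const prevHash = chain.length ? chain[chain.length - 1].hash : '0'.repeat(64);
  const index = chain.length;
  const proof: Proof = { index, payload, prevHash, hash: sha256(`${index}|${prevHash}|${payload}`) };
  chain.push(proof);
  return proof;
}

// Recompute every hash; any tampered record breaks the chain from that point on.
function verifyChain(chain: Proof[]): boolean {
  return chain.every((p, i) => {
    const prevHash = i === 0 ? '0'.repeat(64) : chain[i - 1].hash;
    return p.prevHash === prevHash && p.hash === sha256(`${p.index}|${prevHash}|${p.payload}`);
  });
}
```

Because each hash covers the previous hash, altering any historical record invalidates every record after it, which is what makes the audit trail tamper-evident.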
Two-Engine Trust Architecture
Vorion uses two complementary trust engines:
Engine 1: ATSF Trust Engine (Local Runtime)
The @vorionsys/atsf-core trust engine runs locally (or at the edge) and provides:
- Real-time trust scoring on a 0--1000 scale
- 5-dimension evaluation: Capability Trust (CT), Behavioral Trust (BT), Governance Trust (GT), Contextual Trust (XT), Assurance Confidence (AC)
- 16-factor signal processing across behavioral, compliance, identity, and context categories
- Time-based decay with stepped milestones (7/14/28/42/56/84/112/140/182-day intervals)
- Asymmetric trust dynamics -- trust is 10x harder to gain than to lose
- Recovery mechanics -- graduated restoration after incidents
- Event-driven -- emits events for tier changes, failures, and recovery milestones
Engine 2: Cognigate Enforcement Engine (Cloud Service)
The Cognigate enforcement engine runs as a centralized service and provides:
- Cross-agent governance -- consistent policies across distributed deployments
- Intent parsing -- NLP-based action classification
- Policy enforcement -- centralized rule evaluation with real-time decisions
- Proof chain -- immutable audit trail with optional blockchain anchoring
- Webhook events -- real-time notifications for governance events
- Admin dashboard -- visual monitoring of agent trust and governance activity
How They Interact
Local Environment Cognigate Cloud
+------------------+ +------------------+
| ATSF Trust | <---> | Cognigate |
| Engine | sync | Engine |
| | | |
| - Local scoring | | - Central policy |
| - Decay/recovery | | - Intent parsing |
| - Event emission | | - Proof chain |
| - Edge decisions | | - Cross-agent |
+------------------+ +------------------+
The ATSF engine handles fast, local trust calculations. Cognigate provides centralized governance, cross-agent coordination, and the immutable proof chain. Both engines agree on the 8-tier trust model and score ranges.
The 8-Tier Trust Model (T0--T7)
Trust tiers define what an AI agent is allowed to do. Score ranges narrow at higher tiers, reflecting the increasing difficulty of achieving greater autonomy.
| Tier | Score Range | Name | Capabilities |
|------|-------------|-------------|--------------|
| T0 | 0--199 | Sandbox | Isolated testing environment only. No real operations, no external access. |
| T1 | 200--349 | Observed | Read-only access to designated resources. Under active human supervision. |
| T2 | 350--499 | Provisional | Basic write operations with heavy constraints and logging. |
| T3 | 500--649 | Monitored | Standard operations with continuous monitoring. Can execute routine tasks. |
| T4 | 650--799 | Standard | External API access. Policy-governed with periodic reviews. |
| T5 | 800--875 | Trusted | Cross-agent communication. Minimal oversight, audit trail required. |
| T6 | 876--950 | Certified | Administrative tasks. Near-autonomous with compliance checks. |
| T7 | 951--1000 | Autonomous | Full autonomy. Self-governance with post-hoc audit. |
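The tier boundaries translate directly into a lookup function. The boundaries below are copied from the table; only the function itself is illustrative:

```typescript
// Tier boundaries taken from the T0-T7 table above.
const TIERS = [
  { tier: 'T0', name: 'Sandbox',     min: 0,   max: 199 },
  { tier: 'T1', name: 'Observed',    min: 200, max: 349 },
  { tier: 'T2', name: 'Provisional', min: 350, max: 499 },
  { tier: 'T3', name: 'Monitored',   min: 500, max: 649 },
  { tier: 'T4', name: 'Standard',    min: 650, max: 799 },
  { tier: 'T5', name: 'Trusted',     min: 800, max: 875 },
  { tier: 'T6', name: 'Certified',   min: 876, max: 950 },
  { tier: 'T7', name: 'Autonomous',  min: 951, max: 1000 },
];

function tierFor(score: number): { tier: string; name: string } {
  const band = TIERS.find((b) => score >= b.min && score <= b.max);
  if (!band) throw new RangeError(`score out of range: ${score}`);
  return { tier: band.tier, name: band.name };
}
```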
Trust Score Dynamics
- Earning trust: Agents start at a configured tier and earn trust through successful task completion. Each success signal above the successThreshold (default 0.7) increases the score.
- Losing trust: Failed signals (below the failureThreshold, default 0.3) cause accelerated decay at 3x the normal rate. Trust is asymmetric -- a 10:1 loss-to-gain ratio.
- Time decay: Idle agents lose trust over time. Stepped milestones at days 7, 14, 28, 42, 56, 84, 112, 140, and 182 each apply a 5--6% reduction. The 182-day half-life means an idle agent's score falls to roughly 50% of its pre-decay value.
- Recovery: After an incident, agents can rebuild trust through consistent positive signals. Recovery progress emits trust:recovery_milestone events.
- Circuit breakers: Graduated response prevents catastrophic failures -- the system degrades gracefully through DEGRADE and ESCALATE before reaching DENY.
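The asymmetric update and stepped decay can be sketched as follows. The base gain step and the per-milestone reduction factor are illustrative assumptions; only the thresholds, the 10:1 ratio, and the milestone days come from the description above:

```typescript
// Sketch of asymmetric trust dynamics. GAIN_STEP and the 5.5% per-milestone
// reduction are assumed values for illustration.
const SUCCESS_THRESHOLD = 0.7;
const FAILURE_THRESHOLD = 0.3;
const GAIN_STEP = 1;    // assumed base gain per success signal
const LOSS_RATIO = 10;  // 10:1 loss-to-gain asymmetry

function applySignal(score: number, signal: number): number {
  let next = score;
  if (signal >= SUCCESS_THRESHOLD) next += GAIN_STEP;
  else if (signal <= FAILURE_THRESHOLD) next -= GAIN_STEP * LOSS_RATIO;
  // Failed signals also trigger 3x-accelerated time decay (not modeled here).
  return Math.max(0, Math.min(1000, next)); // clamp to the 0-1000 scale
}

// Stepped time decay: each milestone day passed applies an assumed 5.5% cut.
const MILESTONES = [7, 14, 28, 42, 56, 84, 112, 140, 182];

function decayed(score: number, idleDays: number): number {
  const steps = MILESTONES.filter((d) => idleDays >= d).length;
  return score * Math.pow(0.945, steps);
}
```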
Putting It All Together
A complete governance flow:
- Agent submits request -- "Delete old customer records from the archive database"
- CAR resolves identity -- Agent is data-cleanup-bot, currently T3 (Monitored), score 580
- INTENT parses action -- Action: delete_records, Risk: HIGH, Required: write_database, delete_data
- ENFORCE evaluates -- T3 agents cannot perform HIGH-risk delete operations (requires T4+). Decision: ESCALATE
- PROOF records -- Immutable proof record captures the intent, decision, reasoning, and timestamp
- Human reviews -- Escalation routed to designated approver
- Trust updated -- If approved and executed successfully, the agent earns trust toward T4 promotion
Next Steps
- SDK Quickstart -- Install the trust engine and Cognigate client
- API Reference: Cognigate Endpoints -- Full REST API documentation
- Compliance: NIST & EU AI Act -- Map to regulatory frameworks