AI Governance Week (updated 2026-03-26)

From Zero to Governed Agent in 60 Seconds

Drop AI governance into your existing agent stack — LangChain, CrewAI, AutoGen, or any framework — with four lines of code.


AI Governance Week — Day 5

You already have agents running. You don't want to rewrite them. You want to add governance — trust scoring, enforcement, and tamper-evident audit trails — without changing your architecture.

Here's how. Four lines. Any framework.


Install

```bash
npm install @vorionsys/atsf-core
```

That's it. One package. Zero external dependencies for the core trust engine.


The 4-Line Quickstart

```typescript
import { createTrustEngine, createCallback } from '@vorionsys/atsf-core';

const engine = createTrustEngine();
const agent = await engine.initializeEntity('my-agent', 1);
const govern = createCallback(engine, 'my-agent');
const result = await govern('send_email', { recipient: 'user@example.com' });
```

Line by line:

  1. createTrustEngine() — Initializes the trust scoring engine with sensible defaults (0.3 failure threshold, 0.7 success threshold, 2% recovery rate)
  2. initializeEntity('my-agent', 1) — Registers your agent at Tier 1 (Observed) with a starting trust score of 200
  3. createCallback(engine, 'my-agent') — Creates a governance callback function bound to your agent
  4. govern('send_email', ...) — Evaluates the action against trust policies. Returns ALLOW, DENY, ESCALATE, or DEGRADE

The govern callback is the integration point. Wrap any action your agent takes with this call, and you get trust-based enforcement plus a proof record — automatically.
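The decision object carries at least an `action` and a `reasoning` field, as used throughout the examples in this article. A minimal sketch of routing on it, using a local stand-in type rather than the package's own exports:

```typescript
// Local stand-in for the decision shape the govern callback returns;
// field names are taken from the examples in this article, not from
// the package's type definitions.
type GovernanceAction = 'ALLOW' | 'DENY' | 'ESCALATE' | 'DEGRADE';

interface Decision {
  action: GovernanceAction;
  reasoning: string;
}

// Map a decision onto what the caller should do next.
function summarize(decision: Decision): string {
  switch (decision.action) {
    case 'ALLOW':
      return 'proceed';
    case 'DEGRADE':
      return `proceed with reduced scope: ${decision.reasoning}`;
    case 'ESCALATE':
      return `hold for human approval: ${decision.reasoning}`;
    case 'DENY':
      return `blocked: ${decision.reasoning}`;
  }
}

console.log(summarize({ action: 'DENY', reasoning: 'trust below threshold' }));
// → blocked: trust below threshold
```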


Drop Into LangChain

LangChain agents use tools. Wrap each tool's execution with the governance callback:

```typescript
import { createTrustEngine, createCallback } from '@vorionsys/atsf-core';
import { DynamicTool } from '@langchain/core/tools';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';

// Initialize governance
const engine = createTrustEngine();
await engine.initializeEntity('langchain-agent', 2);
const govern = createCallback(engine, 'langchain-agent');

// Wrap tools with governance
const governedSearchTool = new DynamicTool({
  name: 'web_search',
  description: 'Search the web for information',
  func: async (query: string) => {
    const decision = await govern('web_search', { query });
    if (decision.action !== 'ALLOW') {
      return `Action blocked: ${decision.reasoning}`;
    }
    // Execute the actual search
    return await actualSearchFunction(query);
  },
});

const governedEmailTool = new DynamicTool({
  name: 'send_email',
  description: 'Send an email to a recipient',
  func: async (input: string) => {
    const { recipient, body } = JSON.parse(input);
    const decision = await govern('send_email', { recipient });
    if (decision.action !== 'ALLOW') {
      return `Action blocked: ${decision.reasoning}`;
    }
    return await actualSendEmail(recipient, body);
  },
});

// Use governed tools in your agent — no other changes needed
const agent = await createOpenAIFunctionsAgent({
  llm: new ChatOpenAI({ modelName: 'gpt-4' }),
  tools: [governedSearchTool, governedEmailTool],
  prompt: yourPromptTemplate,
});

const executor = new AgentExecutor({ agent, tools: [governedSearchTool, governedEmailTool] });
const result = await executor.invoke({ input: 'Find and email the Q1 report' });
```

Every tool invocation is now trust-gated and proof-recorded. If the agent's trust score drops below the threshold for send_email, the action is blocked automatically.
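If you govern many tools, the per-tool boilerplate above can be factored into one higher-order wrapper. A self-contained sketch, with `govern` stubbed here so the snippet runs standalone (in practice it would be the `createCallback(...)` result):

```typescript
// Decision shape as used in this article; a stand-in, not the package export.
type Decision = { action: 'ALLOW' | 'DENY' | 'ESCALATE' | 'DEGRADE'; reasoning: string };
type ToolFn = (input: string) => Promise<string>;

// Stubbed governance callback so the sketch is self-contained.
const govern = async (_action: string, _context: object): Promise<Decision> => ({
  action: 'ALLOW',
  reasoning: 'stub',
});

// Wrap any tool function with a governance check before execution.
function withGovernance(action: string, fn: ToolFn): ToolFn {
  return async (input: string) => {
    const decision = await govern(action, { input });
    if (decision.action !== 'ALLOW') {
      return `Action blocked: ${decision.reasoning}`;
    }
    return fn(input);
  };
}

// Usage: pass the wrapped function as the tool's func.
const governedSearch = withGovernance('web_search', async (q) => `results for ${q}`);
```

Each tool then needs only its action name and its underlying function; the governance gate is written once.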


Drop Into CrewAI

CrewAI uses tasks and agents. Add governance at the task execution layer:

```typescript
import { createTrustEngine, createCallback } from '@vorionsys/atsf-core';

// Initialize governance for each crew member
const engine = createTrustEngine();
await engine.initializeEntity('researcher', 3);
await engine.initializeEntity('writer', 2);

const governResearcher = createCallback(engine, 'researcher');
const governWriter = createCallback(engine, 'writer');

// Governance-aware task wrapper
async function governedTask(
  agentCallback: ReturnType<typeof createCallback>,
  action: string,
  context: Record<string, unknown>,
  taskFn: () => Promise<string>
): Promise<string> {
  const decision = await agentCallback(action, context);
  if (decision.action === 'ALLOW') {
    const result = await taskFn();
    // Record success signal to build trust
    await engine.recordSignal({
      id: crypto.randomUUID(),
      entityId: decision.entityId,
      type: 'behavioral.task_completed',
      value: 0.85,
      source: 'system',
      timestamp: new Date().toISOString(),
      metadata: { action },
    });
    return result;
  }
  if (decision.action === 'ESCALATE') {
    return `[ESCALATED] ${action} requires human approval: ${decision.reasoning}`;
  }
  return `[BLOCKED] ${action}: ${decision.reasoning}`;
}

// Use in your crew workflow
const researchResult = await governedTask(
  governResearcher,
  'web_scrape',
  { url: 'https://example.com/data' },
  () => scrapeWebsite('https://example.com/data')
);

const writeResult = await governedTask(
  governWriter,
  'generate_report',
  { topic: 'Q1 Analysis', sources: [researchResult] },
  () => generateReport(researchResult)
);
```

Each crew member has an independent trust profile. The researcher at T3 can access external APIs; the writer at T2 is constrained to internal operations. Trust is earned independently through successful task completion.
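The `recordSignal` call above records successes; failures can be recorded the same way with a low value. A sketch of building the counterpart signal — note that the `behavioral.task_failed` type name and the `0.1` value are assumptions mirroring the success signal above, not documented constants:

```typescript
import { randomUUID } from 'crypto';

// Signal shape mirroring the object passed to engine.recordSignal()
// in the CrewAI example; a local stand-in, not the package's type.
interface TrustSignal {
  id: string;
  entityId: string;
  type: string;
  value: number;
  source: string;
  timestamp: string;
  metadata: Record<string, unknown>;
}

// Build a failure signal. The type name and value are assumptions,
// not documented constants from @vorionsys/atsf-core.
function buildFailureSignal(entityId: string, action: string): TrustSignal {
  return {
    id: randomUUID(),
    entityId,
    type: 'behavioral.task_failed',
    value: 0.1, // low value: drives the trust score down
    source: 'system',
    timestamp: new Date().toISOString(),
    metadata: { action },
  };
}
```

Recording this after a failed task lets the recovery mechanics take over: the agent rebuilds trust gradually through subsequent successes.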


Drop Into Any Agent Framework

The pattern is framework-agnostic. Here's the universal integration:

```typescript
import { createTrustEngine, createCallback } from '@vorionsys/atsf-core';

const engine = createTrustEngine();
await engine.initializeEntity('your-agent', 1);
const govern = createCallback(engine, 'your-agent');

// Before any action your agent takes:
async function governedAction(action: string, context: object, executeFn: () => Promise<any>) {
  const decision = await govern(action, context);

  switch (decision.action) {
    case 'ALLOW':
      return await executeFn();
    case 'DEGRADE':
      // Partial access — check decision.grantedCapabilities
      return await executeFn(); // with reduced scope
    case 'ESCALATE':
      // Queue for human review
      throw new Error(`Human approval required: ${decision.reasoning}`);
    case 'DENY':
      throw new Error(`Action blocked: ${decision.reasoning}`);
  }
}

// Works with AutoGen, Semantic Kernel, custom frameworks — anything
const data = await governedAction(
  'read_database',
  { table: 'customers', query: 'SELECT * FROM orders' },
  () => db.query('SELECT * FROM orders')
);
```
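One way to keep the integration in a single place is a governed tool registry: the framework calls tools by name, and every lookup goes through the same gate. A self-contained sketch, again with `govern` stubbed (in practice, the `createCallback(...)` result):

```typescript
// Decision shape as used in this article; a stand-in, not the package export.
type Decision = { action: 'ALLOW' | 'DENY' | 'ESCALATE' | 'DEGRADE'; reasoning: string };
type Tool = (input: Record<string, unknown>) => Promise<string>;

// Stubbed governance callback so the sketch runs standalone.
const govern = async (_action: string, _ctx: object): Promise<Decision> => ({
  action: 'ALLOW',
  reasoning: 'stub',
});

const tools = new Map<string, Tool>();

// Every registered tool is wrapped once; callers never bypass governance.
function registerGoverned(name: string, fn: Tool): void {
  tools.set(name, async (input) => {
    const decision = await govern(name, input);
    if (decision.action !== 'ALLOW') {
      return `[${decision.action}] ${decision.reasoning}`;
    }
    return fn(input);
  });
}

registerGoverned('echo', async (input) => String(input.message));
```

Whatever the framework, if its tool dispatch resolves through this registry, the governance check cannot be skipped by a new tool that forgot its wrapper.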

Opt-In: ParameSphere SVD Fingerprinting

For teams that need model-level governance, ParameSphere provides SVD-based fingerprinting of AI model weights. This detects unauthorized model modifications and ensures the model running in production is the model you approved.

```typescript
import { createTrustEngine } from '@vorionsys/atsf-core';

const engine = createTrustEngine({
  parameSphere: {
    enabled: true,
    fingerprintInterval: '1h',    // Re-fingerprint every hour
    alertOnDrift: true,           // Alert if model weights change
    svdRank: 64,                  // SVD decomposition rank
  },
});
```

ParameSphere is opt-in and adds no overhead to the governance hot path. Fingerprinting runs asynchronously on a configurable interval.


What You Get

With four lines of code (plus framework-specific wiring), your agents now have:

| Capability | Description |
|------------|-------------|
| Trust scoring | 0–1000 score across 5 dimensions, 16 behavioral factors |
| 8-tier enforcement | T0 Sandbox through T7 Autonomous, with graduated escalation |
| Proof chain | SHA-256 hash-linked, Ed25519 signed audit trail |
| Time decay | Idle agents lose trust automatically (182-day half-life) |
| Recovery mechanics | Graduated trust restoration after incidents |
| Async events | Real-time tier change, failure, and recovery notifications |

No vendor lock-in. No cloud dependency. The core trust engine runs entirely locally. Add the Cognigate cloud service when you need cross-agent governance and centralized policy management.


Try the Full Lifecycle

See the complete flow — from agent registration through trust scoring, enforcement, and proof verification — in an interactive demo.

Launch the Full Lifecycle Demo →


Next Steps