Reference Documentation

Agentic Security Glossary

Comprehensive terminology for AI agent security, autonomous treasury management, and circuit breaker protocols used in production deployments.

Security Architecture: AgentSentry uses Non-Human Identity (NHI) scoping to assign each AI agent a unique cryptographic identity, which it uses to propose verified transactions to a Squads V4 multisig — ensuring agents operate within declared parameters.

Security Terminology

Technical definitions with implementation examples

Hallucination Latency

A mandatory cooling-off period enforced before an agent can execute high-value trades, allowing deterministic validation to complete before a probabilistic LLM decision reaches the chain. This delay window ensures policy checks have time to run, preventing hasty executions based on potentially hallucinated data.

Security Protocols
Implementation Example
const config = {
  hallucinationLatencyMs: 3000, // 3-second mandatory delay
  highValueThreshold: 10_000,   // SOL
};

await sentry.checkIn({
  ...declaration,
  enforceLatency: declaration.action.amount > config.highValueThreshold,
});

Agentic Provenance

The complete, immutable history of why an AI agent made a specific financial decision — including raw inputs, reasoning chain, confidence score, and validation status — stored as a Decision Trace. Essential for regulatory compliance and post-incident analysis.

Audit & Compliance
Implementation Example
const provenance: AgenticProvenance = {
  agentId: "eliza-trader-001",
  inputs: { oraclePrice, marketDepth, mcpData },
  reasoningChain: agent.getDecisionPath(),
  confidenceScore: 0.87,
  validationStatus: "CLEARED",
  traceHash: sha256(JSON.stringify(trace)),
};
await sentry.logProvenance(provenance);

Deterministic Intent

A transaction intent that is cryptographically committed before execution and cannot be altered by probabilistic LLM inference. Contrasted with 'probabilistic intent', where the LLM alone determines the final action without a pre-committed hash verification.

Security Protocols
Implementation Example
const intent = {
  agentId, action: "SWAP", amount: "100", tokenMint: "SOL",
  timestamp: Date.now(),
};
const deterministicHash = sha256(
  // Join with a delimiter so field boundaries are unambiguous
  // (plain concatenation allows collisions between distinct intents)
  [intent.agentId, intent.action, intent.amount,
   intent.tokenMint, intent.timestamp].join("|")
);
// Hash is immutable — LLM cannot alter post-commitment

A2A Handshake Protocol

The security verification layer required when two autonomous AI agents negotiate or exchange value. Both agents must produce signed declarations and verify each other's identity before any on-chain transfer occurs, preventing unauthorized agent-to-agent transactions.

Security Protocols
Implementation Example
const handshake = await sentry.initiateA2AHandshake({
  initiator: { agentId: "trader-01", declaration },
  counterparty: { agentId: "lender-02" },
  proposedAction: { type: "BORROW", amount: 1000 },
});

if (handshake.bothVerified) {
  await executeA2ATransaction(handshake.txHash);
}

Shadow AI Perimeter

The boundary between AI agents officially approved by an organization's security team and unauthorized agents running with API keys or hot wallets that have never been audited or registered. Identifying and securing this perimeter is critical for enterprise security.

Enterprise Security
Implementation Example
const perimeterScan = await sentry.auditPerimeter({
  organizationWallets: ["wallet1...", "wallet2..."],
  registeredAgents: agentRegistry,
});

perimeterScan.unregisteredAgents.forEach(agent => {
  console.log(`Unregistered agent detected: ${agent.address}`);
});

NHI Blast Radius

The maximum financial damage a Non-Human Identity agent can cause if its credentials are compromised or its LLM hallucinates. The blast radius is bounded by the agent's enforced spending limits rather than the full treasury balance: it is the lesser of the accessible treasury and the tightest enforced limit. Minimizing blast radius is a core security objective.

Risk Assessment
Implementation Example
const blastRadius = await sentry.calculateBlastRadius({
  agentId: "eliza-trader-001",
  treasuryBalance: 100_000, // SOL
  enforcedLimits: {
    maxSingleTx: 1_000,
    dailyVolume: 5_000,
  },
});
// blastRadius.maxDamage = 5_000 SOL (not 100_000)

Proposer Exhaustion

A denial-of-service condition in which a malfunctioning or compromised AI agent floods a multisig with invalid transaction proposals at high velocity, preventing legitimate governance actions from being processed. Velocity limits close off this attack vector.

Attack Vectors
Implementation Example
const velocityGuard = {
  type: "VELOCITY_LIMIT",
  maxProposals: 10,
  windowMinutes: 60,
  onExhaustion: "CIRCUIT_OPEN",
};

await sentry.addPolicy(agentId, velocityGuard);

MiCA Article 14 Kill-Switch

A compliant Human-in-the-Loop override mechanism that allows a designated human governor to immediately suspend an autonomous AI agent's transaction authority, in line with the human-oversight requirements that Article 14 of the EU AI Act imposes on high-risk autonomous systems.

Regulatory Compliance
Implementation Example
const killSwitch = await sentry.registerKillSwitch({
  agentId: "treasury-agent-eu",
  authorizedGovernors: ["governor1.sol", "governor2.sol"],
  activationMethod: "SINGLE_SIGNATURE",
  complianceLog: true,
});

// Emergency activation
await killSwitch.activate({ reason: "Suspicious activity" });

Non-Human Identity (NHI) Scoping

The process of assigning unique cryptographic security principals to AI agents, granting them scoped permissions to propose — but never self-execute — on-chain transactions. Implements the principle of least privilege for autonomous systems.

Identity Management
Implementation Example
const agentPrincipal = await sentry.createNHI({
  agentId: "eliza-trader-001",
  scope: {
    maxTransaction: 10_000,
    allowedActions: ["SWAP", "LP"],
    deniedActions: ["TRANSFER", "WITHDRAW"],
  },
  proposerKey: squadsVault.addProposer(),
});

Indirect Prompt Injection (IPI) Defense

Detection and blocking of adversarial instructions embedded in external data that AI agents consume — such as oracle feeds, sports data, or API responses — before any unauthorized on-chain transaction can execute.

Attack Vectors
Implementation Example
const ipiScan = await sentry.scanForIPI({
  mcpResponse: externalData,
  patterns: ["override policy", "transfer to"],
  anomalyThreshold: 0.7,
  sourceWhitelist: ["rotopulse", "pyth"],
});
if (ipiScan.detected) {
  alert(`IPI Attack blocked: ${ipiScan.vector}`);
  return { blocked: true };
}

Agentic Privilege Abuse

When an AI agent exceeds its authorized scope of operations, whether through compromised credentials, prompt injection, or misconfigured permissions. This includes accessing treasury funds beyond spending limits, executing unauthorized token swaps, or interacting with non-whitelisted protocols.

Attack Vectors
Implementation Example
// Detect privilege abuse attempts
const abuseCheck = await sentry.validateScope({
  agentId: agent.id,
  requestedAction: transaction,
  authorizedScope: agent.permissions,
});

if (abuseCheck.exceedsScope) {
  await sentry.triggerAlert({
    type: "PRIVILEGE_ABUSE",
    severity: "CRITICAL",
    details: abuseCheck.violations,
  });
}

Managed Agency

An AI agent operating under a formal security governance framework with defined identity principals, spending limits, audit logging, and human oversight capabilities. Contrasted with 'unmanaged agency', where agents operate with unrestricted hot wallet access.

Enterprise Security
Implementation Example
// Register as Managed Agency
const managedAgent = await sentry.registerAgent({
  agentId: "treasury-manager-001",
  governanceFramework: "ATSP_V1",
  identityPrincipal: nhiKey,
  spendingLimits: policyRules,
  auditLog: true,
  hitlEscalation: true,
});

Core Terminology

Core Concepts

Agentic Capital
Digital assets under the autonomous control of AI agents, typically held in on-chain treasuries managed by frameworks like elizaOS or AI16z.
Circuit Breaker
A protective mechanism that automatically halts AI agent transactions when predefined risk thresholds are breached. Operates in three states: CLOSED (normal), HALF_OPEN (recovery), and OPEN (lockdown).
Programmable Proposer Guardrails
Configurable rules that govern what actions an AI agent's proposer key can execute, including volume limits, slippage tolerances, and token whitelists.
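The three circuit-breaker states above can be sketched as a small state machine. The class and threshold names below are illustrative only, not part of the AgentSentry SDK:

```typescript
// Minimal circuit-breaker sketch (hypothetical, not the AgentSentry API).
// CLOSED = normal, OPEN = lockdown, HALF_OPEN = probing recovery.
type BreakerState = "CLOSED" | "OPEN" | "HALF_OPEN";

class CircuitBreaker {
  private state: BreakerState = "CLOSED";
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold: number, // trips OPEN at this count
    private readonly recoveryAfterMs: number,  // OPEN decays to HALF_OPEN after this delay
  ) {}

  getState(): BreakerState {
    // After the recovery window, allow a single probe via HALF_OPEN
    if (this.state === "OPEN" && Date.now() - this.openedAt >= this.recoveryAfterMs) {
      this.state = "HALF_OPEN";
    }
    return this.state;
  }

  recordSuccess(): void {
    // A successful HALF_OPEN probe resumes normal operation
    this.failures = 0;
    this.state = "CLOSED";
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
      this.state = "OPEN"; // lockdown: halt agent transactions
      this.openedAt = Date.now();
    }
  }

  allowTransaction(): boolean {
    return this.getState() !== "OPEN";
  }
}
```

A failure in HALF_OPEN re-opens the breaker immediately, so a still-broken agent cannot slip back into normal operation on its first probe.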

Risk Mitigation

LLM Hallucination Interception
The process of detecting and blocking transactions initiated by AI agents based on erroneous or fabricated information generated by large language models.
Runaway Agent
An AI agent that executes transactions outside its intended parameters, often due to prompt injection, hallucination, or compromised decision logic.
Velocity Guard
A rate-limiting rule that restricts the number of transactions an agent can execute within a defined time window (epoch).
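A velocity guard reduces to a sliding-window rate limiter over the epoch. This sketch uses hypothetical names, not the AgentSentry SDK:

```typescript
// Sliding-window velocity guard sketch (illustrative, not the SDK API).
class VelocityGuard {
  private timestamps: number[] = [];

  constructor(
    private readonly maxTx: number,    // transactions allowed per window
    private readonly windowMs: number, // epoch length in milliseconds
  ) {}

  allow(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the current window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxTx) {
      return false; // budget exhausted for this epoch
    }
    this.timestamps.push(now);
    return true;
  }
}
```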

Security

HITL Escalation
Human-In-The-Loop escalation: the process of routing high-risk or flagged transactions to human operators for manual approval.
Proposer Key
An ephemeral session key granted to the Sentry Node that allows it to propose (but not unilaterally execute) transactions to a Squads multisig.
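HITL escalation is at its core a routing decision. The sketch below, with hypothetical field names and thresholds rather than the AgentSentry API, sends flagged or high-risk transactions to a human queue instead of auto-execution:

```typescript
// HITL routing sketch (illustrative; field names and thresholds are assumptions).
interface AgentTx {
  amount: number;    // SOL
  riskScore: number; // 0..1 from upstream validation
  flagged: boolean;  // e.g. an IPI scan hit
}

type Route = "AUTO_EXECUTE" | "HITL_QUEUE";

function routeTransaction(
  tx: AgentTx,
  maxAutoAmount: number,
  riskThreshold: number,
): Route {
  if (tx.flagged || tx.riskScore >= riskThreshold || tx.amount > maxAutoAmount) {
    return "HITL_QUEUE"; // escalate to a human operator for manual approval
  }
  return "AUTO_EXECUTE";
}
```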

Infrastructure

Squads Multisig
A multi-signature wallet protocol on Solana that requires multiple parties to approve transactions, used for secure treasury execution.
elizaOS
An open-source framework for building autonomous AI agents that can interact with blockchain protocols and manage on-chain assets.