What is ArmorIQ?
Understanding ArmorIQ's architecture and core concepts
ArmorIQ is a security platform for AI agents that enables cryptographically verified action execution across multiple services. Think of it as a zero-trust security layer specifically designed for LLM-powered agents.
The Problem We Solve
Traditional AI agents face critical security challenges:
- Prompt Injection Attacks: Malicious prompts can trick agents into executing unauthorized actions
- Agent Drift: Agents can deviate from intended behavior during execution
- Lack of Auditability: No clear trail of what the agent planned vs. what it executed
- Unauthorized Escalation: Compromised agents can access services beyond their scope
The ArmorIQ Solution
ArmorIQ bridges two worlds:
- AI Agents that use LLMs to reason and plan dynamically
- Zero-Trust Security that cryptographically verifies every action
Traditional Approach
```python
# Direct calls - no verification
api.call("service1", "action1")
api.call("service2", "action2")
api.call("service3", "action3")  # Could be malicious!
```

ArmorIQ Approach
```python
# Step 1: Agent captures intent (LLM generates plan)
captured_plan = client.capture_plan(
    llm="gpt-4",
    prompt="Fetch sales data and analyze Q4 performance"
)
# LLM decides: data-mcp/fetch_sales → analytics-mcp/analyze

# Step 2: Get cryptographic proof for the LLM-generated plan
token = client.get_intent_token(captured_plan)

# Step 3: Only declared actions can execute
client.invoke(
    mcp="data-mcp",
    action="fetch_sales",
    intent_token=token,
    params={...}
)  # ✓ Verified (in plan)

client.invoke(
    mcp="analytics-mcp",
    action="analyze",
    intent_token=token,
    params={...}
)  # ✓ Verified (in plan)

client.invoke(
    mcp="data-mcp",
    action="delete_all",
    intent_token=token,
    params={...}
)  # ✗ Fails - LLM didn't plan this!
```

Key Insights
Even though the LLM generated the plan dynamically, every action is cryptographically verified. This prevents:
- Prompt injection attacks: Malicious prompts can't execute unplanned actions
- Agent drift: The agent can't deviate from its captured intent
- Unauthorized escalation: Even if compromised, the agent is bound to the plan
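The binding behind these guarantees can be sketched in a few lines: the intent token carries a digest of the declared plan, and every invocation is checked for membership before it runs. The names below (`plan_digest`, `verify_action`) are illustrative assumptions, not ArmorIQ's actual API:

```python
import hashlib
import json

def plan_digest(plan: list) -> str:
    """Canonical hash of the declared plan; any change to the plan changes the digest."""
    return hashlib.sha256(json.dumps(plan, sort_keys=True).encode()).hexdigest()

def verify_action(plan: list, token_digest: str, mcp: str, action: str) -> bool:
    """Reject if the token doesn't match the plan, or the action was never declared."""
    if plan_digest(plan) != token_digest:
        return False  # plan was modified after signing
    return any(step["mcp"] == mcp and step["action"] == action for step in plan)

plan = [
    {"mcp": "data-mcp", "action": "fetch_sales"},
    {"mcp": "analytics-mcp", "action": "analyze"},
]
digest = plan_digest(plan)

assert verify_action(plan, digest, "data-mcp", "fetch_sales")     # declared in plan
assert not verify_action(plan, digest, "data-mcp", "delete_all")  # never planned
```

Because the digest covers the whole plan, an injected instruction can neither add a new action nor silently rewrite an existing step without invalidating the token.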
Core Principles
1. Intent-Based Execution
Instead of directly calling services, you declare your intent (what you want to do) upfront. This intent becomes a cryptographically verified contract.
2. Zero Trust Security
ArmorIQ follows zero trust principles:
- Every action is verified cryptographically
- Tokens are time-limited and non-reusable
- Plans are immutable once signed
- All requests are authenticated
- Complete audit trail maintained
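The token properties above (signed, time-limited, non-reusable) can be sketched with a standard HMAC construction. Everything here is a hedged illustration, not ArmorIQ's token service: the secret, the `issue_token`/`redeem_token` names, and the in-memory replay set are all assumptions, and redemption is modeled per-token for brevity:

```python
import hashlib
import hmac
import json
import secrets
import time

SECRET = b"demo-secret"      # illustrative only; real systems use managed keys
_redeemed = set()            # jti values already used (replay protection)

def issue_token(plan_digest: str, ttl: int = 300) -> dict:
    """Sign a token binding a plan digest to an expiry and a one-time ID."""
    body = {"digest": plan_digest, "exp": time.time() + ttl, "jti": secrets.token_hex(8)}
    sig = hmac.new(SECRET, json.dumps(body, sort_keys=True).encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def redeem_token(token: dict) -> bool:
    """Verify signature, expiry, and single-use before allowing execution."""
    expected = hmac.new(SECRET, json.dumps(token["body"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered token or plan
    if time.time() > token["body"]["exp"]:
        return False  # time-limited: expired
    if token["body"]["jti"] in _redeemed:
        return False  # non-reusable: already redeemed
    _redeemed.add(token["body"]["jti"])
    return True

tok = issue_token("abc123")
assert redeem_token(tok)      # first use succeeds
assert not redeem_token(tok)  # replay is rejected
```

Note how each zero-trust bullet maps to one check: the HMAC gives immutability, `exp` gives the time limit, and the `jti` set gives non-reusability.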
3. LLM-Generated Plans
Plans are declarative and LLM-generated, not manually coded:
```python
# ✓ Agent captures intent from natural language
captured_plan = client.capture_plan(
    llm="gpt-4",
    prompt="Fetch user data and calculate credit score"
)
# LLM generates declarative plan:
# [
#     {"action": "fetch_data", "mcp": "data-mcp"},
#     {"action": "calculate_score", "mcp": "analytics-mcp"}
# ]
```

Why This Matters:
- LLM Autonomy: Agent decides the best approach based on prompt
- Cryptographic Binding: Even dynamic plans are immutably verified
- Declarative Security: You secure what the agent wants, not how it does it
- No Implementation Details: MCPs handle the how, plans declare the what
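A plan that declares only the "what" is easy to check before signing: every step should carry exactly the declarative fields and nothing else. The field names follow the example plan above; the validator itself is a minimal sketch, not ArmorIQ's implementation:

```python
REQUIRED_KEYS = {"action", "mcp"}  # declarative fields only: what and where, no how

def validate_plan(plan) -> bool:
    """Accept only a non-empty list of steps declaring what to do and on which MCP."""
    if not isinstance(plan, list) or not plan:
        return False
    return all(isinstance(step, dict) and set(step) == REQUIRED_KEYS for step in plan)

good_plan = [
    {"action": "fetch_data", "mcp": "data-mcp"},
    {"action": "calculate_score", "mcp": "analytics-mcp"},
]
assert validate_plan(good_plan)

# A step smuggling implementation details (the "how") is rejected
assert not validate_plan([{"action": "fetch_data", "mcp": "data-mcp", "shell": "rm -rf /"}])
```

Keeping plans purely declarative is what lets the security layer reason about intent without having to trust, or even inspect, the MCP's implementation.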
Next Steps
- Architecture Overview - Understand the system components
- Intent Plans - Learn about plan structure and lifecycle
- Security Model - Deep dive into security mechanisms
- Token Lifecycle - How tokens work