
u/After_Somewhere_2254


Open sourced a security runtime for AI agent tool calls — 8 layers, Rust, sub-ms
If you’re building agents with tool use, function calling, or MCP integrations, this might be relevant. Agent Armor sits between your agent and any external action, running every call through 8 security layers before execution. Prompt injection detection, protocol DPI, taint tracking, policy verification. Written in Rust, Docker ready, Python and TypeScript SDKs. Would love to hear what security issues others have hit when deploying agents with tool access. github.com/EdoardoBambini/Agent-Armor-Iaga

Agent Armor: open source zero trust runtime for AI agents — protocol DPI, taint tracking, policy verification (Rust
Sharing a project focused on runtime security for autonomous AI agents. The core idea is treating every agent action as untrusted and running it through an 8-layer deterministic pipeline before execution. Layers include deep packet inspection for MCP/ACP protocols, prompt injection firewalls, data taint propagation, NHI registry checks, and formal policy verification. Written in Rust. Benchmarked against 16 attack categories. Full methodology in the repo. Interested in hearing from anyone who’s looked at AI agent attack surfaces from a network security perspective. github.com/EdoardoBambini/Agent-Armor-Iaga
OPEN SOURCE PROJECT
We just open sourced our zero trust security runtime for AI agents. 8 security layers, sub-ms latency, Rust. Tested on 800 real world requests across 16 attack scenarios with 99.8% accuracy.
Would love feedback from anyone building with agents in production. First 200 stars get featured as early supporters in the repo.
github.com/EdoardoBambini/Agent-Armor-Iaga