
12 OSS projects solving the agent-credential problem in different ways. here's the map
i kept getting asked "what should i use to handle API keys in my agents?" for a long time my answer was ".env file, hope nothing happens". that stopped feeling fine once the agent was running on a remote box, calling tools that shell out, and executing LLM-suggested actions inside the same process that holds my keys.
went looking. there are now around twelve OSS projects in this category and they disagree about what the threat model actually is. this is the map.
family one, sidecar proxy. agent makes a normal HTTP call, a local proxy injects the credential at the network layer, agent process never holds the raw secret. four projects worth knowing.
Infisical/agent-vault is the most polished. MIT, Go, container-isolation mode, TypeScript SDK for sandboxed agents. https://github.com/Infisical/agent-vault
onecli is the heavier one. Apache-2, Rust gateway plus a Next dashboard, needs Postgres and Docker but you get a real UI. https://github.com/onecli/onecli
authsome is the lighter option. MIT, Python, pip install. 30+ providers preconfigured, ships an agentskills.io skill. https://github.com/manojbajaj95/authsome (disclaimer: i maintain this)
clawshell is the narrow version, LLM provider calls only (OpenAI, Anthropic, OpenRouter), adds DLP regex scanning and IMAP allowlist. Apache-2, Rust, needs sudo. https://github.com/clawshell/clawshell
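the core move in all four is the same and fits in a few lines. here's a minimal sketch of the injection step (the credential map, header names, and key values are all illustrative, not any of these projects' actual config): the agent's outbound request arrives at the proxy keyless, and the proxy adds the real header only for allowlisted hosts.

```python
# sketch of sidecar-proxy credential injection. CREDENTIALS and the
# header formats are hypothetical, not any project's real config.
CREDENTIALS = {
    "api.openai.com": ("Authorization", "Bearer sk-..."),
    "api.anthropic.com": ("x-api-key", "sk-ant-..."),
}

def inject_credential(host: str, headers: dict) -> dict:
    """Return a copy of the headers with the real credential added,
    or refuse if the host isn't allowlisted. The agent process never
    sees the values in CREDENTIALS."""
    if host not in CREDENTIALS:
        raise PermissionError(f"no credential configured for {host}")
    name, value = CREDENTIALS[host]
    out = dict(headers)             # copy; never mutate the agent's view
    out.pop("Authorization", None)  # drop any placeholder the agent sent
    out[name] = value
    return out
```

the interesting part is everything around this: where CREDENTIALS lives, how the allowlist is configured, and what happens under concurrent requests. that's where the four projects diverge.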
family two, local broker without a proxy. credential sits in a local vault, agent asks for it via CLI or MCP, no HTTP interception.
asimons81/hermes-vault. Python CLI plus an MCP server, policy.yaml gating, OAuth PKCE for the big providers. originally Hermes-flavored but works generally. https://github.com/asimons81/hermes-vault
botiverse/agent-vault. interesting one. not a proxy, a file I/O shim. the agent reads files and sees agent-vault:key placeholders, writes get rehydrated to real values, sensitive commands are TTY-gated so prompt injection physically cannot trigger them. Apache-2, Node. https://github.com/botiverse/agent-vault
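the redact-on-read, rehydrate-on-write loop is easy to picture. a sketch of the idea, using the `agent-vault:key` placeholder format from their README; the function names and SECRETS map are my own illustration, not their API:

```python
# sketch of a file I/O shim: reads show placeholders, writes restore
# real values. SECRETS and the function names are hypothetical.
SECRETS = {"OPENAI_KEY": "sk-real-value"}

def redact(text: str) -> str:
    """What the agent sees when it reads a file: secrets become placeholders."""
    for name, value in SECRETS.items():
        text = text.replace(value, f"agent-vault:{name}")
    return text

def rehydrate(text: str) -> str:
    """Applied when the agent writes a file: placeholders become real values.
    The LLM only ever handles the placeholder string, never the secret."""
    for name, value in SECRETS.items():
        text = text.replace(f"agent-vault:{name}", value)
    return text
```

the upshot: even a fully prompt-injected agent can only move placeholders around, because the real values exist on the far side of the shim.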
family three, identity not storage. agent gets its own cryptographic identity and signs requests with its own key. no bearer tokens to leak.
dickhardt/AAuth. IETF draft for per-instance agent identity using HTTP Message Signatures. reference implementations in TS, Python, Java. https://github.com/dickhardt/AAuth
better-auth/agent-auth. MIT, TypeScript. implementation of the Agent Auth Protocol as a Better Auth plugin plus SDK plus CLI. https://github.com/better-auth/agent-auth
opena2a. Apache-2, three pieces. AIM for identity and audit, HackMyAgent for security scanning, Secretless AI for keeping keys out of IDEs. https://www.opena2a.org
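to make the identity approach concrete, here's a sketch of the signature-base construction from HTTP Message Signatures (RFC 9421), which is what AAuth builds on. the derived-component names follow the RFC; everything else is illustrative, and hmac-sha256 stands in for the asymmetric keys a real per-instance identity would use:

```python
import base64
import hashlib
import hmac

# sketch of the RFC 9421 signature base: the canonical string an agent
# signs instead of attaching a bearer token. keyid and key are hypothetical.
def signature_base(method: str, authority: str, path: str,
                   created: int, keyid: str) -> str:
    params = f'("@method" "@authority" "@path");created={created};keyid="{keyid}"'
    return "\n".join([
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ])

def sign(base: str, key: bytes) -> str:
    """Sign the base. A captured signature is bound to one request at one
    time, so there's no reusable bearer token to leak."""
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()
```

the point of the construction: the signature covers the method, host, path, and a creation timestamp, so replaying it against a different endpoint or later in time fails verification.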
where they disagree
proxy people think the agent should never hold a secret because the runtime is hostile: prompt injection, hijacked tools, and malicious dependencies can all read os.environ. protocol people think tokens you can copy are tokens you will leak, so the agent should have its own cryptographic identity instead.
file-layer people think the real attack surface is the LLM provider, not the agent, so the right place to redact is at file I/O.
they're all somewhat right. the boring answer is probably "proxy now, identity later, file hygiene at the edges".
what i actually want from this thread
if you're still on .env, do you have a threat model where that's fine, or are you in the "haven't gotten around to it" bucket like i was?
anyone using any of the projects above in production for more than three months? would love a real reliability report. curious how the proxy approach holds up under parallel subagents and streaming responses, and whether it has actually held up security-wise.
if you maintain one of these and i mischaracterized it, reply, i will edit.