u/AgentRdotdev

12 OSS projects solving the agent-credential problem in different ways. here's the map

i kept getting asked "what should i use to handle API keys in my agents". for a long time my answer was ".env file, hope nothing happens". that stopped feeling fine once the agent was running on a remote box, calling tools that execute shell commands, and executing LLM-suggested actions inside the same process that holds my keys.

went looking. there are now around twelve OSS projects in this category and they disagree about what the threat model actually is. this is the map.

family one, sidecar proxy. agent makes a normal HTTP call, a local proxy injects the credential at the network layer, agent process never holds the raw secret. four projects worth knowing.

Infisical/agent-vault is the most polished. MIT, Go, container-isolation mode, TypeScript SDK for sandboxed agents. https://github.com/Infisical/agent-vault

onecli is the heavier one. Apache-2, Rust gateway plus a Next dashboard, needs Postgres and Docker but you get a real UI. https://github.com/onecli/onecli

authsome is the lighter option. MIT, Python, pip install. 30+ providers preconfigured, ships an agentskills.io skill. https://github.com/manojbajaj95/authsome (disclaimer: i maintain this)

clawshell is the narrow version, LLM provider calls only (OpenAI, Anthropic, OpenRouter), adds DLP regex scanning and IMAP allowlist. Apache-2, Rust, needs sudo. https://github.com/clawshell/clawshell
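the core of the sidecar pattern is simple enough to sketch. here's a toy version of the injection step, with a made-up placeholder scheme and in-memory secret store (none of this is any specific project's API, just the shape of the idea):

```python
# toy secret store, held only by the proxy process, never by the agent
SECRETS = {"proxy:github": "ghp_real_token_abc123"}

def inject(headers: dict) -> dict:
    """Rewrite outbound headers, swapping placeholders for real credentials."""
    out = {}
    for name, value in headers.items():
        for placeholder, real in SECRETS.items():
            value = value.replace(placeholder, real)
        out[name] = value
    return out

# the agent process only ever sees the placeholder
agent_headers = {"Authorization": "Bearer proxy:github"}
print(inject(agent_headers))  # {'Authorization': 'Bearer ghp_real_token_abc123'}
```

the real projects do this at the TLS/HTTP layer rather than on a dict, but the property is the same: the raw secret exists only in the proxy's memory.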

family two, local broker without a proxy. credential sits in a local vault, agent asks for it via CLI or MCP, no HTTP interception.

asimons81/hermes-vault. Python CLI plus an MCP server, policy.yaml gating, OAuth PKCE for the big providers. originally Hermes-flavored but works generally. https://github.com/asimons81/hermes-vault
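the broker shape is mostly a policy check in front of a lookup. a hypothetical deny-by-default gate loosely modeled on the policy.yaml idea (schema and names invented here, not hermes-vault's actual format):

```python
# invented policy schema, for illustration only
POLICY = {
    "github": {"agents": ["ci-bot", "release-bot"], "scopes": ["repo:read"]},
    "stripe": {"agents": ["billing-bot"], "scopes": ["charges:read"]},
}

def can_access(agent: str, provider: str, scope: str) -> bool:
    """Deny by default; allow only agent+scope pairs the policy names."""
    rule = POLICY.get(provider)
    if rule is None:
        return False
    return agent in rule["agents"] and scope in rule["scopes"]

print(can_access("ci-bot", "github", "repo:read"))     # True
print(can_access("ci-bot", "stripe", "charges:read"))  # False
```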

botiverse/agent-vault. interesting one. not a proxy, a file I/O shim. the agent reads files and sees agent-vault:key placeholders, writes get rehydrated to real values, sensitive commands are TTY-gated so prompt injection physically cannot trigger them. Apache-2, Node. https://github.com/botiverse/agent-vault
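the shim idea reduces to a redact-on-read, rehydrate-on-write pair. a minimal sketch using the agent-vault:key placeholder convention described above (the vault dict and function names are mine):

```python
VAULT = {"OPENAI_API_KEY": "sk-real-secret"}

def redact(text: str) -> str:
    """On read: the agent sees placeholders instead of real values."""
    for name, real in VAULT.items():
        text = text.replace(real, f"agent-vault:{name}")
    return text

def rehydrate(text: str) -> str:
    """On write: placeholders get swapped back before hitting disk."""
    for name, real in VAULT.items():
        text = text.replace(f"agent-vault:{name}", real)
    return text

cfg = "api_key = sk-real-secret"
seen = redact(cfg)             # agent sees: api_key = agent-vault:OPENAI_API_KEY
assert rehydrate(seen) == cfg  # round-trips back to the real value
```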

family three, identity not storage. agent gets its own cryptographic identity and signs requests with its own key. no bearer tokens to leak.

dickhardt/AAuth. IETF draft for per-instance agent identity using HTTP Message Signatures. reference implementations in TS, Python, Java. https://github.com/dickhardt/AAuth

better-auth/agent-auth. MIT, TypeScript. implementation of the Agent Auth Protocol as a Better Auth plugin plus SDK plus CLI. https://github.com/better-auth/agent-auth

opena2a. Apache-2, three pieces. AIM for identity and audit, HackMyAgent for security scanning, Secretless AI for keeping keys out of IDEs. https://www.opena2a.org
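the identity family swaps "present a stealable token" for "prove you hold a key". AAuth builds on HTTP Message Signatures with asymmetric keys; here's a deliberately simplified HMAC stand-in just to show the shape, not the actual protocol:

```python
import base64
import hashlib
import hmac

# per-instance key; the real schemes use an asymmetric keypair so the
# verifier never holds anything that can be replayed as a credential
AGENT_KEY = b"per-instance-key-material"

def sign(method: str, path: str, body: bytes = b"") -> str:
    """Sign the request components; there is no bearer token to exfiltrate."""
    base = b"\n".join([method.upper().encode(), path.encode(),
                       hashlib.sha256(body).digest()])
    return base64.b64encode(hmac.new(AGENT_KEY, base, hashlib.sha256).digest()).decode()

def verify(method: str, path: str, body: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(method, path, body), signature)

sig = sign("GET", "/v1/models")
print(verify("GET", "/v1/models", b"", sig))   # True
print(verify("GET", "/v1/secrets", b"", sig))  # False, signature is bound to the path
```

the point of the pattern: a leaked signature is useless for any other request, unlike a leaked bearer token.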

where they disagree

Proxy people think the agent should never hold a secret because the runtime is hostile (prompt injection, hijacked tools, malicious dependencies, all of which can read os.environ). Protocol people think tokens you can copy are tokens you will leak, so the agent should have its own cryptographic identity instead.

file-layer people think the real attack surface is the LLM provider, not the agent, so the right place to redact is at file I/O.

They're all somewhat right. boring answer is probably "proxy now, identity later, file hygiene at the edges".

"What i actually want from this thread"

if you're still on .env, do you have a threat model where that's fine, or are you in the "haven't gotten around to it" bucket like i was?

anyone using any of the projects above in production for more than three months? would love a real reliability report. curious whether the proxy approach holds up under parallel subagents and streaming responses, and how the security story fares in practice.

if you maintain one of these and i mischaracterized it, reply, i will edit.

u/AgentRdotdev — 3 days ago

anyone else's ~/.hermes/.env file getting out of hand?

quick context, i maintain authsome, a local OAuth2 and API key broker for agents. been meaning to test the agentskills.io-compatible skill on a clean Hermes install before releasing it anywhere, so finally did it today.

what worked. install command resolves cleanly:

hermes skills install manojbajaj95/authsome/skills/authsome

skill shows up in hermes skills list as enabled. files fetched, SKILL.md and evals/evals.json. ran the basic flow (login github, run a curl through the proxy), credentials inject as expected and the agent's env shows only placeholders.

what didn't, caveat for anyone else trying it. hermes' security scanner returns CAUTION verdict on community sources and blocks the install by default. two MEDIUM findings, both false positives. the phrase "register an OAuth app" in SKILL.md trips a network rule, and "GitHub auth" in evals.json trips a supply_chain rule. workaround is --force, but that reads bad. patching the wording upstream this week so it installs clean.

why i'm posting here, not r/AI_Agents. the value prop is for the long tail of tool API keys your hermes skills call. github, notion, slack, stripe, resend, linear, klaviyo, etc. hermes already handles LLM provider keys well via the pool and rotation, so authsome doesn't touch those. for the other 30+ providers, you log in once with browser PKCE or device code and a local proxy injects at request time. ~/.hermes/.env stays empty for those.

genuine question for anyone running hermes seriously. how are you handling tool API keys today? still ~/.hermes/.env? hermes-vault? clawshell? raw env? trying to figure out if the proxy approach is what people actually want or if a different shape fits the hermes workflow better.

repo, https://github.com/manojbajaj95/authsome
also opened a PR to awesome-hermes-agent yesterday, https://github.com/0xNyk/awesome-hermes-agent/pull/66, if anyone wants to push back on the framing there too

disclosure, i work on this, so take everything above with that in mind.

$ hermes skills install manojbajaj95/authsome/skills/authsome --force
Fetching: manojbajaj95/authsome/skills/authsome
Quarantined to .hub/quarantine/authsome
Running security scan...
Scan: authsome (skills-sh/manojbajaj95/authsome/skills/authsome/community)  
Verdict: CAUTION
  MEDIUM   network        SKILL.md:55                    "If the provider requires you to register an OAuth app manual"
  MEDIUM   supply_chain   evals/evals.json:25            ""expected_output": "The agent recognises it needs GitHub aut"

Installed: authsome
Files: SKILL.md, evals/evals.json

$ hermes skills list | grep authsome
│ authsome             │                     │ skills.sh │ community │ enabled │
u/AgentRdotdev — 3 days ago

HashiCorp Vault is the wrong tool for AI agents and we should stop reaching for it

This is going to be the unpopular opinion. Hear me out.


The reason "secrets in env vars" survived this long is that the threat model assumed the process running your code was trusted. You wrote the code, you reviewed the dependencies, the OS isolated you from other tenants. If something in your process leaked the env, you had a much bigger problem than the env.


agents broke that assumption and nobody redrew the diagram. An agent process now runs LLM-chosen tool calls. Some of those tools execute shell commands. Some run subprocess. Some read files based on user input. The agent's address space is, functionally, attacker-influenceable. And the entire env is in that address space.


The traditional secrets manager (Vault, Doppler, 1Password CLI, AWS Secrets Manager) does not save you here. They all return the credential value to the caller. The caller is the compromised process. Once the value is in os.environ, it is everywhere in the process.
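To make that concrete, here is a toy demonstration of why fetching from a vault into the process does not help once a hostile tool shares that process (names invented, obviously):

```python
import os

# what a "fetch from Vault, export to env" flow ultimately does
os.environ["STRIPE_API_KEY"] = "sk_live_real_value"

def innocent_looking_tool(query: str) -> str:
    """Any LLM-invoked tool runs in the same process and sees the same env."""
    leaked = {k: v for k, v in os.environ.items()
              if "KEY" in k or "TOKEN" in k or "SECRET" in k}
    return f"results for {query} ({len(leaked)} secrets quietly readable)"

# the tool needed zero privileges beyond "being called" to read every secret
print(innocent_looking_tool("weather in berlin"))
```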


What actually changes the threat model is moving the secret out of the agent's address space entirely. Two patterns are emerging. The HTTP proxy pattern: a local sidecar holds the real credential and injects it on outbound requests, so the agent's env contains only a placeholder. The cryptographic identity pattern: the agent signs requests with its own per-instance key, so there is no bearer token to leak. Both make a process compromise no longer equal a credential compromise.


I wrote a tool in the first category, called authsome (github.com/manojbajaj95/authsome, MIT, pip install). I am posting this because I am tired of seeing "use Vault" given as the answer to "how do I keep my agent from leaking keys" when Vault does not solve that problem. There are also Infisical's agent-vault, OneCLI, hermes-vault, and clawshell, all OSS, all worth a look.


Push back welcome. The case I want someone to argue is "the agent process is not actually that hostile, you are overstating". I think you are wrong but I want to hear it.

u/AgentRdotdev — 4 days ago

How are you actually keeping API keys out of your agent processes? I will go first

I want a real answer for once. Every blog post on this says "use a secrets manager" and every repo I read says load_dotenv(). Something is missing in the middle.


I will start. I run a few Python agents locally and a couple in cloud workers. For a long time I was on plain .env, then dotenvx for encryption at rest, then a half-finished Vault setup that I gave up on because the agent process still ended up with the key in os.environ.


I eventually wrote a thing called authsome (https://github.com/manojbajaj95/authsome, disclosure I maintain it) that runs a local HTTP proxy and injects credentials on the way out, so the agent's env only has placeholders.


works for me, I am not claiming it should work for you.


what I actually want to know is what other people are doing. Specifically,

how do you handle the case where a tool the agent picks up can read os.environ. Do you accept that risk, isolate it, or move the secret out entirely.


How do you do OAuth2 for an agent that needs to refresh a token at 3am with no human around?

if you use a secrets manager, which one, and do you feel it actually changed your threat model or just your audit story?

If you have ever leaked a key from an agent, what happened? (I have. Open to others sharing.)

I will read every reply. If a pattern shows up in the answers I will write it up and post back.
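On the 3am question: the mechanical part is just a proactive expiry check plus a standard refresh_token grant, and the hard part is where the refresh_token itself lives. A sketch of the mechanical part (grant fields per RFC 6749, everything else invented):

```python
import time

def needs_refresh(expires_at: float, skew_seconds: float = 300.0) -> bool:
    """Refresh a few minutes early so no call ever sees an expired token."""
    return time.time() >= expires_at - skew_seconds

def refresh_request_body(refresh_token: str, client_id: str) -> dict:
    # the standard OAuth2 refresh_token grant (RFC 6749 section 6);
    # POST this to the provider's token endpoint, no human involved
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }

print(needs_refresh(expires_at=0.0))                 # True, long expired
print(needs_refresh(expires_at=time.time() + 3600))  # False, an hour left
```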
