r/aisecurity
How should AI coding agents be contained before tool calls execute?
AI coding agents are starting to do more than suggest code: they can run shell commands, read local files, call tools/MCP servers, and modify config using the user’s permissions.
From a security point of view, I’m trying to think through where containment should happen. The risky part seems to be an unsafe action executing before the human notices, not just bad advice.
For people working with coding agents:
What actions would you block by default?
Examples I’m thinking about:
- destructive shell commands
- access to secrets or SSH keys
- modifying security-sensitive config
- network calls to unknown destinations
- installing packages or running downloaded scripts
- MCP/tool calls with broad permissions
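To make the question concrete, here is a minimal sketch of what a pre-execution policy gate for the categories above might look like. Everything here is hypothetical: the `check_call` function, the category names, and the deny patterns are illustrative, not the API of any real agent framework, and a pattern-based deny-list like this is exactly the kind of design whose false positives I'm asking about.

```python
import re

# Hypothetical pre-execution gate: inspect a proposed tool call before it runs.
# Categories mirror the list above; patterns are illustrative, not exhaustive.
DENY_PATTERNS = {
    "destructive_shell": re.compile(r"\brm\s+-rf\b|\bmkfs\b|\bdd\s+if="),
    "secret_access": re.compile(r"\.ssh/|\.aws/credentials|\.env\b"),
    "pkg_install": re.compile(r"\bpip install\b|\bnpm install\b|curl .*\|\s*(ba)?sh"),
}

def check_call(tool: str, argument: str) -> tuple[bool, str]:
    """Return (allowed, reason). Block on any pattern match; default-allow otherwise."""
    for category, pattern in DENY_PATTERNS.items():
        if pattern.search(argument):
            return False, f"blocked: {category}"
    return True, "allowed"

print(check_call("shell", "rm -rf /tmp/build"))  # → (False, 'blocked: destructive_shell')
print(check_call("shell", "ls -la src/"))        # → (True, 'allowed')
```

Even this toy version shows the tension: default-deny on patterns is easy to bypass (base64, indirection through a script), while anything stricter starts blocking legitimate work.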
Also curious:
What false positives would make this unusable?
Is local pre-execution enforcement the right layer, or should this be handled by sandboxing, identity/permissions, audit logs, rollback/snapshots, or something else?
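On the rollback/snapshots option: one cheap version, assuming the agent works inside a git repository, is to checkpoint the working tree before each mutating tool call so a bad edit can be undone even when the pre-execution check misses it. Function names here are hypothetical sketches, not a real agent API.

```python
import subprocess

# Illustrative rollback layer: snapshot the tree with git before a mutating
# tool call. Assumes the agent's working directory is a git repository.

def checkpoint(label: str) -> None:
    """Record the current working tree as a commit before a risky action."""
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(
        ["git", "commit", "--allow-empty", "-m", f"agent-checkpoint: {label}"],
        check=True,
    )

def rollback() -> None:
    """Discard changes made since the last checkpoint."""
    subprocess.run(["git", "reset", "--hard", "HEAD"], check=True)
```

This only covers file edits, of course; it does nothing for exfiltrated secrets or outbound network calls, which is part of why I suspect no single layer is sufficient.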
u/Gary_AIAGENTLENS — 4 days ago