Does AI behavior reset too easily across runtimes?
One pattern I keep seeing with AI agents:
You finally get an agent's behavior dialed in:
- boundaries
- approvals
- dos/don'ts
- escalation behavior
Then the context or runtime changes and you end up re-teaching everything.
Not just annoying. Potentially risky once agents start touching real systems and taking irreversible actions.
Feels like there's a missing portability layer for behavioral expectations across tools/runtimes.
Curious whether people think this eventually gets solved through:
- prompts
- runtime semantics
- MCP-style layers
- policy artifacts
- something else entirely
Or whether this is just the cost of building with agents right now.
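To make the "policy artifact" option concrete: a minimal sketch of what a portable behavioral spec could look like, serialized once and rendered into whatever each runtime accepts. Everything here (the `AgentPolicy` name, its fields, the prompt rendering) is hypothetical, not an existing standard:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical schema -- field names are illustrative, not from any spec.
@dataclass
class AgentPolicy:
    boundaries: list = field(default_factory=list)   # hard "never do" limits
    approvals: list = field(default_factory=list)    # actions needing human sign-off
    escalation: str = "pause_and_ask"                # behavior when uncertain

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_json(cls, blob: str) -> "AgentPolicy":
        return cls(**json.loads(blob))

    def render_system_prompt(self) -> str:
        # A prompt rendering is the lowest common denominator; a richer
        # runtime could instead map these fields to native guardrails.
        lines = ["Behavioral policy:"]
        lines += [f"- Never: {b}" for b in self.boundaries]
        lines += [f"- Require approval before: {a}" for a in self.approvals]
        lines.append(f"- When uncertain: {self.escalation}")
        return "\n".join(lines)

policy = AgentPolicy(
    boundaries=["delete production data"],
    approvals=["sending external emails"],
)

# Round-trips unchanged, so the same artifact can travel across runtimes
# instead of being re-taught in each one.
restored = AgentPolicy.from_json(policy.to_json())
assert restored == policy
```

The point isn't the schema itself; it's that the artifact lives outside any single runtime, so switching tools means re-rendering, not re-teaching.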