Anyone else constantly re-teaching AI agents the same behavior?
You spend hours shaping an agent:
- what tools it can touch
- what it should ask before acting
- what counts as risky
- when it should stop and clarify
Eventually it mostly behaves.
Then the surface changes: new runtime, new coding tool, new MCP server, new workflow…
…and suddenly you're re-explaining the same expectations all over again.
Feels like most of this currently lives in prompts, habits, and the operator's head, so none of it survives a change of surface.
Curious how others are handling this.
Prompts? Policy files? Wrappers/hooks? MCP? Just accepting the drift?
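For the "policy files + wrappers/hooks" option, here's a minimal sketch of what I mean: a portable policy consulted by a hook before each tool call, so the same expectations travel across surfaces. Everything here (the `POLICY` keys, the `gate` function, the tool names) is hypothetical, just to make the idea concrete.

```python
# Hypothetical policy-file approach: expectations live in one portable
# structure (could be YAML/JSON on disk) instead of per-surface prompts.
POLICY = {
    "allowed": {"read_file", "run_tests"},       # safe, no confirmation
    "confirm": {"delete_file", "deploy"},        # risky, ask first
    "blocked": {"send_email"},                   # never allowed
}

def gate(tool: str) -> str:
    """Decide how a tool call is handled under the policy."""
    if tool in POLICY["blocked"]:
        return "deny"
    if tool in POLICY["confirm"]:
        return "ask"
    if tool in POLICY["allowed"]:
        return "allow"
    # Unknown tools default to "stop and clarify" rather than act.
    return "ask"

print(gate("run_tests"))   # allow
print(gate("deploy"))      # ask
print(gate("send_email"))  # deny
```

A wrapper or MCP-side hook would call `gate()` before dispatching any tool, so switching runtimes only means re-pointing the hook at the same policy file, not re-teaching the agent.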