Armorer Guard Learning Loop: local live feedback for AI-agent security
We just shipped a Rust-native learning overlay for Armorer Guard.
The idea: a scanner should be able to adapt from local feedback immediately, without silently mutating model weights or uploading prompts to a cloud service.
What changed:
- `feedback-record` / `feedback-export` / `feedback-stats` CLI modes
- stable scan IDs so teams can review findings without storing raw prompts
- local allow / block / review exemplars stored outside the repo
- no suppression, regardless of feedback, for credential findings, dangerous tool calls, or credential-disclosure policy reasons
- reviewed export path for later offline retraining
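To make the stable-scan-ID point concrete, here is a minimal sketch of deriving an ID from finding metadata alone, so reviewers can reference a finding without the raw prompt ever being stored. All names and fields are illustrative, not Armorer Guard's actual schema, and a real implementation would use a cryptographic hash rather than the standard-library hasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical finding metadata; field names are illustrative,
/// not Armorer Guard's actual schema. Note: no prompt text here.
#[derive(Hash)]
struct Finding<'a> {
    rule_id: &'a str,
    source_path: &'a str,
    span: (usize, usize),
}

/// Derive a stable scan ID from metadata only. DefaultHasher is
/// deterministic per toolchain but not guaranteed stable across Rust
/// releases; a production version would use e.g. SHA-256.
fn scan_id(f: &Finding) -> String {
    let mut h = DefaultHasher::new();
    f.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    let f = Finding {
        rule_id: "dangerous-tool-call",
        source_path: "agent/session.log",
        span: (120, 184),
    };
    // Same finding yields the same ID on every run, so a team can
    // discuss it by ID without exchanging the underlying prompt.
    assert_eq!(scan_id(&f), scan_id(&f));
    println!("{}", scan_id(&f));
}
```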
The claim we are trying to make precise: live local learning, no silent cloud upload, and no feedback poisoning by default.
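The "no poisoning by default" half of that claim reduces to a hard rule: local allow exemplars can tune noisy categories, but can never suppress the high-severity ones. A minimal sketch, with category labels invented for illustration (Armorer Guard's real policy taxonomy may differ):

```rust
/// Illustrative category labels, not Armorer Guard's actual taxonomy.
enum Category {
    Credential,
    DangerousToolCall,
    CredentialDisclosure,
    StyleNit,
}

/// The poisoning guard: "allow" feedback is honored only for
/// categories outside the hard-block set named in the post.
fn suppressible_by_feedback(cat: &Category) -> bool {
    !matches!(
        cat,
        Category::Credential | Category::DangerousToolCall | Category::CredentialDisclosure
    )
}

fn main() {
    // No amount of local feedback silences a credential finding...
    assert!(!suppressible_by_feedback(&Category::Credential));
    assert!(!suppressible_by_feedback(&Category::DangerousToolCall));
    // ...while low-severity noise can be tuned away locally.
    assert!(suppressible_by_feedback(&Category::StyleNit));
    println!("hard-block categories are never suppressible");
}
```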
I am curious how people here would wire this into agent runtimes. Before the tool call? Around MCP/tool results? As a CI gate for agent evals?
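One wiring we have been sketching ourselves is a pre-tool-call gate: scan the serialized call before dispatch, and fail closed on a block verdict. Everything below is hypothetical glue code under that assumption; none of the names come from Armorer Guard's actual API, and `scan_tool_call` is a stand-in for invoking the scanner binary or library.

```rust
/// Hypothetical verdict type for the gate; not Armorer Guard's API.
enum Verdict {
    Allow,
    Block(String),
}

/// Stand-in for the real scanner invocation; here we only flag an
/// AWS-access-key-shaped token to keep the sketch self-contained.
fn scan_tool_call(_tool: &str, args: &str) -> Verdict {
    if args.contains("AKIA") {
        Verdict::Block("credential-like token in arguments".to_string())
    } else {
        Verdict::Allow
    }
}

/// Gate the runtime's tool dispatch on the scan verdict: the tool
/// only executes if the scanner allowed the call.
fn dispatch(tool: &str, args: &str) -> Result<String, String> {
    match scan_tool_call(tool, args) {
        Verdict::Allow => Ok(format!("executed {tool}")),
        Verdict::Block(reason) => Err(format!("blocked {tool}: {reason}")),
    }
}

fn main() {
    assert!(dispatch("shell", "ls -la").is_ok());
    assert!(dispatch("shell", "export KEY=AKIAEXAMPLE").is_err());
}
```

The same gate could sit on the other side of the call, scanning MCP/tool results before they re-enter the agent's context; we would like to hear which placement people find catches more in practice.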