
The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building in that gap.
Most AI governance work clusters at two ends: policy frameworks at the top, and model-level evaluation at the bottom. Between them sits runtime enforcement, governance applied while a decision is actually executing, and almost nobody is building there.
By runtime enforcement, we mean the concrete mechanics: how you bound an agent's authority inside a live decision, what the escalation path looks like when the agent hits its limits, how the decision gets recorded in a form that lets a reviewer reconstruct why the outcome happened, and how a human reviewer overrides it without tearing up the audit trail.
These questions are not answered by policies or model evals. They get answered by something sitting in the execution path.
We are co-authoring *Enterprise Architecture in an Agentic World* with Manning and building MIDAS as the open-source counterpart to the book's runtime governance ideas. It treats decisions as first-class objects with explicit authority boundaries, produces audit envelopes that capture the full decision context, and handles escalation and human-in-the-loop review as part of the runtime rather than bolted on afterward. The premise is that governance needs to happen inside decision execution, not only around it.
One of us teaches AI and AI governance at Oxford, and the lack of concrete codebases for newcomers to engage with is a real gap. An open-source project with real design decisions and a live issue tracker is one of the better ways to learn this material, arguably better than most courses, because nothing in a course survives contact with questions like: “What happens when a reviewer overrides an agent's decision and the policy says they should not be allowed to?”
A few questions we think are worth discussing more openly in this space:
- Where does runtime enforcement stop being governance and start being just "controls"?
- How do you audit an autonomous decision in a way that is genuinely useful to a reviewer six months later, rather than just producing log noise?
- What is the right relationship between policy (what should happen) and authority (what a specific agent is permitted to do in a specific context)?
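The last question can be made concrete with a small sketch: a policy check (an organisation-wide rule about what should happen) and an authority check (the narrower bound granted to one agent in one context) evaluated separately, producing different dispositions when they fail. The rules, limits, and function names here are invented for illustration and do not reflect how MIDAS actually models either concept.

```go
package main

import "fmt"

// Request is a hypothetical agent action under consideration.
type Request struct {
	Agent  string
	Action string
	Amount float64
}

// policyAllows answers "should this happen at all?", regardless of agent.
func policyAllows(r Request) bool {
	return r.Amount <= 10000 // illustrative rule: no single action over 10k
}

// authorityAllows answers "may *this* agent do it in *this* context?".
func authorityAllows(r Request) bool {
	limits := map[string]float64{"junior-agent": 500, "senior-agent": 5000}
	return r.Amount <= limits[r.Agent] // unknown agents get a zero limit
}

// decide runs both checks; each failure mode gets its own disposition:
// a policy breach is a hard rejection, an authority breach escalates.
func decide(r Request) string {
	switch {
	case !policyAllows(r):
		return "rejected by policy"
	case !authorityAllows(r):
		return "escalated: outside this agent's authority"
	default:
		return "executed"
	}
}

func main() {
	// prints "escalated: outside this agent's authority"
	fmt.Println(decide(Request{"junior-agent", "refund", 2000}))
}
```

Keeping the two checks distinct is what lets the runtime escalate an over-reach by one agent without treating it as a violation of organisation-wide policy.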
The project is MIDAS, Apache-licensed, written in Go, at github.com/accept-io/midas.
Our first external contributor has just picked up the Authority Graph work, which is the runtime artefact that makes authority boundaries inspectable. Adjacent areas are open for contribution too, including observability, run linkage, simulation, eventing, an OPA-backed policy evaluator, and Explorer admin on the existing Local IAM backend. The issues are written up with enough context to be picked up without long onboarding.
We would love to hear from you whether you are an expert in the field or newer to it. Contributions, questions, critique, and discussion are all very welcome.