
Your DORA metrics look great. Your systems are quietly becoming unmanageable.
AI coding tools are creating a dangerous gap between delivery speed and system comprehension — and DORA metrics are hiding it.
The pattern: deployment frequency up, change failure rate stable, MTTR looking healthy. Meanwhile nobody on the team can explain the critical path end-to-end. The dashboards are green. The operators are nervous.
The core problem isn't AI. It's that DORA measures the pipe, not whether anyone understands what's in it. AI just widened that gap dramatically: more code shipping, less of it truly owned.
A few things worth taking seriously:
- If your MTTR looks great but your team can't explain why a rollback fixed it, your systems are illegible. Ask how many people can walk through the critical path in plain language in under five minutes. If the answer is two or fewer, that's not a knowledge concentration problem — it's a succession crisis.
- Changes are also happening outside your SDLC now. Vendor consoles, IdP rules, AI agent glue that nobody wants to admit is load-bearing. DORA doesn't see any of that. The blast radius extends beyond what the dashboard covers.
- The fix isn't ditching DORA. It's refusing to let DORA carry work it wasn't built for: adding scope notes to every DORA review, requiring incident narratives for critical systems, and treating legibility as a first-class metric alongside delivery.
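Tracking legibility alongside delivery can start smaller than a new dashboard. A minimal Python sketch of the idea, where every name and the three-person threshold are illustrative assumptions, not anything prescribed by the article:

```python
from dataclasses import dataclass

@dataclass
class SystemLegibility:
    """Per-system record of who can actually explain it.

    walkthrough_capable: people who can walk the critical path
    in plain language (the five-minute test above).
    """
    name: str
    team_size: int
    walkthrough_capable: int

    @property
    def legibility_ratio(self) -> float:
        # Fraction of the team that truly owns the system.
        return self.walkthrough_capable / self.team_size

def succession_risks(systems: list[SystemLegibility], min_people: int = 3) -> list[str]:
    # Flag systems where fewer than min_people can explain the critical
    # path -- the "two or fewer" succession-crisis condition.
    return [s.name for s in systems if s.walkthrough_capable < min_people]

systems = [
    SystemLegibility("payments", team_size=6, walkthrough_capable=2),
    SystemLegibility("auth", team_size=5, walkthrough_capable=4),
]
print(succession_risks(systems))  # only "payments" falls below the threshold
```

The point isn't the code; it's that a number this crude, reviewed next to deployment frequency, makes the comprehension gap visible instead of invisible.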
Full article here: https://leaddev.com/reporting/dora-metrics-are-lying-to-you-and-ai-is-making-it-worse