
We analyzed 211M lines of code to understand what AI is actually doing to engineering teams. Here's what we found.
We partnered with GitClear and spent the last year talking to hundreds of engineering teams about AI adoption.
The common thread: everyone knows AI is changing things, but nobody is sure if it's helping or hurting. The data surprised us. Code duplication has increased tenfold since 2022.
For the first time, refactored code made up a smaller share of changes than duplicated code. And developers feel more productive than ever, even as their codebases get harder to maintain.
That gap between perceived productivity and actual code health is what nobody is measuring, and it's what will bite you in six months when velocity tanks for reasons nobody can explain. We put together a framework built around four measurement layers: direct AI usage, code health indicators, developer experience signals, and business outcomes. None of them tells the whole story alone; together they work like a dashboard, because you wouldn't ignore the fuel gauge just because the speedometer looks good.
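To make the dashboard idea concrete, here's a minimal sketch of what a single snapshot across the four layers might look like. The specific metric names and alert thresholds are our own illustrative picks, not the canonical list from the guide.

```python
from dataclasses import dataclass

# Illustrative sketch only: the metric names and thresholds below are example
# choices for what each layer could track, not the guide's canonical list.

@dataclass
class AIImpactSnapshot:
    # Layer 1: direct AI usage
    ai_assisted_commit_share: float     # share of commits with AI involvement
    suggestion_acceptance_rate: float   # share of AI suggestions kept

    # Layer 2: code health indicators
    duplication_rate: float             # duplicated blocks per 1k changed lines
    refactor_to_new_ratio: float        # moved/refactored lines vs. newly added lines
    young_code_churn: float             # share of changes touching <1-month-old code

    # Layer 3: developer experience signals
    dx_survey_score: float              # e.g. quarterly survey, scale of 1-5
    review_turnaround_hours: float      # PR open to first review

    # Layer 4: business outcomes
    cycle_time_days: float              # idea-to-production lead time
    change_failure_rate: float          # share of deploys that need a fix

def warnings(s: AIImpactSnapshot) -> list[str]:
    """Flag the 'speedometer looks good, fuel gauge doesn't' combinations."""
    flags = []
    if s.ai_assisted_commit_share > 0.5 and s.duplication_rate > 2.0:
        flags.append("Heavy AI usage with rising duplication: check code health.")
    if s.cycle_time_days < 3 and s.dx_survey_score < 3.0:
        flags.append("Fast delivery but sagging DX: leading indicator of a slowdown.")
    return flags
```

The point of the `warnings` function is the cross-layer read: no single field triggers an alert on its own.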
A few things from the research that stuck with us:
• 79% of code changes now touch code written less than a month ago (up from 70% in 2020). Fast iteration or fast rework? You need to know which; a rough way to check your own repo is sketched after this list.
• Developer experience metrics are leading indicators, typically months ahead of delivery impact. By the time velocity drops, the warning signs were already there.
• One engineering manager put it well: "My developers are flying through tickets, but they can't explain how their code works." That's a developer experience problem that becomes a velocity problem fast.
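On that first bullet: before doing a proper line-level analysis, you can get a directional read on your own repo with a coarse file-level proxy scripted against git history. This is a rough sketch of our own, not GitClear's methodology, and it won't reproduce the 79% line-level figure.

```python
import subprocess
from datetime import datetime, timedelta, timezone

# Coarse, file-level proxy for "changes touching recently written code".
# GitClear measures this at the line level across full diff history; here we
# only ask, for each file touched by a recent commit, whether that file had
# already changed within the previous 30 days. Treat the output as
# directional, not as a number comparable to the report's 79%.

def git(*args: str) -> str:
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout

def recent_rework_share(days_back: int = 90, young_days: int = 30) -> float:
    since = (datetime.now(timezone.utc) - timedelta(days=days_back)).strftime("%Y-%m-%d")
    commits = git("log", f"--since={since}", "--pretty=%H %ct").splitlines()

    young, total = 0, 0
    for entry in commits:
        sha, ts = entry.split()
        commit_time = int(ts)
        files = [f for f in git("show", "--name-only", "--pretty=format:", sha).splitlines() if f]
        for path in files:
            try:
                # Timestamp of the last commit that touched this file *before* this one.
                prev = git("log", "-1", "--pretty=%ct", f"{sha}^", "--", path).strip()
            except subprocess.CalledProcessError:
                continue  # root commit has no parent; skip
            if prev:
                total += 1
                if commit_time - int(prev) < young_days * 86400:
                    young += 1
    return young / total if total else 0.0

if __name__ == "__main__":
    share = recent_rework_share()
    print(f"{share:.0%} of recent file touches hit files already changed in the prior 30 days")
```

It spawns one git subprocess per touched file, so it's slow on large repos; it's a starting point for your own analysis, not tooling.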
We wrote up the full frameworks and playbooks in a free guide: "Quantifying the Impact of AI." If your org is trying to justify AI spend or figure out where it's actually creating value (vs. just feeling like it is), it might be worth a read.