
Six months ago, validating a backend ticket was a day's work, minimum. Today it closes in under 10 minutes. Not because the AI is magic, but because we built the right infrastructure around it.
The short version of what changed:
- Tests are stored as JSON configs in a database, not as code in a repo. No PR, no commit, no review cycle just to save a test case. (Sketch of a config after this list.)
- Manual validation happens in the terminal as usual: the AI builds the requests, runs them, and traces the responses.
- Once manual validation passes, the AI switches to MCP (Model Context Protocol) to persist the test and trigger execution through an automation microservice. Same conversation, no context switch. (A minimal sketch of the MCP side follows the config example below.)
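
For a concrete picture, here's roughly what one of these stored configs could look like. Every field name below is illustrative, not our actual schema; the real structure is covered in the linked write-up.

```json
{
  "name": "orders: reject expired coupon",
  "request": {
    "method": "POST",
    "path": "/api/v1/orders",
    "headers": { "Content-Type": "application/json" },
    "body": { "items": [{ "sku": "ABC-123", "qty": 1 }], "coupon": "EXPIRED10" }
  },
  "assert": {
    "status": 422,
    "body.error.code": "COUPON_EXPIRED"
  }
}
```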
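
And a minimal sketch of what the MCP server side could look like, assuming the official TypeScript SDK (@modelcontextprotocol/sdk). The tool names, zod schemas, and automation-service endpoints are placeholders for illustration, not our actual setup:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "test-automation", version: "0.1.0" });

// Persist a test config the AI already validated manually in the terminal.
// Hypothetical endpoint: the automation microservice stores it as a JSON row.
server.tool(
  "save_test",
  { name: z.string(), config: z.string() }, // config = serialized JSON
  async ({ name, config }) => {
    const res = await fetch("http://automation-svc.internal/tests", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name, config: JSON.parse(config) }),
    });
    const { id } = await res.json();
    return { content: [{ type: "text", text: `Saved test ${id}` }] };
  }
);

// Trigger execution of a stored test by id through the same microservice.
server.tool("run_test", { id: z.string() }, async ({ id }) => {
  const res = await fetch(`http://automation-svc.internal/tests/${id}/run`, {
    method: "POST",
  });
  const result = await res.json();
  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});

// Top-level await works because this runs as an ES module.
await server.connect(new StdioServerTransport());
```

Splitting save and run into separate tools is what makes the "replay it three weeks later" part work: the agent can persist a test without executing it, and rerun any stored test by id alone.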
The thing nobody talks about: the AI alone is useless without this infrastructure. It can read code and generate requests all day. But if it can't save a test and replay it three weeks later, you've built a demo, not a system.
The human still owns all the judgment — what scenarios matter, what's a real bug, what's worth automating. The AI just stopped eating the time that should've gone toward that.
Wrote it up in full here: https://medium.com/@dhananjaya-gowda/what-actually-changed-when-ai-entered-our-testing-workflow-e26e4f7c9d8d
Happy to answer questions about the MCP setup or how the test configs are structured.