Full disclosure: I built Ascent (career-ascent.io), an AI job discovery
platform. This is a war story from building it, not a sales pitch - the lesson applies to anyone shipping data integrations.
The bug:
I built Ascent with 228 direct ATS integrations to escape the data quality issues of scraped job boards.
Greenhouse was one of those integrations - they expose a clean public API.
For 6 weeks, every Greenhouse job on my platform showed "no qualifications listed."
The data was there. My parser just wasn't reading it.
A user emailed asking why a $250k AI engineering role had "no listed requirements."
That single email exposed the entire bug.
Why it happened:
Greenhouse formats qualifications inside an HTML structure nested in a `content` field - not in the structured fields most ATS APIs expose.
My parser handled structured fields beautifully. For Greenhouse jobs, it returned an empty array. Every single one. No errors. No alerts.
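To make the failure mode concrete, here is a minimal sketch of what the fix looked like conceptually - not Ascent's actual code. It assumes the Greenhouse Job Board API shape (the job description arrives as HTML-escaped HTML in a `content` field); the heading text and class names are illustrative.

```python
import html
from html.parser import HTMLParser

class QualificationExtractor(HTMLParser):
    """Collect text from <li> items that follow a 'Qualifications' heading."""
    def __init__(self):
        super().__init__()
        self.in_section = False
        self.in_item = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and self.in_section:
            self.in_item = True
            self.items.append("")

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

    def handle_data(self, data):
        if data.strip().lower().startswith("qualifications"):
            self.in_section = True
        elif self.in_item:
            self.items[-1] += data

def extract_qualifications(job: dict) -> list:
    # The bug: reading only structured fields, which Greenhouse omits.
    # The data actually lives in `content`, HTML-escaped in the raw response.
    raw = html.unescape(job.get("content", ""))
    parser = QualificationExtractor()
    parser.feed(raw)
    return [item.strip() for item in parser.items if item.strip()]

job = {
    "content": "&lt;h3&gt;Qualifications&lt;/h3&gt;&lt;ul&gt;"
               "&lt;li&gt;5+ years Python&lt;/li&gt;"
               "&lt;li&gt;LLM experience&lt;/li&gt;&lt;/ul&gt;"
}
print(extract_qualifications(job))  # ['5+ years Python', 'LLM experience']
```

A parser built only for structured fields never touches `content`, so it returns `[]` on every Greenhouse job without raising anything - exactly the silent-empty shape the bug had.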
What I took from it:
- Silent data corruption is the most expensive bug class. No error logs, no exception traces. Your monitoring is blind. The bug looks like working software.
- The fix wasn't technical. It was epistemic. I trusted the documentation over the actual API response. New rule: schema in docs ≠ schema in production. Every parser test runs against a captured real response now.
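One cheap guardrail that would have caught this in week one: treat "nonempty input, empty output" as a signal instead of a valid result. A minimal sketch of that invariant check - my framing, not Ascent's actual code; the field names and the `suspicious` sink are illustrative:

```python
def parse_with_guard(job: dict, parser, suspicious: list) -> list:
    """Run a field parser, but record jobs where it produced nothing
    even though the raw `content` field was nonempty - the silent
    failure mode that otherwise looks like working software."""
    result = parser(job)
    if not result and job.get("content"):
        # Feed this into a metric or alert, not /dev/null.
        suspicious.append(job.get("id"))
    return result

# Stand-in for the buggy parser: reads only structured fields,
# which Greenhouse omits.
structured_only = lambda job: job.get("qualifications", [])

suspicious = []
job = {"id": "gh-123", "content": "<h3>Qualifications</h3><ul><li>Python</li></ul>"}
parse_with_guard(job, structured_only, suspicious)
print(suspicious)  # ['gh-123'] - the silent empty is now visible
```

Pair this with regression tests that feed captured real API responses (checked-in fixtures, not hand-written mocks) into the parser, and the docs-versus-production gap surfaces the moment a provider's actual payload diverges from its schema.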
The bugs that hurt most don't crash production. They quietly corrupt the trust users place in your output.
What's the most expensive silent bug you've ever shipped?