this was about 18 months ago. we were building a SaaS product for logistics companies: small team, 4 engineers, tight runway, and a co-founder who was convinced that moving fast mattered more than moving carefully. we had a launch date, investors were watching, and slowing down for proper testing felt like the wrong call at the time.
we shipped. the product worked in demos. it fell apart in production almost immediately.
the first bug was a data sync issue that corrupted records for about 12% of users, and we did not catch it for 11 days. by then 3 enterprise clients had already flagged it, and one of them had made decisions based on the corrupted data. the second issue was a billing error that charged some accounts twice on renewal; that one ran for 19 days before someone on the team noticed.
total refunds issued in the first 6 weeks were $34,000. two enterprise clients left and did not come back. one of them had been worth $1,400 a month. the cost of the QA process we skipped would have been around $8,000 in engineering time.
the part that still bothers me is that both bugs were completely detectable with basic testing. they were not edge cases or rare scenarios; they were things that would have shown up on day one of any structured QA process. we just chose not to have one because we were in a hurry.
has anyone else been through something like this and actually changed how they approach testing afterward? genuinely curious what actually stuck and what was just good intentions that faded once the next deadline pressure hit