u/x_philomath_x

I was a QA engineer pressing retry 40 times a day so I built a tool to make sure nobody else has to

Hey guys, I was a QA engineer for years, and I want to talk about the thing that finally made me go off and build Drizz.

We built Drizz after watching our QA team collapse under the weight of re-running the same tests every single sprint. Like clockwork, every two weeks, same pain.

It got to the point where 20% of sprint time was just gone: not on features, not on actual testing, just babysitting a pipeline that nobody trusted anymore.

The worst part was how quietly it broke everything. Devs started ignoring red builds, QA started manually re-running stuff "just to be safe", and releases started slowing down for no visible reason, just this invisible drag that nobody could point to.

We dug into why, and it kept coming back to the same two things:

- selectors breaking the moment a developer touched anything in the UI; XPath and element IDs just aren't built for how fast mobile apps actually change

- fixed timeouts that were either too slow and made the suite painful to run, or too fast and still missed async loads half the time
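For anyone who hasn't hit the timeout problem yet, the difference is easy to show in plain Python. This is a generic sketch, not Drizz internals, and `wait_until` is a made-up helper name: a fixed `time.sleep(5)` always costs the full five seconds and still fails if the load takes six, while a polling wait returns the moment the condition actually holds.

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    Returns True as soon as the condition holds, False on timeout,
    instead of blindly sleeping a fixed amount and hoping.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one last check at the deadline

# Simulate an async load that finishes after ~0.3 s
start = time.monotonic()
loaded_at = start + 0.3
assert wait_until(lambda: time.monotonic() >= loaded_at, timeout=2.0)

# The wait returned shortly after 0.3 s, not after the full 2 s timeout
elapsed = time.monotonic() - start
assert elapsed < 1.5
```

Frameworks like Selenium and Appium ship this idea as explicit waits (`WebDriverWait` plus expected conditions); the flaky suites are usually the ones that skipped those in favor of hardcoded sleeps.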

So instead of trying to patch those problems, we went a level deeper and removed them entirely.

Drizz reads the screen visually, the same way a human tester actually looks at an app. It doesn't care if an ID changed or a button moved 10px; it sees what's on screen and acts on that.

Some things we saw after teams started using it:

- flakiness dropped to around 5%; most tools and frameworks sit somewhere between 8% and 15% in real production environments

- CI execution success rate climbed to 97%+

- writing tests got roughly 10x faster compared to Appium, which matters a lot because if automation is slower than manual testing, people just stop writing tests

- teams got back around 20% of sprint time just from not chasing ghost failures anymore

It also handles some stuff automatically that used to be a constant headache:

- popup and permission dialogs are handled without any extra steps

- works across Android and iOS from a single shared suite

- self-heals when the UI shifts mid-run instead of just dying

- caches repeated steps so execution gets faster over time
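On the step-caching point, I won't pretend to describe Drizz internals here, but the general idea is plain memoization: cache the result of an expensive, repeatable step so later runs skip the work. A toy sketch, where `resolve_element` is a hypothetical expensive visual lookup:

```python
from functools import lru_cache

CALLS = 0  # counts how many real (uncached) lookups happened

@lru_cache(maxsize=None)
def resolve_element(screen: str, label: str) -> str:
    """Pretend this is an expensive visual lookup of a UI element.

    With the cache, repeating the same step on the same screen is
    answered instantly instead of re-doing the whole lookup.
    """
    global CALLS
    CALLS += 1
    return f"{screen}:{label}"  # stand-in for element coordinates

# The same step repeated across runs only does the expensive work once
for _ in range(3):
    resolve_element("login", "Submit")
assert CALLS == 1
```

The real trick in any tool doing this is cache invalidation: knowing when the screen changed enough that the cached answer is stale, which is exactly where the self-healing part has to kick in.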

We are genuinely not trying to say other tools are bad, because they are not; they just solve different problems:

- BrowserStack is still the best if you need real device coverage across thousands of combinations

- Sauce Labs has really strong analytics and reporting for larger enterprise teams

- Perfecto is a solid choice if you are in a regulated industry that needs controlled environments

- Kobiton is great if your team mixes manual exploratory testing with automation

- HeadSpin is the one to look at if performance instability is what is causing your flakiness

But if your situation looks anything like ours did (a pipeline nobody trusts, QA re-running everything manually, devs tuning out red builds), that is exactly what we were trying to fix.

Would love to hear from anyone who has been through this: what did you try, what actually worked, and what made it worse?

u/x_philomath_x — 16 hours ago