u/dhana231_231


Almost shut down my flower shop after 6 years. The problem wasn't the business, it was me not knowing what I didn't know.

I've been going back and forth on posting this but I think I finally want to.

I opened my shop in NE Portland in 2018. It survived COVID somehow, mostly by pivoting to delivery and sympathy arrangements, which honestly kept the lights on. By 2022 I felt like the hard part was over. We were doing weddings again, my wholesale costs had stabilized, and I had one full time employee.

Eighteen months ago I sat down to figure out why I never seemed to have any money saved despite the business feeling okay. Like I wasn't struggling exactly but I also had basically zero financial cushion. One bad month away from problems.

I thought maybe I just needed to raise prices. So I did. Margins improved a little. Still no cushion.

I thought maybe my employee costs were too high. Did the math. They weren't.

I genuinely could not figure it out and I remember telling my mom on the phone "I think I'm just bad at business" and she goes "or maybe you just can't see where it's going."

She was right and I hate that she was right.

Here's what I eventually found when I actually dug into everything properly: I was paying for cold storage insurance AND a separate contents policy that overlapped almost completely, duplicate coverage I'd set up years ago and forgotten. I had a subscription to a floral design platform I'd used for like three weeks and abandoned. I had two different payment processors because I'd switched but never turned the old one off, and both were charging monthly minimums.

None of it was intentional. I was just busy and I never had a clear view of everything at once. I was running the business out of vibes and a bank balance, and the bank balance was always technically positive so I assumed I was fine.

Once I could actually see my full picture, every account, every recurring charge, real cash flow, I found almost $900/month that was just… leaving. Quietly. For nothing.

The shop is still open. I have an actual emergency fund for the first time. I'm not posting this to brag, I'm posting it because I wasted probably three years of stress that I didn't need to have.

u/dhana231_231 — 2 days ago

My 100% green Playwright suite just let a critical UI bug slip into production, and it completely changed how I view E2E testing.

I’m still recovering from a massive post-mortem we had on Monday. I spent the last three months building a rock-solid automation suite for our core checkout flow. Every PR had to pass it, the pipeline was consistently green, and we felt invincible.

Last Thursday, the marketing team pushed a "temporary" sticky promotional banner to the mobile view. The devs merged it, my E2E suite ran, clicked the "Confirm Order" button perfectly, and gave a green light. We deployed.

Friday morning, we realized mobile conversions had flatlined for 12 hours.

Turns out, the new sticky banner had a z-index issue and physically covered the entire checkout button on smaller screens. Real users literally could not tap it. But my script didn't care. It bypassed the visual rendering layer, found the <button> node in the DOM, and fired a click event directly via JavaScript. It gave us total false confidence because it did something a human physically couldn't do.
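The mechanics are easy to reproduce outside a browser. A real tap resolves to whatever element is topmost at that point, while a synthetic click dispatched at a node ignores stacking entirely. A toy hit-test sketch of the gap (the names and geometry here are mine, not from any framework):

```typescript
// Toy model of the stacking bug: a higher z-index banner sits over the button.
type Box = { id: string; x: number; y: number; w: number; h: number; z: number };

const button: Box = { id: "confirm-order", x: 0, y: 600, w: 360, h: 48, z: 1 };
const banner: Box = { id: "promo-banner", x: 0, y: 560, w: 360, h: 100, z: 10 }; // covers the button

// What a real finger does: hit the topmost element at the tap point.
function topElementAt(px: number, py: number, boxes: Box[]): string | null {
  const hits = boxes.filter(b => px >= b.x && px < b.x + b.w && py >= b.y && py < b.y + b.h);
  if (hits.length === 0) return null;
  return hits.sort((a, b) => b.z - a.z)[0].id;
}

// What a JS-dispatched click does: target the node directly, geometry ignored.
function dispatchClick(target: Box): string {
  return target.id; // "succeeds" no matter what is rendered on top
}

// Tap in the middle of the button:
console.log(topElementAt(180, 620, [button, banner])); // the banner eats the tap
console.log(dispatchClick(button));                    // the script still reports success
```

Same coordinates, two different answers, and only one of them matches what a user on a phone actually experiences.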

It made me realize that traditional automation is fundamentally flawed: we aren't testing the user's experience, we are just testing the DOM state.

Valuable Takeaways & Resources I’m looking into:

  • Audit your framework's actionability checks: If you use Playwright, make sure you aren't overusing .click({ force: true }). For Cypress, understand how it checks for visibility. But even then, they can be tricked by CSS transforms.
  • Visual Regression is a band-aid, not a cure: We looked into tools like Percy and BackstopJS, but they just flag pixel differences. I don't want to approve 50 baseline images every time a dev changes a padding value.
  • The Philosophical Gap: We need to start thinking about how to test visual intent rather than code implementation. Has anyone found a reliable way to test what the screen actually looks like and interacts like, without relying on the hidden HTML?
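On the first bullet: a homegrown visibility check that only inspects style flags is exactly the kind of thing a CSS transform slips past. A toy illustration (this check logic is mine, deliberately naive, and not Playwright's or Cypress's actual algorithm):

```typescript
// Minimal stand-in for computed style, just enough to show the gap.
type Style = {
  display: string;
  visibility: string;
  opacity: number;
  transform?: string; // e.g. "translateX(-9999px)"
};

// The naive check many homegrown helpers use: style flags only.
function looksVisible(s: Style): boolean {
  return s.display !== "none" && s.visibility !== "hidden" && s.opacity > 0;
}

// An element shoved off-screen by a transform still passes every flag check.
const offScreenButton: Style = {
  display: "block",
  visibility: "visible",
  opacity: 1,
  transform: "translateX(-9999px)",
};

console.log(looksVisible(offScreenButton)); // true, even though no user can see it
```

Real frameworks do check bounding boxes and occlusion too, but a transformed or covered element is precisely where those heuristics get murky.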
u/dhana231_231 — 2 days ago

We got a 1-star review saying our skip button did nothing. Tested it 20 times, worked every single time. Took us 3 weeks to understand what was actually happening

Four months building our onboarding. Every screen reviewed, every flow walked through, every team member had gone through it so many times we could do it with our eyes closed. We launched feeling genuinely confident, which in hindsight is always the most dangerous feeling a founder can have

First cohort came through and completion was just slightly off. Not crash-the-meeting off, just quietly lower than our benchmark in a way that felt like a product problem. Users not connecting with the value proposition fast enough, messaging not landing, the usual suspects. We started planning copy experiments and a redesign of the second screen

Then the 1-star review came in. Skip button does nothing. We opened the app immediately and tapped that button probably fifteen times in a row. Worked every single time without issue. Responded to the review apologetically, asked for more details, got no response. Marked it as one of those unverifiable complaints that every app gets and moved on, because you have to

Two more reviews over the next week saying the exact same thing. Same screen, same button, does nothing

We finally got one of those users to tell us their device. Then the second one. Then a third person from our beta group who had mentioned it quietly weeks earlier and we had not followed up on. All Samsung, all with gesture navigation turned on in their system settings.

What was happening was that Samsung's gesture navigation zone sits at the very bottom of the screen and intercepts touch events before they reach the app layer. Our skip button was living right inside that zone. Visually it looked completely normal. Fully rendered, correct position, nothing to suggest anything was wrong. But every tap was being swallowed by the system before our app ever saw it.

Every Samsung user with gesture nav enabled was hitting a dead button on screen three, and we had zero visibility into it because our test devices were a Pixel and two iPhones and none of us had gesture navigation turned on
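The geometry of the collision is checkable in plain code: given the window height and the height of the system gesture band at the bottom, you can tell whether a bottom-anchored button overlaps it. A toy sketch (the 48px band is a made-up figure for illustration; on Android the real inset comes from the OS via WindowInsets):

```typescript
// Simple vertical-extent model: does the button overlap the gesture band?
type Rect = { top: number; bottom: number };

function overlapsGestureZone(button: Rect, windowHeight: number, gestureBandHeight: number): boolean {
  const bandTop = windowHeight - gestureBandHeight;
  return button.bottom > bandTop;
}

const windowHeight = 800;
const gestureBand = 48; // hypothetical inset; the real value is reported by the system

// A skip button anchored flush to the bottom edge sits inside the band:
const skipButton: Rect = { top: 760, bottom: 800 };
console.log(overlapsGestureZone(skipButton, windowHeight, gestureBand)); // true: taps get swallowed

// Padding the button up above the band avoids the collision:
const paddedSkipButton: Rect = { top: 700, bottom: 748 };
console.log(overlapsGestureZone(paddedSkipButton, windowHeight, gestureBand)); // false
```

The painful part is that nothing in the app's own layout looks wrong; the overlap only exists relative to an inset the OS owns, which is why it never showed up on devices without gesture nav enabled.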

The part that stayed with me was not the bug itself. Bugs happen, edge cases exist, no team catches everything. What stayed with me was the 4 month gap between the bug existing and us finding it. It existed from day one. Every Samsung user who came through our funnel in those four months hit that wall silently. Most of them never left a review. Most of them just left

u/dhana231_231 — 6 days ago

I lost my entire weekend to a flaky test that wasn't even a real bug

I am still so incredibly frustrated about this because we had a deployment push late on Friday, and right before the cutoff the CI pipeline lit up red with a critical end-to-end checkout test failing.

The release got paused, and because I am the QA lead my weekend was instantly ruined. I spent Saturday morning pulling logs and re-running the suite locally trying to reproduce it, but on my local machine it passed perfectly while in the pipeline it failed every single time.

I spent another four hours digging through the DOM structure, and do you know what the critical bug was? A front end developer had added a new promotional banner and slightly changed the z-index of a wrapper div, so the actual checkout button was completely visible and worked perfectly for a human user, but our Playwright script was targeting a specific selector that was now technically obscured in the DOM hierarchy according to the headless browser.

The feature wasn't broken and the app was fine so the only thing that was broken was my script's ability to read the code underneath the UI.

I lost my Saturday to a CSS tweak, and it genuinely made me sit back and realize how fundamentally flawed our approach is: if a human can look at the screen and click the button without an issue, our automation shouldn't be throwing a fatal error just because the underlying HTML got reorganized.
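One partial way out of this is targeting user-facing semantics, role plus accessible name, which Playwright exposes as getByRole, instead of structural selectors, since a wrapper-div reshuffle changes neither. A toy resolver over a flattened accessibility tree to show the idea (the tree shape and data here are invented for illustration):

```typescript
// Flattened accessibility tree: what the user perceives, not how the DOM nests.
type A11yNode = { role: string; name: string };

function getByRole(tree: A11yNode[], role: string, name: string): A11yNode | undefined {
  return tree.find(n => n.role === role && n.name === name);
}

// Before the banner change:
const before: A11yNode[] = [
  { role: "heading", name: "Checkout" },
  { role: "button", name: "Checkout" },
];

// After: a banner was added and the DOM hierarchy shifted,
// but the button's role and accessible name are untouched.
const after: A11yNode[] = [
  { role: "banner", name: "Promo" },
  { role: "heading", name: "Checkout" },
  { role: "button", name: "Checkout" },
];

// The semantic locator survives the reshuffle; a positional XPath would not.
console.log(getByRole(before, "button", "Checkout")?.name); // "Checkout"
console.log(getByRole(after, "button", "Checkout")?.name);  // "Checkout"
```

It doesn't solve occlusion, but it at least stops structural refactors from breaking tests for features that still work.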

We are spending more time testing our DOM structure than we are testing the actual user experience. Please tell me I am not the only one hitting a breaking point with this kind of maintenance trap.

u/dhana231_231 — 7 days ago

40 stories, 1 tester, and a completely broken automation suite: Why I stopped trying to hero through a UI migration

I was staring at a sprint board with 40 active stories while my entire automation suite flashed red, and my workload had just gone from heavy to genuinely unmanageable.

Management had just migrated our project from Velocity to Omni Studio, which made perfect sense from a product standpoint, but nobody scoped what that transition would do to an existing automation suite. Omni Studio renders components differently, so XPaths and locators that had been rock-solid for over a year suddenly resolved to nothing or pointed to the wrong elements entirely because the underlying DOM hierarchy shifted.

It wasn't just one or two flaky scripts; a massive portion of our suite now needed a complete audit and rewrite. Meanwhile the sprint didn't shrink to accommodate this, and I was staring down nearly 40 stories with zero parallel QA capacity, no one to split the triage work with, and absolutely no buffer.

Honestly, spending hours fixing broken locators just because a div changed to a span is making me want to ditch DOM-based testing entirely and pitch one of those modern AI tools that uses computer vision to just look at the screen and execute plain English test scenarios. Fighting with brittle code during a massive UI migration like this is actual torture when tools exist that just visually recognize the layout anyway.

I tried to figure out the sequencing and realized I was trapped: functional testing on those 40 stories couldn't be deprioritized because the release wouldn't move without it, but script maintenance couldn't be deferred indefinitely either, since a broken suite provides zero regression coverage.

I stopped trying to quietly absorb the extra workload and drew a hard line: manual functional coverage took absolute priority per story to keep the release moving, and automation fixes were strictly "if time allows". Most importantly, I documented the exact gap between what the suite should cover and what it currently covered, and took that straight to the lead explicitly as a compounding business risk log rather than a complaint about being overworked.
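The gap document is mechanical once the data exists: diff what the suite is supposed to cover against what currently passes, and emit each hole as one risk entry. A minimal sketch, with made-up flow names:

```typescript
// Hypothetical flows the suite should cover vs. what still passes post-migration.
const intendedCoverage = ["login", "search", "cart", "checkout", "refund"];
const currentlyPassing = ["login", "search"];

// Each uncovered flow becomes one line in the risk log handed to the lead.
function coverageGap(intended: string[], passing: string[]): string[] {
  const ok = new Set(passing);
  return intended
    .filter(flow => !ok.has(flow))
    .map(flow => `RISK: no automated regression coverage for "${flow}" flow`);
}

console.log(coverageGap(intendedCoverage, currentlyPassing));
// One entry each for cart, checkout, and refund: those flows are flying blind
```

Framing it as a list of uncovered flows rather than a pile of broken scripts is what made it legible to management as business risk instead of QA complaining.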

For three years I prided myself on being the solo tester who could handle whatever volume was thrown at me but I almost burned myself out trying to silently cover for a massive project scoping failure. I let my ego convince me I just needed to work harder but I am never doing that again.

Has anyone else survived a massive platform migration as a solo QA without completely losing your mind?

u/dhana231_231 — 9 days ago