u/Strange-Cod5862

Would you trust AI-based locator resolution in Playwright tests?
▲ 2 r/Playwright+1 crossposts

Been experimenting with an open-source tool built around Playwright that resolves elements from plain English instructions like:

await t.act("Click login button")

The idea itself isn't new (tools like Zerostep and Midscene already exist), but in our experience many of them felt slow and sluggish in regular automation workflows.

Main goal here was reducing locator maintenance and speeding up automation setup while keeping execution lightweight.

It also caches resolved selectors, so repeated runs don’t keep hitting the LLM.
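To make the caching idea concrete, here is a minimal sketch of how such a layer could work. This is not the tool's actual API; the names (SelectorCache, resolveWithLlm) and the data-testid selector are made up for illustration.

```javascript
// Hypothetical sketch: resolve a plain-English instruction once (via an
// LLM-backed resolver), then serve repeated runs from a cache so they
// never hit the LLM again.
class SelectorCache {
  constructor(resolver) {
    this.resolver = resolver; // the expensive resolution step (LLM call)
    this.cache = new Map();   // instruction -> resolved selector
    this.misses = 0;          // how often the resolver actually ran
  }

  async resolve(instruction) {
    if (this.cache.has(instruction)) {
      return this.cache.get(instruction); // cache hit: no LLM round-trip
    }
    this.misses++;
    const selector = await this.resolver(instruction);
    this.cache.set(instruction, selector);
    return selector;
  }
}

// Stand-in for the real LLM-backed resolver.
async function resolveWithLlm(instruction) {
  return instruction === "Click login button"
    ? 'button[data-testid="login"]'
    : "body";
}

async function main() {
  const cache = new SelectorCache(resolveWithLlm);
  const first = await cache.resolve("Click login button");  // resolver runs
  const second = await cache.resolve("Click login button"); // served from cache
  console.log(first, second, cache.misses);
}
main();
```

The interesting follow-up question is cache invalidation: when the UI changes and the cached selector stops matching, the tool presumably has to detect the failure and fall back to re-resolving.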

GitHub: QorTest GitHub

If anyone is interested, I'd genuinely appreciate you trying it out and sharing honest feedback, issues, or limitations you hit.

u/Strange-Cod5862 — 13 hours ago

I recently came across Midscene.js and it looks interesting, especially the idea of reducing dependency on locators.

I have a few questions before investing more time in it.

  1. Is anyone here actually using it in real projects?
  2. Has it actually reduced maintenance or flakiness?
  3. Any limitations you faced?
  4. Are there better alternatives you’re using for similar problems?

Would love to hear real experiences.

u/Strange-Cod5862 — 10 days ago

I have worked with Selenium for years and recently started using Playwright, and I have also been exploring newer AI-based tools like Zerostep.

On paper everything sounds impressive, but in real projects things feel very different from the demos.

Recently I came across tools like Testim and Mabl. They claim faster test creation, reduced maintenance, and even autonomous failure analysis, but I have also read that many "AI tools" are still thin wrappers and need heavy cleanup and debugging in real use.

What I really care about as a QA:

  • Writing stable, maintainable test cases (like an experienced QA, not generated scripts)
  • Handling frequent UI changes without constant fixes
  • Reducing flaky failures in CI/CD
  • Supporting real business logic + edge cases
  • Not increasing hidden maintenance effort

From my experience so far:

  • Selenium = stable but high maintenance
  • Playwright = better reliability but still needs strong framework discipline
  • AI tools = promising, but not sure how they hold up long-term in production
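For what it's worth, the "framework discipline" point usually comes down to centralizing selectors so a UI change is a one-line fix instead of a hunt across test files. A page-object-style sketch (the names and selectors here are hypothetical, not any tool's API):

```javascript
// Hypothetical page-object-style registry: every selector lives in one
// module, so tests never hard-code locators and a markup change is
// corrected in exactly one place.
const loginPage = {
  username: '[data-testid="username"]',
  password: '[data-testid="password"]',
  submit: 'button[data-testid="login"]',
};

// Look up a registered selector; failing loudly here beats a silent
// timeout in CI when someone renames an element.
function selectorFor(page, element) {
  const sel = page[element];
  if (!sel) throw new Error(`No selector registered for "${element}"`);
  return sel;
}

console.log(selectorFor(loginPage, "submit"));
```

In a Playwright test this would back something like `page.locator(selectorFor(loginPage, "submit")).click()`, so a flaky-selector fix never fans out across the suite.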

Would love honest feedback from people actually using these:

  • Which tool are you using in production today?
  • Did Playwright really reduce flakiness?
  • Has any AI tool actually reduced maintenance (not just demos)?
  • Which tool helps you write high-quality test cases like a real QA engineer?

Looking for real-world experiences, not marketing claims.

u/Strange-Cod5862 — 14 days ago
▲ 37 r/softwaretesting+1 crossposts

I have spent around 10 years in QA across automation, manual testing, team leadership, release coordination, and recently even UI/UX collaboration.

One thing I've noticed: QA careers can easily become repetitive if we don't intentionally expand our skill set.

For me, learning beyond pure testing (automation + design collaboration + release ownership) opened more opportunities than just learning another automation tool.

Curious to hear from others in QA: what's one career move or skill investment that gave you the biggest return?

  • Moving into automation?
  • Learning API/performance/security testing?
  • Leadership/management?
  • Product/design understanding?
  • Something else entirely?

Would love to hear real experiences from people at different stages of their QA careers.
