r/Playwright

Starting today, I declare scraping free again.
▲ 86 r/Playwright+9 crossposts

I got tired of anti-bot systems constantly breaking my Playwright AI agent, so I built Invisible_Playwright: an open-source, MIT-licensed Playwright and Firefox fork patched at the C++ level.

Instead of reusing the same noisy automation fingerprint, Invisible_Playwright generates a different but internally consistent browser fingerprint for each session. The goal is to remove the Playwright automation signals while keeping the browser environment coherent and reproducible.
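As a toy illustration of "different but internally consistent" (this is not the project's C++ implementation, just a sketch of the idea in TypeScript): pick one complete profile per session, seeded by a session id, so user agent, platform, and hardware values always agree with each other.

```typescript
// Hypothetical sketch of per-session fingerprint coherence (NOT the
// actual Invisible_Playwright code): choose one complete profile per
// session so userAgent, platform, and hardware values never contradict.
interface FingerprintProfile {
  userAgent: string;
  platform: string;
  hardwareConcurrency: number;
  deviceMemory: number;
}

const PROFILES: FingerprintProfile[] = [
  {
    userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0",
    platform: "Win32",
    hardwareConcurrency: 8,
    deviceMemory: 8,
  },
  {
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:128.0) Gecko/20100101 Firefox/128.0",
    platform: "MacIntel",
    hardwareConcurrency: 10,
    deviceMemory: 16,
  },
];

// Seeded selection keeps the fingerprint reproducible for a session id:
// the same id always yields the same coherent profile.
function profileForSession(sessionId: string): FingerprintProfile {
  let hash = 0;
  for (const ch of sessionId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return PROFILES[hash % PROFILES.length];
}
```

Randomizing each value independently is what produces the inconsistencies that scanners like PixelScan flag; picking the whole profile atomically avoids that class of leak.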

| Category | Invisible_Playwright result |
| --- | --- |
| Fingerprint generation | ✅ Different, coherent per-session fingerprint |
| WebRTC | ✅ Pass — no public IP leak |
| PixelScan | ✅ Pass — no inconsistencies |
| CreepJS | ✅ Pass — 0 lies |
| SannySoft | ✅ Pass — all green |
| BrowserLeaks WebRTC | ✅ Pass — no public IP leak |
| reCAPTCHA v3 | ✅ Pass — 0.90 |
| Fingerprint Pro | ✅ Pass — bot=false, tampering=false |
| Cloudflare / Turnstile | ✅ Pass |
| hCaptcha | ✅ Pass |
| DataDome-style checks | ✅ Pass |
| Kasada-style checks | ✅ Pass |
| Akamai-style checks | ✅ Pass |
| Imperva-style checks | ✅ Pass |
| HUMAN / PerimeterX-style checks | ✅ Pass |
| Arkose-style checks | ✅ Pass |

Repo: https://github.com/feder-cr/invisible_playwright

u/bolaretyr — 2 hours ago
▲ 458 r/Playwright+9 crossposts

Today I declare scraping free again

reCAPTCHA v3 at 0.3, FP Pro flagging bot:true, Cloudflare banning my ASN on sight. Sick of it.

Built this: a Firefox fork patched at the C++ level:

reCAPTCHA v3 0.90, FP Pro bot=false, tampering=false · CreepJS 0 lies · sannysoft all green · WebRTC no leak.

Self-hosted, MIT, no cloud, no subscription.

Repo: https://github.com/P0st3rw-max/stealthfox

u/bolaretyr — 4 days ago
▲ 1 r/Playwright+1 crossposts

Would you trust AI-based locator resolution in Playwright tests?

Been experimenting with an open-source tool built around Playwright that resolves elements from plain English instructions like:

await t.act("Click login button")

The idea itself isn’t new — tools like Zerostep, Midscene, etc. already exist — but in our experience many of them felt slow/sluggish for regular automation workflows.

Main goal here was reducing locator maintenance and speeding up automation setup while keeping execution lightweight.

It also caches resolved selectors, so repeated runs don’t keep hitting the LLM.
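The caching idea can be sketched in a few lines (names and shapes here are assumptions, not QorTest's actual API): the resolver, which would call the LLM in the real tool, only runs on a cache miss.

```typescript
// Minimal sketch of instruction-to-selector caching (assumed shape, not
// QorTest's actual API). A real resolver would be an async LLM call;
// it is synchronous here to keep the sketch self-contained.
type Resolver = (instruction: string) => string;

class SelectorCache {
  private cache = new Map<string, string>();
  public resolverCalls = 0; // counts how often we actually hit the resolver

  constructor(private resolve: Resolver) {}

  selectorFor(instruction: string): string {
    const cached = this.cache.get(instruction);
    if (cached !== undefined) return cached; // repeated runs skip the LLM
    this.resolverCalls++;
    const selector = this.resolve(instruction);
    this.cache.set(instruction, selector);
    return selector;
  }
}
```

Keyed on the plain-English instruction, repeated runs of the same test pay the LLM cost once and then replay the cached selector.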

GitHub: QorTest GitHub

If anyone is interested, I'd genuinely appreciate you trying it out and sharing honest feedback/issues/limitations.

u/Strange-Cod5862 — 10 hours ago

AI testing tools are everywhere now, but most look identical. How do you actually tell them apart?

Most tools in the AI testing space are doing one of two things: generating scripts through natural language input, which is still Selenium or Appium underneath, or running automated crawlers doing random path exploration.

Neither is genuinely agentic in any meaningful sense.

The evaluation question that matters is whether the tool understands what it's verifying or is just automating actions blindly. The marketing has converged to the point where everything sounds equivalent.

u/Reasonable-Bake-8614 — 6 hours ago

Need suggestions for AI coding agents

I am about to join a new team where I am asked to create test automation from the very start with Playwright. I am confused about which coding agent to go for. I have been using GitHub copilot for years now and I found it really fruitful for test automation use cases. I don’t want AI to completely write everything. I like more of coding assistance where AI can help me suggest what code to write next in the editor itself. I like this aspect of GitHub copilot.

But lately, I have been playing with Claude Code too and created some portfolio/POC projects and I am really impressed with its autonomous capabilities of writing and testing a quality code.

I also tried Playwright Agents (planner, generator, and healer) in the past and found them decent.

What are your views on this? Which coding agent should one go for when it comes to Playwright test development with TypeScript?

u/Code_Sorcerer_11 — 3 days ago
▲ 46 r/Playwright+1 crossposts

Self-healing Playwright scraper

I run a few small scrapers on cron: price trackers, a job feed, a stock watcher. Every couple of weeks one dies because a site renamed a CSS class. Patch, push, repeat. Maybe 30 min/month of pure boredom. So I spent a weekend on a fix.

Selfmend is a Playwright scraper where you describe each field in plain English (e.g. "the price in pounds, e.g. £51.77"). When a selector returns nothing, or returns something that fails the field validator, it sends the page's accessibility tree (not raw HTML) to an LLM, which proposes a new locator. A validator gate (type / regex / range) checks the result before it goes in the cache. So the LLM proposes; deterministic code accepts or rejects. That gate is the whole trick: without it, the LLM silently corrupts your data over time.
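The propose-then-validate gate can be sketched in a few lines. This is an illustrative sketch, not selfmend's actual code; the field spec shape and names are assumptions.

```typescript
// Sketch of the validator-gate idea (assumed shapes, NOT selfmend's
// actual code): an LLM-proposed extraction is only accepted, and the
// locator that produced it only cached, if deterministic checks pass.
interface FieldSpec {
  description: string;              // plain-English hint sent to the LLM
  pattern: RegExp;                  // deterministic shape check
  inRange?: (value: number) => boolean; // optional numeric range check
}

const priceField: FieldSpec = {
  description: "the price in pounds, e.g. £51.77",
  pattern: /^£\d+\.\d{2}$/,
  inRange: (v) => v > 0 && v < 10_000,
};

// Returns true only if the proposed value passes every gate; the caller
// then decides whether to cache the locator that produced it.
function acceptProposal(spec: FieldSpec, value: string): boolean {
  if (!spec.pattern.test(value)) return false;
  if (spec.inRange) {
    const numeric = parseFloat(value.replace(/[^\d.]/g, ""));
    if (!spec.inRange(numeric)) return false;
  }
  return true;
}
```

The point of the design is that the LLM never writes directly into your dataset; a wrong locator proposal fails the regex or range check and is discarded instead of silently polluting the cache.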

You can check out the project here: https://github.com/Per0x1de-1337/selfmend

u/ThinFoundation8228 — 2 days ago

How do you feel about Playwright closing popular issues as "not planned"?

I've noticed that the Playwright team closes quite a few highly upvoted feature requests/issues as not planned.

I completely understand that maintaining a project of this size requires strong prioritization, and that upvotes alone can’t drive the roadmap. Also not every feature request is a good fit for the project.

Still, I sometimes find it frustrating to see issues with a lot of community interest get closed rather quickly. Especially when those issues represent real pain points for advanced or large-scale use cases.

I’m curious how others in the community feel about this.

u/vitalets — 1 day ago

How to write a lot of tests as fast as possible.

My manager wants me to use AI to create as many automation tests as fast as possible from the test cases we already have. He said he doesn't care about quality, so the tests don't need to be perfect, just get them done. I was thinking that my only option is to use Playwright MCP.
Any thoughts?

u/Some-Mountain-3575 — 4 days ago
▲ 23 r/Playwright+5 crossposts

Hey guys,

A while ago I posted here about the gap between what an e2e test says it protects and what it actually checks.

That discussion raised a few good questions, especially around whether I was just arguing for page objects or trying to force everything into application-level tests.

I spent some time thinking deeper about the problem, and now I think the thing I've been trying to name more precisely is this:

A test can be perfectly clean and still change for the wrong reasons if it is anchored to a different scope than the promise it claims to protect.

Example:

test('create business party', async ({ page }) => {
  const partyList = page.getByTestId('Components.PartyList');

  await partyList.getByRole('button', { name: /add party/i }).click();

  const modal = page.getByTestId('Components.PartyModal');
  await modal.getByRole('button', { name: /business/i }).click();

  const entityName = modal.getByTestId('Components.PartyModal.PartyModalBusinessForm.entityName');
  await entityName.getByRole('combobox').fill('Acme Inc.');
  await entityName.getByRole('option', { name: /create/i }).click();

  await modal.getByTestId('Components.PartyModal.submitButton').click();

  await expect(partyList.getByTestId('Components.PartyList.PartyRow').filter({ hasText: 'Acme Inc.' })).toBeVisible();
});

Nothing is wrong with this by itself.

But if the promise is just:

>a business party can be created

then this test is anchored to a much more UI-specific scope:
- there is a party list with an add-party entry point
- the flow starts there
- it happens through a modal
- that modal has a business tab
- etc...

That may be exactly what you want to protect. But then it is a UI-scope contract.
Same promise space, different scope:

test('create business party', async ({ parties }) => {
  await parties
    .addBusiness({ companyName: 'Acme Inc.' })
    .create();
  await expect.poll(async () => parties.get('Acme Inc.')).not.toBeUndefined();
});

UI-scope tests are completely valid when the thing you want to protect is UI behavior. Application-scope tests are valid when the thing you want to protect is the capability itself.

The problem starts when the test sounds like it protects one scope, but is actually tied to another.
And if a test is truly UI-scope, it is worth asking whether e2e is the right place for it, or whether a smaller UI/component test would give faster, more focused feedback.

Imo that is where a lot of brittleness comes from. And it's not just naming alignment. Once those two are aligned, the whole suite - and maybe your whole testing strategy - gets much easier to reason about:
- UI-scope tests change when UI behavior changes
- application-scope tests change when the application capability changes
- mechanics can still break, but the fix is easier to locate
- "should this really be an e2e test?" is easier to answer
- it becomes easier to see when a lower-level test is creating more churn than the promise is worth

If interested, I wrote the longer version with a fuller example and more on scope alignment in the linked post.

Glad to jump back in the trenches arguing about testing practices :D

u/TranslatorRude4917 — 5 days ago

Playwright hangs after test completion

Hey! I'm running into a strange issue and could use some help.

I've built a custom automation framework on Playwright for testing a product at my company. Everything was working fine until Friday afternoon IST, when things started breaking in an unexpected way. No new code had been pushed since Thursday morning, and we were able to successfully run the scheduled Friday morning run too — so nothing in the codebase should have affected execution.

Here's what happens: the test runs to completion, all steps execute correctly, all log statements print as expected, and the browser closes normally — but then VS Code just keeps spinning as if the test is still running. It never marks the test as passed, and eventually it fails due to the global timeout I have configured. I've tried running tests via both the Playwright VS Code extension and the CLI, and the result is the same either way.

The weird part is that the test does seem to finish execution — it just never gets reported as done.

Has anyone seen this before, or have any idea what might be causing it?

https://preview.redd.it/0s1m0f9yji0h1.png?width=1876&format=png&auto=webp&s=afb2208fdd0e10d6590df0912e469bfdf82873ff

u/dhanvith2016 — 2 days ago
▲ 12 r/Playwright+1 crossposts

The video is the full run. The command was literally this:

qagent "Goal: I successfully buy a backpack. Steps: 1. login (standard_user / secret_sauce) 2. add 'Sauce Labs Backpack' to cart 3. open the cart 4. checkout — fill First Name, Last Name, Zip 5. click Continue, then Finish. End: I see the 'Thank you for your order!' confirmation page" --url https://www.saucedemo.com/

That's the whole spec. No selectors, no fixtures, no await page.click(...). A real Playwright browser, end-to-end, with PASS/FAIL + evidence at the end.

What's working

  • google/gemma-4-26b-a4b-it via OpenRouter passes a real Gravity Forms submission (5 required fields, checkbox arrays, paired email confirmation) for ~$0.008/run, 5/5.
  • gpt-4.1-mini also passes 5/5, ~6× the cost.
  • --reporter=ndjson streams one JSON event per turn and a stable done envelope at the end (outcome, evidence, totalCost, finalUrl). Exit codes 0/1/2/3. So a Claude Code parent agent can shell out, parse tail -1, and act on a real verdict.
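For example, a parent agent consuming that stream might grab the last line and read the done envelope. The envelope field names below come from the post; everything else (event shape, types) is assumed.

```typescript
// Sketch of consuming the stream described above: take the last NDJSON
// line (the equivalent of `tail -1`) and read the final done envelope.
// Field names follow the post; the shape of earlier events is assumed.
interface DoneEnvelope {
  outcome: string;    // e.g. "pass" or "fail"
  evidence: string;
  totalCost: number;
  finalUrl: string;
}

function parseDoneEnvelope(ndjsonOutput: string): DoneEnvelope {
  const lines = ndjsonOutput.trim().split("\n");
  const last = JSON.parse(lines[lines.length - 1]);
  if (!("outcome" in last)) {
    throw new Error("stream ended without a done envelope");
  }
  return last as DoneEnvelope;
}
```

Combined with the documented 0/1/2/3 exit codes, a wrapper script gets both a coarse verdict (exit code) and a structured one (the envelope) without parsing free-form logs.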

Install

npm install -g @qagent/cli
npx playwright install chromium
qagent config set apiKey sk-or-...
qagent config set provider openrouter # Or any other provider you like
qagent config set model google/gemma-4-26b-a4b-it

Claude Code usage: just tell it to test with qagent / read the README.

Why did I build this? Because letting Claude Code test its own creation fails SO MANY TIMES. It also takes ages while burning tokens. I prefer to pay the API price (for gemma-4 that's less than a cent).

u/haukebr — 6 days ago

Hey folks, back with another article - this time about how to design tests that survive when the UI is refactored.

TLDR: Learn why UI refactors keep breaking Playwright tests even when the features work fine.

It covers the coupling patterns that make suites fragile and ranks selectors by what actually survives design system changes. There's also content on structuring page objects so migrations don't cascade into dozens of test failures.

u/waltergalvao — 8 days ago
▲ 4 r/Playwright+1 crossposts

Hi, I'm a manual QA looking to learn automation testing using Playwright with Python. I chose Python as my current employer uses Python with Playwright.

Code - https://github.com/Strawboy97/EventHub

Wrote the code myself but I used AI to generate a Readme, still have tests to add but wanted to see if I'm on the right track.

Thanks

u/Strawboy97 — 8 days ago

26, working in Mendix support and development for the last 4.5 years. Not seeing much growth, so to upskill myself I decided to start learning Playwright.

Currently starting with a Udemy course. Any tips or suggestions for a beginner would be appreciated.

No experience as a tester.

Goal: learn Playwright and land a good job in automation testing.

u/Early-Act-6402 — 10 days ago

I tested 3 approaches to handling auth state in Playwright - here's what actually held up

After maintaining a mid-sized test suite for about a year, auth management kept biting us. Here's what I learned:

1. storageState per role. Cleanest approach. Generate auth files once, reuse them across tests. Breaks when tokens expire mid-CI run, so pair it with a global setup that refreshes them.

2. Logging in per test. Painful and slow, but occasionally necessary for tests that mutate user state. We isolated these into their own project in the config to avoid polluting parallel workers.

3. API-level auth + injecting cookies manually. Fastest by far. Skip the UI login entirely, hit the auth endpoint directly, then inject the session cookie. Fragile if your cookie structure changes, but worth it for high-frequency smoke tests.
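The "mix strategies by test type" conclusion can be sketched in a single config. This is an illustrative playwright.config.ts, not the author's setup; the file names, project split, and global-setup script are assumptions.

```typescript
// playwright.config.ts sketch (project names, testMatch patterns, and
// file paths are illustrative): storageState per role for most tests,
// plus an isolated serial project for tests that mutate user state.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Refreshes the .auth/*.json files so tokens don't expire mid-CI run.
  globalSetup: "./global-setup.ts",
  projects: [
    {
      name: "admin",
      testMatch: /.*\.admin\.spec\.ts/,
      use: { storageState: ".auth/admin.json" },
    },
    {
      name: "user",
      testMatch: /.*\.user\.spec\.ts/,
      use: { storageState: ".auth/user.json" },
    },
    {
      // Tests that mutate user state log in per test and run serially,
      // so they can't pollute the parallel workers above.
      name: "stateful",
      testMatch: /.*\.stateful\.spec\.ts/,
      workers: 1,
    },
  ],
});
```

Splitting by project keeps each strategy's blast radius contained: a stale storageState file only fails the role projects, and slow per-test logins only run where they are unavoidable.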

The real lesson: mixing strategies based on test type is better than committing to one approach globally.

Curious what others are doing - especially around multi-tenant apps where you're juggling 5+ roles. Do you generate all storageState files upfront, or lazily per test file?

u/Bazingga_17 — 5 days ago

Hello,

I recently started working with Playwright, having switched from Selenium, and there's something that's not clear to me.

Which one do you use in page objects to wait for a locator's visibility:

await this.videoPlayer.waitFor({ state: 'visible' });

or

await expect(this.videoPlayer).toBeVisible({ timeout: 10000 });

I know they both do the same thing, but for me it makes more sense to keep assertions at test level.

u/Alert-Argument-3087 — 12 days ago

Switching from Tosca to Playwright + AI — Is This the Right Move for Long-Term Growth?

This is a follow-up to my previous post about switching from Tosca/testing

Previous post: [Previous Post Link]

I currently have ~2 YOE in Tosca/SAP automation in a service-based company and after discussing with experienced folks, I’m planning to move towards Playwright since many people suggested it’s becoming more preferred for modern automation projects.

I also want to stand out from the crowd, so I’m interested in combining Playwright with AI tools/workflows like GitHub Copilot, Playwright MCP, AI-assisted automation, etc.

While exploring Udemy, I found multiple Playwright courses.

Now I’m confused about the best path to start with.

Should I first build strong Playwright fundamentals and then move into AI-assisted automation, or directly start with courses that combine both?

Would really appreciate guidance from experienced Playwright/SDET folks on:

- The right learning path
- What’s actually used in industry today
- Which type of course would be better for long-term growth

Thanks in advance!

u/Fearless_Shift_1139 — 6 days ago

If you’ve used Playwright MCP for more than just demo logins from YouTube, you’ve probably run into this issue: the agent misses some elements on the page, gets confused, or completely loses context.

The reason: Playwright MCP sends an ARIA snapshot to the LLM, not the full list of interactable elements from the DOM.

Together with my team, we built an MCP upgrade that:

  • serializes the full DOM tree
  • returns all interactable elements
  • provides a complete page context

As a result, the agent gets a full picture of the page, understands how to interact with elements, and can generate significantly more accurate and comprehensive test scenarios from the first attempt.
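As a rough illustration of the difference (this is a toy sketch, not treegress's actual serializer), collecting interactable elements from a DOM-like tree might look like:

```typescript
// Toy sketch (NOT treegress's actual implementation): walk a DOM-like
// tree and collect every interactable element, so the agent sees the
// full set of targets rather than only what an ARIA snapshot surfaces.
interface DomNode {
  tag: string;
  attrs: Record<string, string>;
  text?: string;
  children: DomNode[];
}

const INTERACTABLE = new Set(["a", "button", "input", "select", "textarea"]);

function collectInteractable(node: DomNode, out: DomNode[] = []): DomNode[] {
  const interactable =
    INTERACTABLE.has(node.tag) ||
    node.attrs["role"] === "button" ||   // ARIA-only widgets
    "onclick" in node.attrs;             // inline click handlers
  if (interactable) out.push(node);
  for (const child of node.children) collectInteractable(child, out);
  return out;
}
```

The interesting cases are the last two checks: div-based widgets with `role` attributes or click handlers are exactly the elements an accessibility-only snapshot can miss.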

https://github.com/MobiDev-Org/treegress-browser-mcp (open source)

Hope you find it helpful. I’d really appreciate your feedback

u/Warm-Camera-3520 — 9 days ago

Correct way to publish a report when executing in an Azure pipeline?

Right now, when I publish the HTML report artifact, it doesn't have embedded screenshots or videos for the tests. What is the right way to view the report when running in pipelines?
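A common cause of this symptom is publishing only the HTML file: Playwright's HTML report keeps screenshots, videos, and traces in a data/ folder next to index.html, so the artifact must contain the whole report directory. A hedged sketch of the relevant config (option values are illustrative, not a prescription):

```typescript
// playwright.config.ts sketch: capture the media, and write the report
// to one folder. The attachments only render if that ENTIRE folder
// (including its data/ subfolder) is published as the artifact.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  reporter: [["html", { outputFolder: "playwright-report", open: "never" }]],
  use: {
    screenshot: "only-on-failure",
    video: "retain-on-failure",
    trace: "retain-on-failure",
  },
});
```

In the pipeline, point the publish task at the entire playwright-report folder rather than at index.html alone; the HTML page references the attachments by relative path into data/.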

u/road2bitcoin — 5 days ago