How do you tell if failures are caused by bad proxies or bad automation?
I'm dealing with a recurring problem where automated jobs fail inconsistently when proxies are involved.
Sometimes the browser test passes locally but fails in CI. Sometimes the request works without a proxy but times out with one. Sometimes one proxy provider works fine for one domain but performs terribly on another.
For me, the hard part right now is diagnosis. I don't want to waste hours debugging selectors, waits, or test code if the real issue is proxy quality.
For those using proxies with Playwright, Selenium, scraping tests, or geo-based QA checks, what's your process for proving whether the proxy is the problem?
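To make the question concrete, here's the kind of A/B isolation I've been considering: run the exact same request N times direct and N times through the proxy, and compare success rates. A minimal sketch using only the standard library — the `URL` and `PROXY` values are placeholders, and treating any exception as a failure is a deliberate simplification:

```python
import time
import urllib.request

# Placeholder values -- substitute a real target and proxy.
URL = "https://example.com/"
PROXY = "http://user:pass@proxy-host:8080"

def fetch_ok(url, proxy=None, timeout=10):
    """One GET, optionally via an HTTP proxy; returns (succeeded, elapsed_seconds)."""
    handlers = [urllib.request.ProxyHandler({"http": proxy, "https": proxy})] if proxy else []
    opener = urllib.request.build_opener(*handlers)
    start = time.monotonic()
    try:
        with opener.open(url, timeout=timeout) as resp:
            resp.read(1024)  # read a little so latency includes first bytes
            return resp.status == 200, time.monotonic() - start
    except Exception:
        # Simplification: any error (DNS, TLS, timeout, 4xx/5xx) counts as a failure.
        return False, time.monotonic() - start

def success_rate(results):
    """Fraction of (ok, elapsed) samples that succeeded."""
    return sum(ok for ok, _ in results) / len(results)

def ab_test(url, proxy, runs=20):
    """Same request N times direct and N times proxied; a large gap points at the proxy."""
    direct = [fetch_ok(url) for _ in range(runs)]
    proxied = [fetch_ok(url, proxy) for _ in range(runs)]
    return {"direct_ok": success_rate(direct), "proxied_ok": success_rate(proxied)}
```

If direct traffic succeeds ~100% of the time and the proxied runs don't, that seems like reasonable evidence the proxy (not the test code) is the problem — but I'd like to hear if people use something more rigorous.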
Do you benchmark providers before adding them to your automation stack? What metrics are actually useful?
I'm thinking:
- success rate
- median and p95 response time
- timeout frequency
- CAPTCHA/block rate
- repeatability over time
- results per target site, not just generic speed
Is there a standard way to test this properly?
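To show what I mean by those metrics, here's a rough sketch of how I imagine aggregating them per proxy and per target site. The `probe` fetcher and the 403/429-plus-"captcha"-substring block heuristic are just assumptions to illustrate the shape, not a vetted implementation:

```python
import statistics
import time
import urllib.error
import urllib.request

def probe(url, proxy=None, timeout=10):
    """One benchmark sample: {"ok", "latency", "blocked", "timeout"}."""
    handlers = [urllib.request.ProxyHandler({"http": proxy, "https": proxy})] if proxy else []
    opener = urllib.request.build_opener(*handlers)
    t0 = time.monotonic()
    try:
        with opener.open(url, timeout=timeout) as resp:
            body = resp.read(2048)
            # Crude block heuristic (assumption): challenge pages often mention "captcha".
            blocked = b"captcha" in body.lower()
            return {"ok": resp.status == 200 and not blocked,
                    "latency": time.monotonic() - t0, "blocked": blocked, "timeout": False}
    except urllib.error.HTTPError as e:
        # The request completed at the HTTP level; 403/429 are treated as blocks.
        return {"ok": False, "latency": time.monotonic() - t0,
                "blocked": e.code in (403, 429), "timeout": False}
    except TimeoutError:
        return {"ok": False, "latency": None, "blocked": False, "timeout": True}
    except urllib.error.URLError as e:
        timed_out = isinstance(getattr(e, "reason", None), TimeoutError)
        return {"ok": False, "latency": None, "blocked": False, "timeout": timed_out}
    except OSError:
        return {"ok": False, "latency": None, "blocked": False, "timeout": False}

def summarize(samples):
    """Roll a list of probe() samples into the metrics listed above."""
    n = len(samples)
    latencies = sorted(s["latency"] for s in samples if s["latency"] is not None)
    def pct(p):
        if not latencies:
            return None
        # Nearest-rank percentile; fine for small sample counts.
        return latencies[min(len(latencies) - 1, round(p / 100 * (len(latencies) - 1)))]
    return {"success_rate": sum(s["ok"] for s in samples) / n,
            "median_latency": pct(50),
            "p95_latency": pct(95),
            "timeout_rate": sum(s["timeout"] for s in samples) / n,
            "block_rate": sum(s["blocked"] for s in samples) / n}
```

The idea would be to run `probe` N times per (proxy, target site) pair on a schedule, keep the raw samples, and compare `summarize` output across providers and across days — which would also cover the repeatability and per-site points. Does anyone do something like this in practice, or is there existing tooling for it?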