Spent last month rewriting my scraping stack. Main takeaway: the whole "CAPTCHA solver" market is a trap if your targets sit behind anything shipped in the last two years.
The setup I was running: headless Chromium + puppeteer-stealth + 2Captcha fallback + residential proxies.
The failure mode: Cloudflare Turnstile pages returning "Sorry, you have been blocked." No CAPTCHA image to send to 2Captcha. Stealth config passing old detection checks but not new ones. ~30% of targets unreachable on any given day.
What I learned digging in:
Modern CAPTCHAs aren't images. They're risk scores. By the time a challenge page renders, your browser has already been scored on ~200 signals. The "I'm not a robot" button is just how the site tells you the verdict.
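To make the "risk score" framing concrete, here's a toy sketch. The three signals, their weights, and the threshold are all made up for illustration; real systems score hundreds of signals on both client and server.

```javascript
// Toy illustration only: the signals and weights below are invented,
// not Cloudflare's. Real scoring covers ~hundreds of browser properties.
const SIGNALS = [
  { name: "webdriver flag set", weight: 50, test: (fp) => fp.webdriver === true },
  { name: "no plugins reported", weight: 20, test: (fp) => fp.pluginCount === 0 },
  { name: "headless UA string", weight: 30, test: (fp) => /HeadlessChrome/.test(fp.userAgent) },
];

// Sum the weights of every signal the fingerprint trips (0..100).
function riskScore(fingerprint) {
  return SIGNALS.filter((s) => s.test(fingerprint)).reduce((sum, s) => sum + s.weight, 0);
}

// A stock headless browser trips everything; a daily-driver browser trips nothing.
const headless = { webdriver: true, pluginCount: 0, userAgent: "Mozilla/5.0 ... HeadlessChrome/120.0" };
const daily = { webdriver: false, pluginCount: 5, userAgent: "Mozilla/5.0 ... Chrome/120.0" };

console.log(riskScore(headless)); // 100 -> verdict is "blocked" before any challenge renders
console.log(riskScore(daily)); // 0 -> never sees a challenge at all
```

The point of the sketch: by the time a 2Captcha-style solver could even see a puzzle, the score has already decided the outcome, so there's often no puzzle to solve.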
The stealth-plugin treadmill is real. Cloudflare ships detection updates faster than the open-source stealth community ships patches. For every two weeks you're passing, expect three days you're not.
The only architecture that's not on the treadmill: use a real browser. Not a headless one trying to look real. An actual Chrome you use daily, with your real history and cookies, driven by an agent.
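A minimal sketch of that attach-don't-launch approach, assuming you start your everyday Chrome with remote debugging enabled (the flag, port, and URL here are my example choices, not from the write-up):

```javascript
// Sketch, not a production setup. First launch your real Chrome with
// debugging exposed (quit any running Chrome instances first), e.g.:
//   google-chrome --remote-debugging-port=9222
const puppeteer = require("puppeteer-core"); // -core: attaches to an existing browser, no bundled Chromium

async function main() {
  // Attach to the already-running daily browser instead of launching
  // a fresh headless instance that has to fake being real.
  const browser = await puppeteer.connect({
    browserURL: "http://localhost:9222", // puppeteer resolves the DevTools websocket from this
  });

  // Pages opened here share the real profile: history, cookies, extensions.
  const page = await browser.newPage();
  await page.goto("https://example.com");

  await browser.disconnect(); // detach without closing the real browser
}

main();
```

The design point is that nothing needs to be spoofed: the fingerprint is a genuine one because the browser is genuine; automation just drives it over the DevTools protocol.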
Write-up with the full breakdown: https://www.browseract.com/blog/what-is-a-captcha-solver