cross-posted to: [email protected]
They’re still useful. For one, this study appears to focus solely on “distorted text field” captchas, which I’m pretty sure have been known to be solvable by bots for years now. For another, they can still provide useful signals for deciding whether a user is a bot, depending on how much telemetry is available; even simple text distortions can factor in whether cursor movement and typing cadence look human. The article mentions that bots can solve captchas in under a second, which sounds scary but is something no human could do, so solve time can be used as another filter.
Plus, just because some bots CAN solve them doesn’t mean ALL bots can. It’s another layer of work for anyone trying to create bot accounts.
If websites rejected captchas that are solved too fast, that alone would block a lot of bots. If the solving time is faster than human reaction time, it is almost certainly a bot.
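A minimal sketch of what that kind of filter could look like, assuming the server records when each challenge was issued; the function name and threshold values here are illustrative assumptions, not figures from the article:

```python
# Hypothetical solve-time filter: flag CAPTCHA submissions that arrive faster
# than plausible human reaction time (or long after the challenge expired).
# Thresholds are assumed values for illustration only.

MIN_HUMAN_SOLVE_SECONDS = 2.0   # assumed floor; even an easy CAPTCHA takes a human a couple of seconds
MAX_SOLVE_SECONDS = 120.0       # assumed ceiling; stale challenges get rejected too

def looks_like_bot(issued_at: float, submitted_at: float) -> bool:
    """Return True when the solve time falls outside a plausible human range."""
    elapsed = submitted_at - issued_at
    return elapsed < MIN_HUMAN_SOLVE_SECONDS or elapsed > MAX_SOLVE_SECONDS

# Example: a challenge solved in 0.4 seconds is flagged as likely automated.
print(looks_like_bot(issued_at=0.0, submitted_at=0.4))  # True
```

In practice this would be one signal among several (cursor movement, typing cadence, etc.), not a standalone check.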
I’ve never liked captchas, for privacy reasons. However, bots are a big problem in cyberspace.
You should use the Privacy Pass extension.