Google's Captcha in Firefox vs. in Chrome (grumpy.website)
1343 points by kojoru on June 10, 2019 | 483 comments



I was going through the same ordeal as a Firefox user, so I've made Buster to solve challenges and reclaim some of that lost time: https://github.com/dessant/buster

If you're a developer, please consider replacing reCAPTCHA on your site with an alternative. reCAPTCHA discriminates against people with disabilities and those who seek privacy, and it gaslights you into thinking you did not solve the challenge correctly, which is plain cruel.

Here are some reCAPTCHA alternatives: https://www.w3.org/TR/turingtest/


The problem with recaptcha alternatives is that they either are insecure or require time and money to continue to be ahead of bots.

All of the "interactive stand-alone approaches" from that page can be beaten with run-of-the-mill OCR (other than perhaps the 3d challenge) and with almost any mobile phone speech recognition engine (and, if the attacker has the money, can send it off to Google's cloud speech-to-text).

All of the non-interactive approaches from the page require constant tuning and upkeep to make sure bots aren't able to sign up or abuse systems. They're also not *that* secure if your website is targeted and scripts are written specifically to avoid your anti-abuse methods.


> The problem with recaptcha alternatives is that they either are insecure or require time and money to continue to be ahead of bots.

Sure, great, but when I see behavior like the above, I just hit back and add the site to my router's firewall blacklist. If it's this much of a PITA to "solve" a captcha correctly, but I keep getting the middle finger, I don't give a crap anymore. Your site isn't worth going to if I have to spend literally minutes "solving" captchas for Google's stupid AI, which keeps treating me like a bot even after I prove I'm not.

Just realize that by using reCAPTCHA, this is what you're forcing some users to deal with. And I deal with it by making sure I never come back to your site ever again once you've wasted minutes of my time just trying to get to your page. Even if it's Google's fault for being jerks, I don't care. You chose to implement it.

Ok rant mode off and stepping off my personal soap box.


> Your site isn't worth going to if I have to spend literally minutes "solving" captchas for Google's stupid AI, which keeps treating me like a bot even after I prove I'm not.

I've run into state and local tax agencies, utility companies, and large healthcare companies that require Google's reCAPTCHA. So, unless you want to go without healthcare or water service at your home, or you're in the mood to just shut down your business, you have to suck it up.


UK Gov doesn’t allow CAPTCHAs on central gov services: https://www.gov.uk/service-manual/technology/using-captchas


They can still use them if they meet certain criteria and show that they 'need' them. The overuse probably comes from the incentive - Google is incentivized to encourage the use of captchas because it is curating a data collection for AI training. I imagine some of the 'gaslighting' that people experience comes when they are given images that don't yet have a high enough confidence rating. I wonder if answering incorrectly often enough would result in being asked fewer questions?


(I used to work at GDS)

‘Need’ here means exhausted all other opportunities, and have built alternative accessible ways of accessing the same service. I’d certainly have expected a service to have investigated a self-hosted solution, and I doubt a reliance on 3rd party JS from a Google service would fly, regardless of the service, as it breaks a whole bunch of separate resilience guidelines.


The few times I couldn't avoid Recaptcha, I spent 5 minutes randomly clicking on image tiles. Sometimes I got through by this strategy. If it didn't work, I tried a less random approach.


It will let you through eventually, even when you intentionally select wrong tiles, if you do it often enough.


So frustrated people give up, but tireless bots will get through? That sounds like the exact opposite of what it's supposed to accomplish.


I've even seen state and government sites using Google's reCAPTCHA. People shouldn't be required to hand over their browsing history and other information to Google for essential services, especially to use government websites.


Thankfully, Indian government websites still use their own captchas - which, though not as 'secure', work in most cases and don't take minutes to solve.


In this case they get to deal with me offline. Like, I'm using a credit card right now without internet banking. They send me letters, on paper, with how much I owe them, and then I pay. All because registering for their internet banking was a crazy shitty experience that I abandoned.


I default to paper mail with things like written checks for that sort of thing. Never had a problem.


Of course, if it's an essential service like healthcare, formal education, paying bills, etc., people will be forced to use it (if there's no option to change that service itself). But for some fancy startup showing content to consume when it's not necessary, I will just close the website.


I say the same thing to my friend in a wheelchair -- "suck it up, handicapper, and pull yourself up the stairs".

There was a time, not long ago, before wheelchair ramps or accessible doors were commonplace. These people were literally shut out of society.

It's the same with captchas forcing privacy-conscious users off the internet.


Uh, using a wheelchair vs walking is a lot less of a personal choice than using Firefox vs Chrome.

Or: people who need a wheelchair are protected by anti-discriminatory laws, while people who prefer not to use Google products aren't.


Uh, captchas don't just appear on Google products. Third parties use them -- government services, online shopping, all kinds of things you take for granted because clearly you aren't one of the people affected by it (i.e. you're fingerprinted). Many things we used to do in physical space now occur virtually. There is a serious philosophical and moral case to be made for the relevance of privacy and anonymity, which captcha is specifically and nefariously working to erode. And in that sense it's worse than bad building codes.


I suspect the Google product that the GP was referring to was Chrome, given that this is a comment thread about Firefox vs Chrome, and the behaviour of another Google product (reCAPTCHA) between the aforementioned browsers.


Yeah, but then again, so many times that I run into Captcha issues, it's on a site that really doesn't need Captcha to begin with.

Why make me solve a Captcha to see static content?

Why make me solve a Captcha to log in when I've already completed one to register?

Why make me solve a Captcha to pay utility bills? Is there some underground group of deviants going around surreptitiously paying other people's utility bills? The monsters.


> Why make me solve a Captcha to see static content?

Fair point. I usually run into this when using Tor, or a VPN, to access content behind Cloudflare or similar services. This is anti-abuse stuff, but it's often overly aggressive about giving you captchas.

> Why make me solve a Captcha to log in when I've already completed one to register?

So attackers cannot password spray. This typically happens after attackers have gotten hold of the latest database breach and are blindly trying username/password combinations.

> Why make me solve a Captcha to pay utility bills? Is there some underground group of deviants going around surreptitiously paying other people's utility bills?

Sounds like a strange place to have a captcha indeed. What information is needed in the form to submit it? Does it validate stuff that an attacker might want to scrape? I guess they added it for a reason.


> I guess they added it for a reason.

This is not necessarily a reasonable assumption. People often do things because they heard it was a good practice, or because it solves a problem they don't actually have, but think they might, or arbitrarily without giving it much thought.


> So attackers cannot password spray. This typically happens after attackers have gotten hold of the latest database breach and are blindly trying username/password combinations.

A simple rate limit takes care of that. Plus, it's not like attackers would be easily defeated by a CAPTCHA anyway --- there are services selling batches of valid ReCAPTCHA tokens, likely generated by actual humans or very close emulations thereof.
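A minimal sketch of such a rate limit (sliding window over recent failures, with a hypothetical key and illustrative thresholds; a real deployment would keep the counters in Redis or similar rather than in process memory):

```python
import time
from collections import defaultdict, deque

# Toy in-memory sliding-window limiter; values are illustrative, not a recommendation.
WINDOW_SECONDS = 60
MAX_FAILURES = 5

_failures = defaultdict(deque)  # key -> timestamps of recent failed attempts

def record_failure(key):
    """Call after a failed login. The key could be the account, the client IP, or both."""
    _failures[key].append(time.monotonic())

def is_rate_limited(key):
    now = time.monotonic()
    attempts = _failures[key]
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()  # discard failures older than the window
    return len(attempts) >= MAX_FAILURES
```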


CAPTCHAs are not foolproof; they are just the first layer of defence on the signup/login form. CAPTCHAs increase the cost of password spraying: attackers can't simply fire up Hydra. They'll need additional tools and services, which cost money.

A captcha-solving service also has costs beyond the money it charges: it adds time and extra resource usage on the machines it runs on. A quick look at one service[1] shows that the average response time for a challenge was 40 seconds (this value changed a lot when refreshing the page). The attacker has now gone from the 200ms range per attempt to several seconds, slowing them down a lot. This gives defenders additional time to respond, and it is also a useful signal for detecting malicious logins.

[1] https://anti-captcha.com/mainpage


Rate limit by what? IP? Botnet traffic will originate at random IPs.


By the account. 3 failed login attempts in a row, and you disallow further logins for 30 seconds.

This should waste less time than reCAPTCHAs. I know it's not 1:1 in terms of pros/cons, but it gets a good subset of the advantages without the key disadvantages mentioned above.
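For illustration, a rough sketch of exactly that policy (3 consecutive failures, 30-second lockout), again with a purely hypothetical in-memory store; the replies below point out the trade-offs of keying this on the account alone:

```python
import time

MAX_CONSECUTIVE_FAILURES = 3   # values taken from the comment above, purely illustrative
LOCKOUT_SECONDS = 30

_state = {}  # account -> (consecutive_failures, locked_until_timestamp)

def login_allowed(account):
    _, locked_until = _state.get(account, (0, 0.0))
    return time.monotonic() >= locked_until

def record_login_result(account, success):
    failures, locked_until = _state.get(account, (0, 0.0))
    if success:
        _state.pop(account, None)   # reset on successful login
        return
    failures += 1
    if failures >= MAX_CONSECUTIVE_FAILURES:
        locked_until = time.monotonic() + LOCKOUT_SECONDS
        failures = 0
    _state[account] = (failures, locked_until)
```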


First, that's a bit user-hostile (and suddenly a DoS-vector; I can prevent a site's users from logging in by continuously firing bad password attempts).

Secondly, botnets can, and presumably do, randomize which accounts they try, too.


So rate-limiting is "user-hostile", but permanently hell-banning someone because their network is considered "seedy" is user-friendly?

Incidentally, you still need rate-limiting if you use Google's CAPTCHA. If you don't rate-limit the CAPTCHA endpoint, an attacker can DDoS you (especially if your server-side captcha component uses a low-performance single-threaded HTTP client). Furthermore, an attacker within the same AS as their target can purposefully screw the target over by performing attacks on Google's services until the reputation of the network hits rock bottom.


reCAPTCHA is a rate-limiting measure. Google handles all the heavy-lifting and attacker protection for you, and the slow fade you see in the video is that rate-limiting in action. But if you get a clean CAPTCHA result back from them, then that client is very unlikely to be an automated attacker. It's super easy and scales really well.

Conveniently, normal users with typical browser configurations get nothing but the animated checkbox. For nearly everyone, the whole experience is simple and easy. The only people who get inconvenienced are the low-grade privacy enthusiasts who think that preventing tracking is the path to Internet safety. Ironically, "tracking" is literally the mechanism by which legitimate users can be distinguished from attackers, so down that road lies a sort of self-inflicted hell for which the only sensible solution is to stop hitting yourself.
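For context, the site-side integration really is small: your server forwards the token the widget produced to Google's siteverify endpoint and checks the result. A sketch, assuming the `requests` library and a placeholder secret key, with error handling omitted:

```python
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder

def recaptcha_passed(token):
    """Return True if Google reports that the client passed the challenge."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,  # don't let a slow verify call tie up your workers
    )
    return resp.json().get("success", False)
```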


> so down that road lies a sort of self-inflicted hell for which the only sensible solution is to stop hitting yourself.

"Be a good little sheeple and do what Big Brother Google says." Fuck no.


So I can lock you out of your account with 3 attempts from any IP address?


For a minute usually. Prevents flooding. Not a bad approach unless the account is constantly hit. In those cases two factor auth makes sense.


This is obviously a bad idea. It costs nothing for an attacker to send 3 http requests, every minute, every hour, all day. They could lock your account basically forever. IP filtering and locking accounts are terrible ways of preventing password spraying.


> By the account. 3 failed login attempts in a row, and you disallow further logins for 30 seconds.

...congratulations, I just locked out all of your users. Have a nice day.


How did you get the email addresses of all my users, which are used as login name?


From that messed-up email from support that leaked them. Or I assume you'll have a big cross-section with some other site that leaked.

This is not theory, this is hard-earned experience. Locking people out is bad; the most that's acceptable is rate limiting to one attempt every few seconds.


> > Why make me solve a Captcha to pay utility bills? Is there some underground group of deviants going around surreptitiously paying other people's utility bills?

> Sounds like a strange place to have a captcha indeed. What information is needed in the form to submit it? Does it validate stuff that an attacker might want to scrape? I guess they added it for a reason.

I've seen captchas on payment forms to prevent credit card checking. You can take a dump of CC details, try them all out on a site, and get back the valid ones. I'd assume they charge $1 to the CC to test it before allowing you to continue, and then you could cancel your order before they charge the full amount. However, assuming you have to be logged in to pay your bill, that seems less reasonable.


I've even seen people beat captcha in bulk to get to a payment form. My best guess is something along the lines of mechanical turk or a room full of low wage workers doing it manually. I think the payoff of verifying stolen cards is worth enough to justify some kind of workaround.

If you host a payment form that informs the user about whether payment was accepted, you're a target.


> Sounds like a strange place to have a captcha indeed. What information is needed in the form to submit it? Does it validate stuff that an attacker might want to scrape? I guess they added it for a reason.

In the past, I used curl to get some billing info, add the money to a dedicated virtual prepaid card, then pay the bill, then send an email to a Gmail (+paidinvoice) label. These days, at least for my bills, they have pre-approved withdrawals directly from the bank. However, I guess this is not widely deployed.

If other people did this, but ended up doing it from an insecure machine and lost the credentials / got hacked, I can see why at least some orgs might want to prevent people from doing this. This is a classic overreaction, but a plausible scenario.


> If other people did this, but ended up doing it from an insecure machine and lost the credentials / got hacked, I can see why at least some orgs might want to prevent people from doing this.

The measure is not really about protecting the user who is using the payment form; it is meant to "protect" the system that is validating the payment data. The payment form may be a target for an attacker who has gotten a large batch of credit cards from somewhere else and wants to validate the data. Attackers regularly exploit such forms, or other naive payment systems, to check whether the credit card data is valid.

The Candy Japan owner wrote some blog posts about the subject.

https://www.candyjapan.com/behind-the-scenes/how-i-got-credi...

https://www.candyjapan.com/behind-the-scenes/candy-japan-hit...

https://www.candyjapan.com/behind-the-scenes/fraudulent-tran...


My electric company requires one to log in - but only after the browser session expires and I have to log in again anyway.


> So attackers cannot password spray.

My password's not crackable, so it's annoying to be lumped in with that. I'd happily use a service-generated password to avoid login hassles.


I imagine what you are proposing, then, is to record the entropy of the password when the user first registers, and for accounts with sufficient password entropy, not to ask for a captcha after a few failed attempts.

With that, the site gives away whether the account has a low entropy password or not.
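For what it's worth, the kind of entropy estimate being proposed can be sketched with a naive character-class heuristic (real strength estimators such as zxcvbn do much better; the threshold below is an arbitrary illustration):

```python
import math
import string

def estimated_entropy_bits(password):
    """Very rough charset-size heuristic: length * log2(charset size)."""
    charset = 0
    if any(c in string.ascii_lowercase for c in password):
        charset += 26
    if any(c in string.ascii_uppercase for c in password):
        charset += 26
    if any(c in string.digits for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)
    return len(password) * math.log2(charset) if charset else 0.0

HIGH_ENTROPY_THRESHOLD = 70  # arbitrary illustration value, in bits

def skip_captcha_after_failures(password_at_registration):
    # Record this flag at registration time; don't recompute it from the stored password.
    return estimated_entropy_bits(password_at_registration) >= HIGH_ENTROPY_THRESHOLD
```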


> I imagine what you are proposing then is to record the entropy on the password

Or just generate secure high-entropy passwords and force users to use them.

Making users look up SMS codes before each login is acceptable. Making them solve obnoxious, long, privacy-hostile riddles is acceptable. But forcing them to use pre-generated secure passwords?! That can't possibly work. They will revolt!


> With that, the site gives away whether the account has a low entropy password or not.

Sure, why not? Way more than half of passwords are low-entropy, so that doesn't meaningfully help them focus attacks.

And they still have to keep solving captchas to make those attempts.


The weirdest one I have ever seen is on frikking walmart.com - here is my cynical paraphrasing of their 'thought process': "We don't want your money! Go back to Amazon! No captchas there cause they are not stupid!" I persist because I don't want to go back to being a 2nd-class non-Prime Amazon citizen but the darned unnecessary captchas really ruin my walmart.com shopping experience to no end.

If anyone from Walmart.com is reading, please please get rid of these useless captchas - it is an incredibly stupid thing that you do, and unfortunately you do it all too well.


The problem with CAPTCHA and the like is that they seek to stop programmatic browsing of websites, which both Firefox and Chrome support out of the box. If companies are concerned about non-human access, they should make an official API instead of letting their website be a de facto unofficial API. If they are concerned about fraud, they will be woefully defended by CAPTCHA: it makes no judgement on the validity of transactions at all and doesn't prevent fraudsters from signing in manually.

Ironically, Google has committed at least $75 million (and likely hundreds of millions more) of fraud, via stolen refunds and stolen banned-account balances!

https://www.businessinsider.com/google-emails-adtrader-lawsu...

https://www.searchenginejournal.com/adsense-lawsuit/248135/


> If companies are concerned about non-human access they should make an official API instead of their website being a de-facto unofficial API

This is often impractical for several important use cases, like image rendering and PDF generation. Just hand waving away the cost of developing dedicated, pure APIs won't make companies more likely to do so.

> If they are concerned about fraud they will be woefully defended by CAPTCHA, it makes no judgement on the validity of transactions at all and doesn't prevent frauds signing in manually.

There are many different vectors of attack and fraud and CAPTCHA tackles one of them. It's silly to say it's unnecessary just because it doesn't cover all fraudulent activity


I implemented simple question/answer anti-bot filters on registration forms for a few sites. Nobody ever made the effort to customize their bot to answer those very few questions. I guess it doesn't make sense economically. However, if a big site went that way, it would be filled with bots in a day.


I once implemented a "poor man's captcha" that presented a simple randomized question that anyone would be able to answer (ranging from "what year is it" to "what's 2 + 2"). I guessed that nobody would make the effort to write a custom script for this, because the website in question was so niche and the stakes so low -- a very quiet corner of the Internet; I don't even remember what it was, possibly some feedback form that went to a support email. I actually felt some irrational measure of pride when, probably a year later, I was looking at some logs and discovered that some script kid had cracked the questionnaire and was currently using the form to post nonsense text with Viagra links. Someone had actually sat down and written code to crack my terrible solution, and probably spent more time on it than I had (which is to say, more than five minutes). Made my day.
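Something in that spirit is only a few lines. A sketch with made-up questions (a real form would keep the expected answer server-side in the session, and the answers would obviously need maintaining):

```python
import random

# Hypothetical question pool; accepted answers are normalized lowercase strings.
QUESTIONS = {
    "What year is it?": {"2019"},
    "What is 2 + 2?": {"4", "four"},
    "What color is the sky on a clear day?": {"blue"},
}

def pick_question():
    """Choose a random question to render in the form."""
    return random.choice(list(QUESTIONS))

def check_answer(question, answer):
    """Return True if the submitted answer matches one of the accepted answers."""
    return answer.strip().lower() in QUESTIONS.get(question, set())
```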


For small scale sites you don't even need to do much that requires human intervention. Most bots (or at least most bot-actions) seem to invest very little in sophisticated techniques and rely instead on finding vulnerable servers by casting a very wide net. As long as that is true, you can filter out 99+% of the noise by applying very simple but slightly bespoke techniques.

As long as there continue to be enough cookie-cutter blog/forum/ecommerce sites out there for the bots to exploit, very simple techniques (JS-populated form fields or request parameters, very basic validation of the HTTP headers, taking into account the rate or frequency at which requests are made, etc.) will quickly and cheaply identify almost all of the bot activity.
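As one illustration of the 'slightly bespoke' idea, a hidden honeypot field, a JS-populated token, and a trivial header check go a long way against drive-by bots; a sketch with invented field names and a placeholder token value:

```python
def looks_automated(form, headers):
    """Cheap, bespoke heuristics; none of this stops a targeted attacker."""
    # Honeypot: a field hidden with CSS that humans never see or fill in.
    if form.get("website_url"):                      # hypothetical hidden field name
        return True
    # Token written into the form by a tiny inline script on page load;
    # naive form-POSTing bots don't execute JavaScript.
    if form.get("js_token") != "expected-token":     # placeholder value
        return True
    # Throwaway scripts frequently send no User-Agent header at all.
    if not headers.get("User-Agent"):
        return True
    return False
```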

Of course sophisticated or dedicated bots will still pose a problem, but assuming you're not just standing up a popular off-the-shelf platform without any hardening or customization, you'll need to get pretty big (or otherwise valuable) before attracting that kind of attention.

A reasonable analogy here is the observation that simply running sensitive services on non-standard ports (e.g., not running SSH on port 22) will eliminate a ridiculous volume of malware probes against your system. To be clear, that's no substitute for actual robust security practices -- you almost certainly shouldn't have something like SSH world-visible to begin with -- but given how trivially easy it is to do something like changing the default port for services you're not expecting the public at large to reach, it's absurd that servers are compromised every day by dumb scripts blindly probing the Internet for well-known and long-ago-patched exploits.


I did that on an old forum that has been dead for years; I thought spammers would not care enough.

But one of them did! Whenever I changed the questions, bots would stop for a few days, and then start again. Someone cared enough to manually enter the correct responses (no, blind dictionary attacks were not possible)!


This is probably good enough for 90% of websites that accept user content. Then, in the small chance it isn't (because of growth, or because some random spammer decided to spend some time on your site), you can switch to something like reCAPTCHA.


Hobby sites may be in a more difficult position, but businesses get to decide between developer convenience and low cost on one hand, and excluding and tormenting some of their users on the other.

There are also ways to reduce the damage reCAPTCHA causes, such as keeping it out of the default UX path. Discord for example will show a reCAPTCHA challenge on the login page only if you are signing in from a new location.

reCAPTCHA cannot effectively defend sites against targeted attacks either.


OK, Discord specifically is terrible. I log in in incognito mode from the same location/browser every time, and have to deal with a captcha most of the time.


I use Discord from an incognito Chrome window. I avoid it most of the time by doing the following: 1. the email is manually typed, the password is copy-pasted; 2. I move the mouse around in the window in a fairly non-mechanical manner. I don't know if you use Chrome proper for it, so that could still be a point of difference.


I mean do you want Discord to fingerprint your browser so you don't have to deal with captchas? Kind of defeats the purpose of incognito doesn't it?


> Kind of defeats the purpose of incognito doesn't it?

They're going to track my IP whether I want them to or not. So they should go ahead and use it to reduce hassle.


> …only if you are signing in from a new location.

Or you clean your cookies out, thank you "Cookie Autodelete".


I don't understand this. You're logging in from a fresh browser. Do you want sites to fingerprint you in other ways so you can clear your cookies and not have to deal with captchas?


If there haven't been any failed logins on the account since last success, there's no need to throw up a captcha.


Sending my data to Google as a condition of using someone else’s site also isn’t secure. Training Google AI also isn’t something I signed up for.


Not saying I like the precedent of Google being inescapable, but you're not "signing up" for anything. A web server is 100% within its rights to refuse to send you a page, on its own terms.


That is true. However, if I sign up for a service, for example TransferWise, and then later, signing into the account, I get a Google captcha, I am now engaged in a relationship/data share with Google, and if I don't agree, I lose access to my account. When I signed up, I didn't have "you must help train Google AI" as a condition of use.


Not sure why you're downvoted, it's a valid point. It feels icky to use a service that you pay for, and incidentally provide free labor to Google's AI which they resell in Google Cloud as a walled garden. The result of reCaptcha isn't public as far as I can tell, and humanity probably doesn't get a net benefit from Google's monopoly on AI anymore.


People talk about "free labor" and forget all the times they were able to do Google searches or use Google Maps for free. It seems rather ungrateful? This isn't a one-sided relationship, both sides benefit.


The difference lies in whether you willingly subjected yourself to this transaction (give eyeballs, get Maps service) or whether it was imposed on you without anyone bothering to mention or question it beforehand.

Also, the gratefulness part is strange. The corporation has no gratefulness toward me, so why should we show it any kind of loyalty? It's not a living entity with a consistent mind or consciousness. It will change its will based on Wall Street's demands. It will ban you silently with no recourse.


Perhaps "ungrateful" is the wrong word. But in a purely transactional society where we charge each other for every little thing we do on the Internet to avoid any "free labor", I suspect that we would be considerably worse off.


Logical error here.

Some people avoid Google Search, Chrome etc. They are still subject to this.


This is simple corporate sycophancy.


You seem to be a bot. Write a poem describing the outage and email it to larry@google.com. We will look at it and unblock you if we believe you are a human.


I believe we agree with you there. OP was just referencing the methodologies people use, often choosing tools like Google Analytics and reCAPTCHA that are "free" by virtue of offloading compromises onto the site's users rather than the site itself.

I endorse a site's right to forbid me its content if I can't prove I'm human. I won't endorse a site that accomplishes it by asking me to pay the cost.


Unless you don't want access to whatever is behind a site's captcha, you are signing up to solve their AI/CV problems.


Not entirely accurate. The GDPR restricts the terms they can use, for example. And anti-discrimination law probably also applies. These don't really apply to captcha, of course, under current interpretations.


It's very easy to argue that CAPTCHA is an essential service and therefore not under GDPR.

> anti-discrimination law

Google-avoiders are not a protected class.


> It's very easy to argue that CAPTCHA is an essential service and therefore not under GDPR

No it isn't. In fact, out-of-the-box reCaptcha is not GDPR compliant, and using it on your site will open you up to possible liability. See https://complianz.io/google-recaptcha-and-the-gdpr-a-possibl...

My reCaptcha strategy is to fire off an email to the site owners every time I am subjected to a reCaptcha, asking for all my data under GDPR. Most websites only need a few such requests to quickly start looking for an alternative. Fuck Google and their constant attacks on my rights.


The blind are. And the audio captcha is roughly useless.


That's why I always answer the captchas wrong. the machine!


It's the only way to stop Skynet!


> The problem with recaptcha alternatives is that they either are insecure or require time and money to continue to be ahead of bots.

You're posting this in response to an automated recaptcha solver. Clearly recaptcha also has trouble staying ahead of bots.

It seems to me that any simple automated test at the entrance is inevitably going to be easy to solve by bots, especially when it's a one-size-fits-all test like recaptcha, so bots have only a single target to aim at. A small-scale unique test will be more successful simply for that reason.

But it seems to me that a better way than banning bots together with humans who fail to pass your Turing test is to check for the behaviour you want. If you don't want spam, have a system that recognises spamming behaviour, rather than traffic lights.


Wrong. Captcha blocks bots and humans alike, so why bother with the fake puzzle at all? Just replace whatever triggers your captcha with a straight-up block, or else please consider a responsible alternative.


ReCaptcha blocks (or deters) an extraordinarily larger percentage of bots than it does humans, by far.


Of course it does. So does an automatic ban. That's precisely not the issue.

I think you probably meant to say reCAPTCHA lets through an extraordinarily large number of humans compared to false positives? Because that would be the relevant metric. Are you sure about that one?


> and with almost any mobile phone speech recognition engine

My only problem with reCAPTCHA is when audio doesn't work (Google decides I'm spamming their network… sure…), because their audio validation seems to use only one rule that says "letters were typed". So I'm not sure how being able to beat it with voice recognition makes it worse.


How hard would it be to create an alternative using GPT-2 or the like?

Create a dozen models based on different things. Street signs, cats, houses, cars, etc. Then show the user a random selection of images generated from different models and say "select all the cats" and they get it right if they choose the images generated from the cat model.
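Setting aside the hard part (actually generating convincing per-category images), the serving and checking side of that idea is simple. A sketch assuming pre-generated image pools keyed by the model that produced them:

```python
import random

# Hypothetical pools of pre-generated image IDs, keyed by the generator that made them.
GENERATED_IMAGES = {
    "cat": ["cat_001.png", "cat_002.png", "cat_003.png"],
    "house": ["house_001.png", "house_002.png"],
    "car": ["car_001.png", "car_002.png"],
}

def make_challenge(target="cat", grid_size=9):
    """Return (prompt, shuffled image list, set of correct image IDs)."""
    pool = GENERATED_IMAGES[target]
    correct = random.sample(pool, k=min(3, len(pool)))
    decoy_pool = [img for cat, imgs in GENERATED_IMAGES.items() if cat != target for img in imgs]
    decoys = random.sample(decoy_pool, k=min(grid_size - len(correct), len(decoy_pool)))
    images = correct + decoys
    random.shuffle(images)
    return f"Select all the {target}s", images, set(correct)

def check_selection(selected, correct_set):
    """The user passes only if they picked exactly the images from the target model."""
    return set(selected) == correct_set
```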


To understand the depth and complexity of reCAPTCHA v2, I highly recommend this:

https://www.quora.com/Why-cant-bots-check-“I-am-not-a-robot”...

Was posted on HN a while ago.


So the short version is that they try to fingerprint the user and then distinguish fingerprints that seem like humans from fingerprints that don't.

The interesting question then becomes how this is going to interact with future browser anti-fingerprinting measures whose purpose is to prevent just that.


I don't doubt that it's far easier to abuse traditional captcha systems, but I wonder how widespread that is. A while ago I did a test with Securimage and tensorflow/python/opencv/keras after I read a Medium post. While it could solve captchas with a little distortion, when I added squiggles, dots, and more distortion it was unable to solve them. I'm sure you could spend more time and create a system that can solve these captchas, but I wonder how much effort some random spammer will put into attacking your blog. Yandex uses traditional captchas, and they don't seem to have any issues.


Honest question: can we start a class action lawsuit for psychological damages due to this? I've experienced this firsthand when trying to use a service through a VPN. I spent legitimately 5 minutes trying to get through, only to get "Please try again" every time even though I selected the tiles meticulously. It is infuriating. I thought I was going crazy.


In fact, Google has a patent on blocking users by means of CAPTCHAs that always return failure[1].

[1] https://patents.google.com/patent/US9407661


> In fact, Google has a patent on blocking users by means of CAPTCHAs that always return failure[1].

Erm, unless I'm mistaken, that patent says it's owned by Juniper, not Google. Google is just hosting the patent document.


You cannot make an appointment on the California DMV website without using Google services, in particular reCAPTCHA. Also, just browsing the website, it tries to log you in.

https://dmv.ca.gov

Additionally, lots of schools now require their students to use Google services.

I hope there is a privacy lawsuit in the future to stop this sort of nonsense.


I just click randomly on Chrome and get through .-.


I've recently had issues with buster where Google detects it, giving me this error:

"Your computer or network may be sending automated queries. To protect our users, we can't process your request right now".

Is there a solution for this?


It may not be buster causing that. I see that sometimes on a VPN, but also when not on a VPN but using Firefox with ghostery/ublock origin, etc.


It may not, but I've seen the same. Only with Buster. And only recently.


It may help if you go to the extension's settings and enable user input simulation and install the client app.

Though Google may block your access to the audio challenge regardless of the browser or extensions you use, see more details here: https://github.com/w3c/apa/issues/25


I also get this sometimes, not even using Buster. Once I was not able to access package tracking information, because Google blocked me from it completely via reCAPTCHA.

I actually do a lot of automated queries from my computer.

I like to scrape and save content that may disappear. Just recently, one psychology website I liked years ago, and which I put a lot of effort into commenting on, silently deleted all 60k user comments, including hundreds I wrote, and started putting old articles behind a paywall. My activity is perfectly legal, as I'm doing all this for my own personal use.

Thankfully I have all the content locally in the database.

Does it mean I should be prevented from accessing third party services that use recaptcha?


reCAPTCHA also just doesn't work in the most populous country in the world. translate.google.cn does, but Google's reCAPTCHA does not. This is a big pain point. Thanks for the link to turingtest, I will certainly test it.


To be fair, lots of things on the internet don’t work in the most populous country in the world.


> the most populous country in the world

The United Nations estimates the current population of China around 50,000 more than the population of India. Given the uncertainty of these numbers, I can't exclude that India already has the most numerous population.


Could be, I don't know. 50,000 is a village in these contexts! I'd really like to explore India, it is, also, vast.


You're correct, quite a lot of things are not accessible in the most populous country in the world.

However, federated things are accessible. The big names - Facebook/Twitter/YouTube/Google - are blocked, along with the services built on them, but it is a blacklist of what's blocked, not a whitelist of what's accessible. Putting Google Analytics tracking in the header of a federated blog (meaning it's not actually federated) is indeed a stupid pain. China's internet is restricted, but it is only restricted 'enough' for the current power.

Edit: And that seems good enough for now. WeChat 'moments' and use of TikTok, from my observation of friends or even taking the train, are on a steep decline. WeChat's future seems mainly to be as a commercial P2P assistant or a very simple blog platform. Both dropped the ball, and mobile payments will not disappear, but the tide has turned (NFC, anyone? This was an already-solved problem). The only real challenger bank China has is China Merchants Bank, but they're after merchants - the clue is in the name. For customer service, and for being the one that might pull a rabbit out of the hat, China Construction Bank. I have no idea how BEA didn't grab mobile payments.


Could it have something to do with the most populous country in the world blocking the rest of the world? For fear someone might massacre square...


Hmmm.. ok.. I could and should write something on this a lot longer.

The government facilitates corruption. The government is a hegemony.

Aside from that broad shot, 10 years ago you could enter the aforementioned square freely; now it's only after going through a 'police' security check, with bags x-rayed and IDs checked.


I thought recaptcha provided alternate domains/hosts not linked to Google so that you can use it in China. Is that not the case anymore?


reCAPTCHA does not work in mainland China (it does in HK, but that's different for now). But translate.google.cn (note the .cn) works fine. Similar visual captchas used on Chinese services tend to focus on Chinese characters on a low-resolution picture background. Training for street names? I don't know.

google.com does not resolve (Gmail works, a bit, over IMAP, but only every few hours or days, depending on the connection, sans VPN).


If this is true, I'd love to hear the alternative! I use reCAPTCHA and hate that my Chinese customers need to do wacky stuff to circumvent it.


See: https://developers.google.com/recaptcha/docs/faq

Look under the section "use recaptcha globally" -- this is what I was referring to. However it's not clear to me if this approach enables use in China or not.


Thanks for that. Changing to www.recaptcha.net right now!

Could be a while before I get enquiries from China but there is only one way to find out.

Google did say 'globally'...


Just out of curiosity, why do you use reCaptcha?


Not sure if this is "officially" supported but I believe you can proxy the `api.js` file yourself without issue.


The photos or audio still need to come from somewhere. The somewhere is google.com. The .com is blocked.


Please see my other reply in this thread: "recaptcha.net" can also be used. Is that blocked in China too? I can't find a clear answer.


I pinged recaptcha.net and got a 50ms response time. Baidu would give a 20ms response time. That's on WiFi. That leads me to think the server responding to these pings is almost certainly in mainland China, I think in Alibaba's IP range, but probably not a CDN. Interesting, thanks.


I find it ironic that out of all things Google, it was translate.google.cn that was given an exemption. There is a meme going around that this was the country's chief censor's personal decision.


reCaptcha works just fine in India. It does have some troubles in the world's second most populous country.


Source for India being most populous?

All sources I can find say that population of China is bigger than India.


reCaptcha may be racist against black people. I hear a lot of AI is; Google dropped the ball here.


> reCAPTCHA discriminates against people with disabilities

It discriminates against people who value their time. Who in the right mind thinks that spending several minutes on captcha is ok?


Without taking any moral stance, it should be noted that accessibility features were (and are) the most successful attack surface against anti-bot measures.


reCAPTCHA is absolutely heinous on an iPhone SE. The pictures are way too small and blurry to figure out what they are looking for half the time, and it's really not built well for zooming.


If you want a good look at the state of the art in this field, look at Ticketmaster.

Ticketmaster uses both recaptcha and a pre-filtering solution they supply based on their own heuristics, as well as a complex user activity tracking system to determine whether you're a bot or not based on the activity you present and traffic you pass, so even if you pass all CAPTCHAs, they still might tell you to pound sand if you try to reserve something.

In the last few weeks, for select sales, they've even required unique phone numbers, to which they will SMS or call and relay a code that you need to enter just to get a single place in line for a sale.

I'm not sure of any company more actively at the forefront of preventing automated access than Ticketmaster (which makes it kind of funny when everyone chimes in about how Ticketmaster doesn't do anything to prevent brokers from getting all the tickets).

The problem is that what Ticketmaster is up against is people running specialized software that's able to emulate a browser, which ties into services that are specifically designed to beat CAPTCHAs in an automated manner using mechanical turk type solutions, but at a very low cost.[1] I have reliable testimony that some people spin up the largest AWS instance for an hour or so as needed, run this software, use a proxying service, and make 8k connections to queue up for tickets on a sale. Each AWS machine is another 8k positions in the queue. Every new layer Ticketmaster throws into the verification process knocks these people out for a couple weeks, until the company providing the software (which I believe charges a small percentage for every ticket purchased, so they fix problems fast) works around it. The arms race metaphor is very apt.

That's just one of the companies trying to circumvent Ticketmaster's road blocks for brokers. There are others that try to automate their purchasing to varying degrees. I myself work for a broker that takes a very different approach, where we use (relatively) very minimal automation, and have a person in front of a browser for every purchase (and we don't have many people at all), and instead try to make select purchases based on complex analysis and lots of data. Even that's gotten much harder in the last few years as venues and promoters have learned to play with the allocations of tickets, and hold large chunks of the inventory back to be released later at higher cost. I don't really see anything wrong with that, it's a market response to supply and demand, but it is unfortunately hidden in a purposeful manner, which affects not only brokers but the end consumer, as market information is purposefully obfuscated (which makes the markets less efficient).

I've written on this multiple times before, so if anyone finds this interesting, just do an HN search for my username and Ticketmaster together.

1: https://anti-captcha.com/ (Scroll down and read their animated infographic for what is possibly the most amazing graphical metaphor of this I can imagine at step 4. It's so disturbing it's funny).


> it gaslights you into thinking you did not solve the challenge correctly, which is plain cruel

That's interesting. Unless you are talking about having to click on more than one "page" of tiles (as illustrated in the video in the OP), I guess I don't run into reCAPTCHA often enough to have noticed this phenomenon. Can you elaborate on what you mean by that?


reCaptcha v3 works well for me. There are no challenges anymore and it just gives you a score based on whether it thinks the user is a bot/spammer, then you can do whatever with that. Personally if the score is low enough I just place the user in a restricted user group that needs approval on certain site actions.


Was just looking into using v3 today. Can you share what you consider to be low enough? I haven't seen any guidance on thresholds


Yeah I have the threshold set at 0.6. Anything below that gets put in the restricted usergroup.


Google recommends 0.5 as a default threshold, and you can then tweak it based on your analysis of the scores in the dashboard.
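For reference, the v3 flow described above comes down to reading the `score` field out of the siteverify response and comparing it to your threshold. A sketch, assuming the `requests` library and a placeholder secret (0.6 per the comment above, 0.5 being Google's suggested default):

```python
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder
SCORE_THRESHOLD = 0.6   # 0.6 per the comment above; Google's suggested default is 0.5

def classify(token):
    """Return 'rejected', 'restricted', or 'normal' based on the v3 score."""
    result = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    ).json()
    if not result.get("success"):
        return "rejected"   # invalid or expired token
    return "normal" if result.get("score", 0.0) >= SCORE_THRESHOLD else "restricted"
```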


Audio is not offered if you have non-default privacy settings, so this doesn't work when you're getting the most time-consuming captchas. So your extension is good for the captchas which take 15-20 seconds, but not the 1-minute+ ones, unfortunately.


Thanks for this. Extensions like this one make Firefox for Android worth it despite all the quirks.


I just wanted to say thanks for posting this. I installed your addon when I first read the HN comments yesterday, and looking forward to testing out your work. It looked great!


Just adding another thank you. It has made the internet accessible to me and other humans again. Cheers!


None of your complaints are applicable with reCAPTCHA V3.


>If you're a developer, please consider replacing reCAPTCHA on your site with an alternative

I second this (for the same reasons that you cite), and it's fresh in my mind as I just recently began reimplementing authentication for my personal CMS. reCAPTCHA is not a nice thing to do to your users. And I also don't want to feed The Beast.


> and it gaslights you into thinking you did not solve the challenge correctly, which is plain cruel.

It's good to see some confirmation that you're not insane. Google's ReCAPTCHA is plain EVIL.


I've never understood what happened to reCAPTCHA, it was originally so great and is now just so, so toxic.

Originally it was an awesome solution based on OCR'ing books that usually worked quickly on the first try, and almost never took more than two.

Then it turned into a single checkbox (analyzing mouse movement) so it was even faster... and I remember some simple image-based ones like "select the images of cats" that were also easy to get right. So even better.

But THEN... in the past couple of years, the image-matching started asking exclusively for analysis of street images, which has two huge problems:

1) The images are so blurry and ambiguous it's really hard to get right, it feels like a test designed to make you fail

2) You never know how far you have to go -- you keep clicking items, they keep replacing them with new ones, and there's zero indication of whether you're almost done or whether you're getting better or worse.

Once I did one for three minutes straight, neither passing nor failing, until I just gave up and left the page... if it's a bug, that should never happen. If that's supposed to be able to happen, that's the apex of asshole design. Either way, it's a failure in every way.


There's a third problem: quite a bit of the stuff they present is (almost) uniquely American and presents a recognition challenge in other cultural contexts. That yellow vehicle? Looks nothing like a bus in most other parts of the world. And so the rest of the world gets to learn what an American Bus looks like... Not, I think, what was intended.


Or it tells you to pick out pictures of cars and shows you a pickup truck. Now you have to figure out if people would call that a car or not. How about a delivery truck? A motorcycle?

Or it will ask for pictures of crosswalks, and you have to decide if 3 pixels of a crosswalk in the corner of one of the pictures counts.


If it makes you feel any better, I'm fairly sure the answers to those questions don't count. I know I've gotten some reCAPTCHAs "wrong" and gotten marked as a human. It's picking up on a lot of signals, not just whether or not you're "right". So, the good news is you can relax, and safely rewrite all the questions to "Do I think this is a store front?" or "Do I think this square counts as a crosswalk?" or whatever without loss.


My "favorite" is the one where you have to select the boxes with traffic lights. Does that mean just the actual lights, or the entire structure? More importantly, what does Google's AI think the answer is?


"Crosswalk" is also the American term for a pedestrian crossing.


I often get asked to identify store fronts. They are the worst.

The pictures are blurry and positioned at weird angles. There are lots of signs with East Asian characters (I'm not informed enough to guess which writing system they belong to) and I have no idea whether they are store fronts or not.

Is a sign to a dentist's office a store front? Generally it seems like anything with a sign above some sort of door or window qualifies as a store front.


Came here to say the same thing. It's literally impossible to distinguish a store from any other kind of business in many of those pictures. If Google wants to do behavioral fingerprinting they should just say so instead of pretending to do image recognition. But I guess some people just lie so much that they forget how to tell the truth.


What makes you think any store is not a store front? I realize that’s part of the problem, I’m just wondering why you wouldn’t assume the very literal “it is the front of a store” interpretation.


A commercial building with a sign on it might not be a store. They didn't ask for officefronts or warehousefronts. What about a bank or brokerage? A dental office or urgent-care center? Those can look a lot like storefronts, but whether they're considered such is pretty arbitrary.


I understand where you’re coming from and I’m having difficulty explaining the difference... it mostly comes down to what you consider a store (or a shop or whatever you call it). I know they could localize it more, but I feel like it should be pretty obvious what they’re talking about - a place of business selling goods to the general public. Whatever you call that, banks and dentists and warehouses and medical facilities don’t really apply.

So yes, it’s arbitrary, but it’s supposed to be. It’s about your gut feeling as a human because that’s the whole reason they’re showing you any of these images.

If it “looks a lot like” a storefront then you’ve really got the same problem as everyone else in the comments: they’re small, blurry, images and it’s hard to tell what it is. That’s also the whole point: their algorithms can’t tell, so they want a general consensus from users. There are images they know and use as a control, but some percentage of the ones you see they’re legitimately not sure about.


E.g. “Spot the fire hydrant” - oh, it’s those things that cops drive over in Hollywood movies. I don’t know if other countries have them too, but it seems distinctly American, and this captcha is oddly common.


Are you in America, or using a VPN that makes you appear to be in America?


NZer here. The captchas are usually American places with American themes.

I have definitely seen the "fire-hydrant" one, and we don't have fire hydrants (they are underground below well marked covers that are illegal to park on or placed where you can't park).

And coming from a first-world Western country, I have definitely been flummoxed by at least one that was too American for me to decipher. I feel sorry for anyone that doesn't watch American media.


Huh, there are fire hydrants here in Brazil, although not as common as they were a while ago!


I see that stuff too. Not American.


I am from India, not using a VPN. Except for storefronts, everything I get looks like it's from the US - traffic lights, cars, buses (including yellow school buses), crosswalks, etc.


That hasn't been my experience. Most of the "storefronts" are (from what I can tell) based on Asia. I almost never see English signs. I'm still able to complete these challenges with only a little bit of difficulty.


Because it’s still created in an entirely American context. For example, the word storefront is an Americanism. The more commonly used word in the UK is shopfront, and in other English speaking countries they may just call them shops or stores, without the addition of the word front.


Fourth problem: How vague the instructions are. When I'm asked to click the boxes that contain signs, do I include the poles?


Yeah, this one puzzles me too. Generally, it seems like signs and traffic lights don't include supports, poles, etc.


Totally this. I'm British and am probably more exposed to American culture than other nationalities on average, and yet reCAPTCHA still sometimes leaves me clueless on some Americanism, that is when it's not driving me crazy with its infinite loop. For other nationalities it must be straight-up discrimination.

I sometimes wonder if these projects are actually internal astroturfing, someone trying to make people hate Google from the inside, it's so bad it must be intentional right?


Originally it didn't belong to Google; it was an acquisition. I remember seeing a TED talk about it.

To me it constantly feels like I'm working for Google for free on their AI projects, which is very annoying compared to helping a smaller company OCR books.


Trying to convince a robot that you aren’t a robot by teaching a robot how to look at pictures is a pretty absurd state of the world.

When they reboot the Matrix, instead of being used as batteries, the machines will keep humans around for machine learning test sets.


I think that was the original story for the matrix https://scifi.stackexchange.com/questions/19817/was-executiv...


Well, it might have been too close to the storyline of Hyperion Cantos (which probably got it from somewhere else).


You aren't working for free. You get access to a website and the publisher gets bot protection. It's a 3 way win-win-win transaction.


I think two things happened:

1) Computer vision got a lot better over the past few years. It's also become way easier for the average Joe bot operator to run cutting-edge stuff. OCR tasks don't cut it for distinguishing people from machines any more. Every time I see a blog post about a new computer vision architecture or how some random developer trained a neural network to get an X% result on benchmark Y, I think to myself CAPTCHAs are going to get more annoying.

2) The frequency at which most people have to solve a CAPTCHA has gone way down. In the beginning, I remember having to solve a CAPTCHA every single time I did anything on some sites. Now, I can't even remember the last time I had to do more than just check the checkbox. So, the amount of annoyance is amortized over a larger number of sessions, and Google probably feels like they can ask the user to complete more tasks as a result.


I've noticed the opposite on #2, especially in the last year or so. I've been solving a lot more captchas than I used to. I run Firefox with a lot of privacy focused add ons and I don't stay logged in to Google, I wonder if those have something to do with it.


Yes, they most likely do have something to do with it. If Google is unable to ID you in some way (e.g. browser fingerprint, cookies, IP, etc) and determine you're a good Internet citizen, they'll assume that you could be a bot and offer challenging Captchas. It's annoying, but on the bright side it proves that your privacy add-ons are working!


Same here. When this highly advertised service was launched ('just a click!') it worked perfectly. Slowly, over the past couple of years, they deliberately replaced that wonderful service with another one where we act as Google's unpaid workers.


Captcha data has been used to train ML models for a very long time. What's changed recently is that simple stuff like OCR has already been solved and democratized, so the simple puzzles no longer work.


I'm not talking about the simple puzzles or 'words' that reCaptcha initially used to show. I'm talking about their 'improved' way of testing whether you are a bot by just making you click a checkbox. That doesn't work anymore (most of the time).


The frequency goes down as Google identifies you with stronger confidence. Try browsing from a VPN and you will spend half your time solving CAPTCHAs.


I am also getting way more captchas, at least over the last 6 months. Exclusively using Firefox with clear-everything-on-exit, multiple profiles, the fingerprinting-resistance flag on, some addons, etc. No VPN. I get a captcha almost all the time, even for Google searches from the Firefox address bar (one out of 10 searches, I think). But I never get a captcha on Google's own websites (Gmail, YouTube, etc.).


2) isn't true at all for me. I've always loved captchas, and they have become a huuuuuge annoyance as soon as I'm using a VPN, Tor, a weird wifi, a non-typical device, etc.

It is so freaking slow. I sometimes lose 60s to complete a captcha.


An insightful remark about ReCaptcha on HN recently (I don't have a link) was that it went from being "are you human" to "which human are you".


Ha, ha, very accurate observation.

And if Google keeps up the pressure and nothing pushes back, soon the answer will be "Number 17 of 312 still using Firefox".

I still can't believe how Google has changed their tune - from "don't be evil" to being worse than MS ever was, which is quite an achievement in itself.


Google is in some ways much worse in its impact than MS, but I suspect that hiring a bunch of people under the "don't be evil" mantra (and baking that "we're the good guys" into the culture) has helped hold them back from some bad behavior.

At the same time an implicit belief in "we're the good guys" (combined with indoctrination including interview hazing rituals) can enable bad behavior, because then: "of course whatever we do is good, by definition, because we're the good guys" and then not questioned. MS did some really underhanded and insidious things with its power, and it's easier to see some of Google's behavior as due more to hubris/brainwashing.

I've started to use the CS101 whiteboard hazing as a litmus test for whether there's any point in trying to do good at Google, for my own career. So long as they insist on subjecting everyone to that (starting with people having just spent 4 years and a quarter of a million dollars on a Stanford CS education, and then people with verifiable experience on top of that), and also considering having been caught in an abusive hiring/mobility conspiracy at the executive level, I think the CS101 whiteboard ridiculousness is not a good sign for corporate ego and intentions. It's also not great when CS students focus on drilling for that, to the exclusion of other things. For myself, if I applied anyway, I'd be fooling myself that I wasn't mainly after the compensation package, rather than wanting to have positive impact.


> I still can't believe how Google has changed their tune - from "don't be evil" to being worse than MS ever was, which is quite an achievement in itself.

It's called "selling out".


It sounds funny, but I don't get it. ReCaptcha doesn't identify you, does it?


To the website? No. To Google? Almost certainly given how it works.


I can imagine that, if Google already knows enough about you, just clicking "I'm not a bot" would be enough. Though I wouldn't know.

It seems like another way to punish people for caring about privacy.


There’s also this to consider: Google knowing enough about you to know you’re a human, and then wanting to use you to train. That’s why in some cases you can get away with just spamming whatever the hell you want in the picture grid. Because it trusts you enough to train it.


> 1) The images are so blurry and ambiguous it's really hard to get right, it feels like a test designed to make you fail

On top of that, I think some of the training sets are wrong. Multiple times I've been asked to find traffic signs, but it would only let me pass when including street signs.


There's also the issue that it will lie to you if the algorithm decides it simply doesn't like you. Which means you'll end up doing at least a couple of rounds before it decides to let you through.


Rather, if it does like you (because you frequently get it right), it'll ask you to give it extra data.


Fascinating. Conspiracy theories around software. Might make for a fun sci-fi creative writing exercise.


I always envisioned their devious model to be something like:

- You want to train on an unlabeled dataset, label it along the way.

- You have a set of untrusted validators, some with no history, some with known credibility and accuracy scores. And you have a lot of them.

- You do a kind of zero-knowledge proof by showing the unlabeled dataset to validators you know you can trust because of their historically high success rate, which you've already established by asking them to label a dataset you already have high confidence in.

Kind of like how a blue-green colorblind person could find out which pen is blue and which is green when surrounded by people he can't fully trust: ask the people around you, and maybe even show the same person the same pen (or a really dead-easy captcha) twice in a row. If they lie to you both times, they are not to be trusted.
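
A minimal sketch of that scheme, purely as an illustration (the items, labels, thresholds, and names below are all made up; this is not Google's implementation): seed the pool with items whose labels you're already confident in, score each validator by how often they agree with those known answers, and only accept labels for unknown items from validators above a trust threshold.

  # Hypothetical reputation-weighted labeling; nothing here is Google's actual code.
  from collections import Counter, defaultdict

  TRUST_THRESHOLD = 0.8   # arbitrary accuracy cutoff for "trusted" validators
  MIN_KNOWN_ANSWERS = 1   # toy value; in practice this would be much higher

  known_labels = {"img_1": "traffic_light"}          # items we're already confident about
  answers = [                                        # (validator, item, label) observations
      ("alice", "img_1", "traffic_light"),
      ("alice", "img_3", "bus"),
      ("bob",   "img_1", "palm_tree"),
      ("bob",   "img_3", "fire_hydrant"),
  ]

  # Score each validator against the items whose labels we already know.
  seen, correct = Counter(), Counter()
  for validator, item, label in answers:
      if item in known_labels:
          seen[validator] += 1
          correct[validator] += (label == known_labels[item])

  trusted = {v for v in seen
             if seen[v] >= MIN_KNOWN_ANSWERS and correct[v] / seen[v] >= TRUST_THRESHOLD}

  # Accept majority labels for unknown items, but only from trusted validators.
  votes = defaultdict(Counter)
  for validator, item, label in answers:
      if validator in trusted and item not in known_labels:
          votes[item][label] += 1

  new_labels = {item: counts.most_common(1)[0][0] for item, counts in votes.items()}
  print(trusted, new_labels)   # {'alice'} {'img_3': 'bus'}

Mixing known and unknown items in the same challenge (as the old two-word OCR reCAPTCHA did) is what would let such a system keep re-checking validator credibility over time.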


I've found that if you use Chrome or Brave you can get multiple boxes wrong and still get through, even on a cheap VPN IP.


Here's a hint: VPNs do almost nothing to safeguard you from modern fingerprinting techniques. If you're using any browser [1] but Firefox or Safari, Google probably knows exactly who you are and is just making you do the boxes for shits & giggles.

[1] except those that reCaptcha doesn't support.


You have to answer the way most people would answer, not what is the most technically correct.

I guess if your adversary is a dogmatic AI then that might be by design.


I keep expecting it to eventually ask me to "click on the pictures of terrorists" and them using it to train automatic drone targeting software.


They also changed it so that if you've seemed human in the past, they can determine that you're probably human now.

This data is a few years old but I imagine it's the same based on my experience.

They're using your cookie + IP + your account data to determine if you're probably a human.

A LOT of reCAPTCHA sites never prompt you. You only know if it's there because you're on Tor or something.
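
Purely to illustrate the idea (the signal names, weights, and threshold below are invented; Google has never published how the real scoring works), the gist is combining identity signals into a single trust score and only showing an interactive challenge when that score is low:

  # Invented signals, weights, and threshold, purely for illustration.
  SIGNAL_WEIGHTS = {
      "has_long_lived_cookie": 0.4,
      "ip_has_clean_history": 0.3,
      "logged_into_google_account": 0.3,
  }
  CHALLENGE_THRESHOLD = 0.5   # below this, show an interactive challenge

  def trust_score(signals):
      """Weighted sum of boolean signals, in [0, 1]."""
      return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

  def needs_challenge(signals):
      return trust_score(signals) < CHALLENGE_THRESHOLD

  # A Tor user with no cookies and no account gets challenged; a signed-in
  # user with a long-lived cookie sails through without ever seeing the widget.
  print(needs_challenge({}))                                                  # True
  print(needs_challenge({"has_long_lived_cookie": True,
                         "logged_into_google_account": True}))               # False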


> A LOT of reCAPTCHA sites never prompt you.

That has only happened to me in Chrome, not Firefox or Safari. Which is the subject of this article.


Yea it was much better when it was run by Carnegie Mellon. I guess selling it to Google seemed like a good idea at the time.

Today I feel like Google uses it mostly for their self-driving-car computer vision projects.


I believe even worse than being shown new sets of images is when the reCAPTCHA system gives you a "low trust score" and intentionally fades out the selected images, very slowly, and replaces them with new images of the same type. It just feels downright abusive to the end user. Good luck if you have tweaked any browser settings to be more amenable to privacy!

I wish more sites would implement a jigsaw-puzzle-style captcha similar to the Binance login captcha, but I can't speak to its efficacy in defeating bots.


Sometimes it is straight up wrong too. I once got a picture of a sign with a traffic light printed on it and was asked to identify the traffic lights. If you selected nothing it wouldn't let you go ahead, so I clicked the squares with the sign and it let me proceed. I don't even think it should be that difficult to tell it wasn't a real traffic light, since all of its colors were lit; a typical in-use light only shows one color at a time.


>Originally it was an awesome solution based on OCR'ing books that usually worked quickly on the first try, and almost never took more than two.

People kept trolling it by typing the test word correctly, and random garbage instead of the OCR word. It was easy to spot which one was which. Source: I was one of these people.


It is made by Google to train their neural networks. Neural networks are evolving and need harder examples for training.


Because it is an adversarial system, the busters are getting better, so reCaptcha needs to catch up.


What happened? The spambot algorithms have gotten better and can now defeat the simple tasks. It's a perpetual arms race of you vs. the spambot developers.


They're using the service to train self-driving cars to recognize traffic lights, bicyclists, etc.


Big rant: there are few things I hate more than filling out their endless, useless CAPTCHAs when browsing websites that have nothing to do with Google.

Google is a hypocritical pile of burning garbage. They use bots, right? They scrape websites, they infest everything from my banking website to console emulators with their tracking, and yet we little people are not allowed to scrape or interface with the web programmatically.

I want them to burn so badly, I hope the EU breaks them up. Screw captcha, screw AWP, screw them.


It's the web developer that doesn't want you to interact with their site programmatically.


Google and Facebook tend to do it as a matter of policy, and while they say it's to protect privacy and prevent abuse, it also furthers the "walled garden" effect.


> It's the web developer that doesn't want you to interact with their site programmatically.

Yeah, because too many people abuse any hint of such functionality to peddle fake Viagra, penny-stock scams, MLMs, ICOs, or good old SEO link spam.


> Big rant: there are few things I hate more than filling out their endless, useless CAPTCHAs when browsing websites that have nothing to do with Google

In some cases, the blame should be put on the site runners. I get a reCAPTCHA when logging into my Patreon account. I've been paying them $10+/month for years now; they should know by now that I'm not a spammer.


They're not hypocritical but cynical. There is an appreciable difference.


I thought you could disallow Googlebot in your robots.txt file?


Google respects robots.txt


I thought this was just me and their stupid captchas being impossible for even humans to solve; turns out I was being gaslighted this entire time and they're just discriminating against Firefox users? How does the EU or someone not shut down this sort of anti-competitive, monopolistic nonsense? I didn't think I could get more furious about having to struggle with these captchas all day, but somehow I am. Please, everyone, stop using reCAPTCHA on your sites; it's not worth the pain it costs your users.


> they're just discriminating against Firefox users?
At least part of the behaviour shown in the video depends on factors like cookies, IP address, and whether you have features like anti-fingerprinting protection turned on. [1]

Recaptcha is frustrating and I dislike it, especially the slow fade-ins and multiple challenges, but if you repeat the test shown in the video you won't find it 100% repeatable just because you're using Firefox.

[1] https://github.com/google/recaptcha/issues/268#issuecomment-...


I just wasted ~15 minutes doing the Disqus login captcha under different conditions. It turns out that as soon as uMatrix is enabled (and blocks 18 cookies from google.com and 5 more from www.google.com), it starts to act up and gets annoying, at least for me.

It then takes between 1 minute and 1 minute 30 seconds to get past the reCAPTCHA when blocking those cookies - I was certain I was 100% correct in most cases, and it kept asking me to solve more and more.

Most of the time spent solving the captchas comes from the countless 4-second fade-ins applied via inline style when cookies are blocked (as opposed to 1-second fade-ins via CSS when cookies are set).

I'm curious why they would add 3 seconds to the fade-in if their cookies are blocked. Does that help fight off bots, or does Google just want to punish me for blocking their cookies?


That's what I don't understand. If you're building a bot to get past reCAPTCHA, you're almost certainly in some Selenium/headless Chrome environment, with full Chrome support for cookies, JavaScript, you name it. There are certain methods of detecting such environments based on their environment variables, but there are, again, workarounds to patch those.

Also, the fade is irrelevant, because the bot already has access to the image without the fade (although it still has to await the fade's completion to continue).


The fade thing is to rate limit attackers.

By blocking specific cookies you're making yourself look like a certain kind of botnet, so obviously you're going to have a difficult time convincing the site that you're a legitimate user.

Most users don't block normal cookies, so if you go tweaking the machinery that manages the relationship between your browser and the site, then be prepared to deal with a buggy experience. This is what it means when they say that what you're doing is "unsupported." Nobody is going to spend any time optimizing for your weird setup.


Once again, Google obstructs the web for people who take even basic privacy measures.


As far as I can tell it's 100% repeatable: every now and then one works on the first try in Firefox, but it almost never does. If I use it in Chrome on the same sites, it works. Then I go back to Firefox, and sure enough it doesn't work again. Maybe there's something else making it work for you? I don't know what other factor there could be; some privacy settings in Firefox, maybe?


After a couple of minutes they surely should have an idea that I am a human, right?

Especially when I'm logged in with my 12+ year old paid account?

I won't say anything bad about Googlers, but between this and the deeply irrelevant ads I get despite all their metrics, the company seems deeply dysfunctional these days.


If it can be statistically proven that this is occurring more on Firefox than Chrome then Google has a really, really big problem. The burden to make sure it isn’t is on them, most especially in the EU. Google is facing a very real future where they will have no web browser and possibly no operating system.


It absolutely is happening more on Firefox. I open Chrome almost exclusively to bypass CAPTCHAS, and I doubt they will get in trouble because Chrome gives more detailed data due to its invasive lack of privacy. You can't really blame Google for using its own tech to provide "better" results, but it is high time we started blaming them for the massive privacy violations they use to make their convenience work.


> You can't really blame Google for using its own tech to provide "better" results ...

Sounds like antitrust to me.


Safari too. If it also happens in the new Edge, I will be sad but not surprised.


The website author seems to be Russian, which might be an indicator. But that's not a good enough excuse for such terrible UX.

Using Firefox shouldn't be an indicator of anything malicious.


EU may end up dealing with it, they need complaints first. You’d be amazed how few people fill out complaints with the government. I just filed a complaint about this with the US department of justice antitrust division, feel free to do so as well so they realize how abusive this is!


> EU may end up dealing with it, they need complaints first. You’d be amazed how few people fill out complaints with the government. I just filed a complaint about this with the US department of justice antitrust division, feel free to do so as well so they realize how abusive this is!

How do you go about filing this complaint? I'm sure many others (myself included) are interested


Here’s the page with the instructions: https://www.justice.gov/atr/report-violations In this case, I let them know that Google was using their position as the market-dominant browser company to make it more difficult for consumers to use alternative browsers by making captchas much harder to use on alternative browsers. I explained what a captcha was and how it affected me as a consumer using Firefox.


Care to share your complaint?


I saw it in Chrome Incognito mode a few years back, though less frequently recently. It has also happened to me with other browsers like Edge, Chrome on iOS, etc.


Google Captcha is a complete mess at this point, and I often leave websites that use it if it's not essential to what I am doing.


I don't log in to Google Captcha sites anymore, unless I absolutely have to.

Their discrimination against FF users has been fairly evident over the past year or so.

It's amazing how my identification abilities improve exponentially by using Chrome instead of Firefox.


Reading these types of comments on HN, you'd think HN doesn't use Recaptcha for login/register. ;)

Easy to hate on Recaptcha while reaping the rewards of participating in a community that deals with less automated spam because of it. :)


> HN doesn't use Recaptcha for login/register

I just created a new account to check; not even so much as a Recaptcha url in the page source.


Hi there!


Really? I use a VPN most of the time and have not once had a captcha for logging into Hacker News.


I signed up years ago and never really have to log in, and I have never seen a captcha on HN. Not saying it doesn't exist, but I've had myriad captcha issues on other sites, and never once here.


Even more annoying are those sites that insist on using it even though they know I'm human -- for reasons like I've paid them some money or jumped through their KYC hoops. At that point it's just being rude and exploitative, and, personally, I've reached the point where I'll simply take my business elsewhere if a site chooses to treat me with so little basic respect.


There's a case for preventing bot action even if the bot is willing to pay. Though putting a captcha right after a payment step is borderline fraud.

What's KYC?



Oh, you mean like MongoDB Atlas? (Although it looks like they've gotten rid of it or toned it down.) There were days when I couldn't log in because reCAPTCHA just refused to let me.


I will cancel services that use it too. I am not wasting 5 literal minutes to solve impossible captchas because I don't use Chrome.


My power company (the sole government-run provider in my area) now has reCAPTCHA on their payment form.


Which was my point from my earlier downvoted comment. The idea that training Google AI is a condition of use is ridiculous. You have to provide free labor to Google as a condition of paying your electric bill. You also have to share your data with Google — even if you decide not to complete the Captcha.


The newest version is going to be invisible - i.e. it just "works" without a questionnaire. It's based on a scoring system and doesn't prompt users unless the website owner chooses to challenge those below a specific score. You've likely already used it but don't remember it, because it was invisible.


That was supposed to be what this version was (reCaptcha v3). As a matter of fact, however, quite a few of us get extremely long or unsolvable captchas every time.


No, the OP is showing v2. v3 doesn't have any UI for end-users: https://developers.google.com/recaptcha/docs/v3 It is simply a scoring system, applying an ML model to typical actions for your website.

It's up to the site owner to determine how to handle those that don't meet v3's score, which can be a traditional CAPTCHA or hopefully something more effective and forgiving to humans: https://www.w3.org/TR/turingtest/
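
For reference, here's a rough sketch of what acting on the v3 score can look like server-side, following the docs linked above; the 0.5 cutoff and the idea of falling back to a secondary challenge are the site owner's choices, not anything the API mandates:

  # Server-side verification of a reCAPTCHA v3 token (sketch; the secret is a placeholder).
  import requests

  VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
  SECRET = "your-secret-key"     # issued per site by the reCAPTCHA admin console
  SCORE_THRESHOLD = 0.5          # arbitrary; tune per site and per action

  def looks_human(token, remote_ip=None):
      payload = {"secret": SECRET, "response": token}
      if remote_ip:
          payload["remoteip"] = remote_ip
      result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
      # v3 responses include "success" plus a "score" between 0.0 and 1.0.
      return result.get("success", False) and result.get("score", 0.0) >= SCORE_THRESHOLD

  # If looks_human() returns False, the site can fall back to a secondary challenge
  # (ideally something more forgiving than an image grid) instead of rejecting outright.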


I was assuming that most people were using v3, my browser was flunking the scoring test, and v2 with the UI was being shown as a backup. Did everyone decide not to use v3 for some reason?


v3 is the newest version; it came out only recently and requires changes to the front-end implementation. You have probably used it, but because it didn't prompt you, you don't remember it (i.e. survivorship bias).


I see - I assumed your top level comment was talking about something after v3, since that's already out. It would be interesting to see which sites have already implemented it. Maybe there's a userscript or something that can detect it in the page?

Personally I'm skeptical it will ever work correctly for me without tinkering, because I block third party requests (especially to Google) by default.


Same... so frustrating. If it's Google's goal to create frustration for non-Chrome users that would be evil :'(


TurboTax uses Google Captcha when trying to import information from financial institutions.

While filing taxes, on several occasions I had to just give up and try again hours later, because the captcha wouldn't let me through and, after several attempts, TurboTax would throw an error telling me to come back later.

It was literally a nightmare.


TurboTax itself is a dumpster fire, too. I recommend doing taxes on paper just to avoid touching Intuit or Google in any way.


I don't think that's an option for anything other than the 1040-EZ. I get lost filing with TurboTax; I can't even imagine how it would be on paper.


It's not that hard if your taxes are simple (standard deduction, maybe some capital gains). Keep in mind the filing companies have an incentive to make the process complicated.


I don't think the filing companies are actively making the forms harder to fill out by hand, but the problem is that the IRS has no incentive to minimize the time it takes to file taxes.


Actually, big accounting firms and tax automation companies spend a lot of money lobbying congress to keep the tax code complicated. It would save everyone a lot of time and money if the IRS would just tell us what we owe - they already know the answer, it's not like they just blindly accept whatever we say.


It's a bit more expensive, but for this type of thing and other reasons, I now use a CPA.


... who then use an Intuit product to file your taxes


If it's a CPA that's not on their own, there's a good chance they're using something vertical market instead - Wolters Kluwer, Thomson Reuters, not sure what others.


Interesting. I wonder if this is something TurboTax itself is doing, or something the banks are doing that TurboTax is making you bypass in order to scrape the data.


Not really sure; it was shown after the credentials for the financial institution were entered.

However, it was shown for each financial institution, so it is possible that the financial institutions (or the API provider) were doing it, though it is equally likely that TurboTax just has a bad implementation. Since TurboTax can already assume I'm a human, I wonder if there is a regulatory requirement, or whether the API provider is doing it.

