If you're a developer, please consider replacing reCAPTCHA on your site with an alternative. reCAPTCHA discriminates against people with disabilities and those who seek privacy, and it gaslights you into thinking you did not solve the challenge correctly, which is plain cruel.
Here are some reCAPTCHA alternatives: https://www.w3.org/TR/turingtest/
All of the "interactive stand-alone approaches" from that page can be beaten with run-of-the-mill OCR (other than perhaps the 3d challenge) and with almost any mobile phone speech recognition engine (and, if the attacker has the money, can send it off to Google's cloud speech-to-text).
All of the non-interactive approaches from that page require constant tuning and upkeep to make sure bots aren't able to sign up or abuse systems. They're also not *that* secure if your website is targeted and scripts are written specifically to evade your anti-abuse methods.
Sure, great, but when I see behavior like the above, I just hit back and add the site to my router's firewall blacklist. If it's this much of a PITA to "solve" a captcha correctly but I keep getting the middle finger, I don't give a crap anymore. Your site isn't worth visiting if I have to spend literally minutes "solving" captchas for Google's stupid AI, which keeps treating me like a bot even after I've proven I'm not.
Just realize that by using reCAPTCHA, this is what you're forcing some users to deal with. And I deal with it by making sure I never come back to your site again once you've wasted minutes of my time just trying to get to your page. Even if it's Google's fault for being jerks, I don't care. You chose to implement it.
Ok rant mode off and stepping off my personal soap box.
I've run into state and local tax agencies, utility companies, and large healthcare companies that require Google's reCAPTCHA. So, unless you don't want healthcare, to have water service at your home, or you're in the mood to just shut down your business, you have to suck it up.
‘Need’ here means exhausted all other opportunities, and have built alternative accessible ways of accessing the same service. I’d certainly have expected a service to have investigated a self-hosted solution, and I doubt a reliance on 3rd party JS from a Google service would fly, regardless of the service, as it breaks a whole bunch of separate resilience guidelines.
There was a time not long ago, before wheelchair ramps or accessible doors were commonplace, when these people were literally shut out of society.
It's the same with captchas forcing privacy-conscious users off the internet.
Or: people who need a wheelchair are protected by anti-discriminatory laws, while people who prefer not to use Google products aren't.
Why make me solve a Captcha to see static content?
Why make me solve a Captcha to log in when I've already completed one to register?
Why make me solve a Captcha to pay utility bills? Is there some underground group of deviants going around surreptitiously paying other people's utility bills? The monsters.
Fair point. I usually run into this when using Tor, or a VPN, to access content behind Cloudflare and similar services. It's anti-abuse stuff, but it's often overly aggressive with handing out captchas.
> Why make me solve a Captcha to log in when I've already completed one to register?
So attackers cannot password spray. This typically happens after attackers have gotten access to the latest database breach and are blindly trying username/password combinations.
> Why make me solve a Captcha to pay utility bills? Is there some underground group of deviants going around surreptitiously paying other people's utility bills?
Sounds like a strange place to have a captcha indeed. What information is needed in the form to submit it? Does it validate anything an attacker might want to scrape? I guess they added it for a reason.
This is not necessarily a reasonable assumption. People often do things because they heard it was a good practice, or because it solves a problem they don't actually have, but think they might, or arbitrarily without giving it much thought.
A simple ratelimit takes care of that. Plus, it's not like attackers would be easily defeated by a CAPTCHA anyway --- there are services selling batches of valid tokens, likely generated by actual humans or very close emulations thereof, for ReCAPTCHA.
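The "simple ratelimit" idea is easy to sketch. Below is a minimal in-memory sliding-window limiter; the window size, attempt cap, and function names are invented for illustration, and a real deployment would back this with shared storage rather than a process-local dict:

```python
import time
from collections import defaultdict, deque

# Invented parameters: at most 5 login attempts per IP per 60 seconds.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts


def allow_login_attempt(ip, now=None):
    """Return True if this IP may attempt a login, False if rate-limited."""
    now = time.monotonic() if now is None else now
    window = _attempts[ip]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False
    window.append(now)
    return True
```

The point of the thread stands either way: this is a few lines of code, whereas a CAPTCHA solving service costs the attacker seconds per attempt but costs your legitimate users even more.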
Captcha-solving services also have costs beyond the money they charge: they add time and extra resource usage on the machines running the attack. A quick look at one service showed an average response time of 40 seconds per challenge (this value changed a lot when refreshing the page). The attacker has now gone from the 200ms range per attempt to several seconds, slowing them down a lot. This gives defenders additional time to respond, and it's also a useful signal for detecting malicious logins.
This should waste less time than reCAPTCHAs. I know it's not 1:1 in terms of pros/cons, but it gets a good subset of the advantages without the key disadvantages mentioned above.
Secondly, botnets can, and presumably do, randomize which accounts they try, too.
Incidentally, you still need rate-limiting if you use Google's CAPTCHA. If you don't rate-limit the CAPTCHA endpoint, an attacker can DDoS you (especially if your server-side captcha component uses a low-performance, single-threaded HTTP client). Furthermore, an attacker within the same AS as their target can purposefully screw over a victim's account by performing attacks on Google's services until the reputation of the network hits rock bottom.
Conveniently, normal users with typical browser configurations get nothing but the animated checkbox. For nearly everyone, the whole experience is simple and easy. The only people who get inconvenienced are the low-grade privacy enthusiasts who think that preventing tracking is the path to Internet safety. Ironically, "tracking" is literally the mechanism by which legitimate users can be distinguished from attackers, so down that road lies a sort of self-inflicted hell for which the only sensible solution is to stop hitting yourself.
"Be a good little sheeple and do what Big Brother Google says." Fuck no.
...congratulations, I just locked out all of your users. Have a nice day.
This is not theory; this is hard-earned experience. Locking people out is bad; the most that's acceptable is rate limiting to once every few seconds.
> Sounds like a strange place to have a captcha indeed. What information is needed in the form to submit it? Does it validate anything an attacker might want to scrape? I guess they added it for a reason.
I've seen captchas on payment forms to prevent credit card checking. You can take a dump of CC details, try them all out on a site, and get back the valid ones. I'd assume they charge $1 to the card to test it before allowing you to continue, and then you could cancel your order before they charge the full amount. However, assuming you have to be logged in to pay your bill, that seems less reasonable.
If you host a payment form that informs the user about whether payment was accepted, you're a target.
In the past, I used curl to get my billing info, add the money to a dedicated virtual prepaid card, pay the bill, then send an email to a Gmail (+paidinvoice) label. These days, at least for my bills, they offer pre-approved withdrawal directly from the bank, though I guess this isn't widely deployed.
If other people did this, but from an insecure machine, and lost the credentials or got hacked, I can see why at least some orgs might want to prevent people from doing it. It's a classic overreaction, but a plausible scenario.
The measure is not really about protecting the user filling out the payment form; it's meant to "protect" the system that validates the payment data. The form may be a target for an attacker who has gotten a large batch of credit cards from somewhere else and wants to validate the data. Attackers regularly exploit such forms, or other naive payment systems, to check whether credit card data is valid.
The CandyJapan owner wrote some blog posts about the subject.
My password's not crackable, so it's annoying to be lumped in to that. I'd happily use a service-generated password to avoid login hassles.
With that, the site gives away whether the account has a low entropy password or not.
Or just generate secure high-entropy passwords and force users to use them.
Making users look up SMS codes before each login is acceptable. Making them solve obnoxious, long, privacy-hostile riddles is acceptable. But forcing them to use pre-generated secure passwords?! That can't possibly work. They will revolt!
Sure, why not? Way more than half of passwords are low-entropy, so that doesn't meaningfully help them focus attacks.
And they still have to keep solving captchas to make those attempts.
If anyone from Walmart.com is reading, please, please get rid of these useless captchas. It's an incredibly stupid thing you do, and unfortunately you do it all too well.
Ironically, Google has committed at least $75 million, and likely hundreds of millions more, of fraud, via stolen refunds and stolen banned-account balances!
This is often impractical for several important use cases, like image rendering and PDF generation. Just hand waving away the cost of developing dedicated, pure APIs won't make companies more likely to do so.
> If they are concerned about fraud they will be woefully defended by CAPTCHA, it makes no judgement on the validity of transactions at all and doesn't prevent frauds signing in manually.
There are many different vectors of attack and fraud and CAPTCHA tackles one of them. It's silly to say it's unnecessary just because it doesn't cover all fraudulent activity
As long as there continue to be enough cookie-cutter blog/forum/ecommerce sites out there for the bots to exploit, very simple techniques (JS-populated form fields or request parameters, very basic validation of the HTTP headers, taking into account the rate or frequency at which requests are made, etc.) will quickly and cheaply identify almost all of the bot activity.
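The cheap checks listed above can be sketched in a few lines. Everything here (the honeypot field name, the token value, the timing threshold) is invented for illustration; the point is that each heuristic is trivial on its own, but together they filter out most off-the-shelf bots:

```python
def looks_like_bot(form, headers, seconds_to_submit):
    """Score a form submission with a few cheap heuristics.

    form: dict of submitted form fields
    headers: dict of HTTP request headers
    seconds_to_submit: time between page load and form submit
    """
    # 1. Honeypot: a field hidden via CSS that humans never fill in.
    if form.get("website_url"):  # hypothetical hidden field name
        return True
    # 2. A token that client-side JS must populate before submitting.
    if form.get("js_token") != "expected-token":  # set by a small inline script
        return True
    # 3. Very basic header sanity: real browsers send these.
    if not headers.get("User-Agent") or not headers.get("Accept-Language"):
        return True
    # 4. Humans don't fill out a signup form in under a second.
    if seconds_to_submit < 1.0:
        return True
    return False
```

None of this stops a targeted attacker, which is exactly the parent's caveat, but it's cheap, private, and invisible to legitimate users.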
Of course sophisticated or dedicated bots will still pose a problem, but assuming you're not just standing up a popular off-the-shelf platform without any hardening or customization, you'll need to get pretty big (or otherwise valuable) before attracting that kind of attention.
A reasonable analogy here is the observation that simply running sensitive services on non-standard ports (e.g., not running SSH on port 22) will eliminate a ridiculous volume of malware probes against your system. To be clear, that's no substitute for actual robust security practices -- you almost certainly shouldn't have something like SSH world-visible to begin with -- but given how trivially easy it is to change the default port for services the public at large isn't expected to reach, it's absurd that servers are compromised every day by dumb scripts blindly probing the Internet for well-known, long-ago-patched vulnerabilities.
But one of them did! Whenever I changed the questions, bots would stop for a few days, and then start again. Someone cared enough to manually enter the correct responses (no, blind dictionary attacks were not possible)!
There are also ways to reduce the damage reCAPTCHA causes, such as keeping it out of the default UX path. Discord for example will show a reCAPTCHA challenge on the login page only if you are signing in from a new location.
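That risk-based approach (challenge only when something looks off) can be sketched roughly like this; the signals and thresholds are invented for illustration, not Discord's actual logic:

```python
def needs_challenge(known_ips, ip, recent_failed_attempts, has_valid_session_cookie):
    """Decide whether to show a CAPTCHA on login.

    known_ips: set of IPs this account has logged in from before
    ip: the IP of the current attempt
    recent_failed_attempts: failed logins for this account in the last hour
    has_valid_session_cookie: browser already holds a live session
    """
    if has_valid_session_cookie:
        return False  # returning browser: keep the CAPTCHA out of the path
    if recent_failed_attempts >= 3:
        return True   # looks like credential stuffing / password spraying
    if ip not in known_ips:
        return True   # new location, the Discord-style trigger
    return False
```

The effect is that the vast majority of logins never see a challenge, while the suspicious tail still pays the CAPTCHA cost.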
reCAPTCHA cannot effectively defend sites against targeted attacks either.
They're going to track my IP whether I want them to or not. So they should go ahead and use it to reduce hassle.
Or you clean your cookies out, thank you "Cookie Autodelete".
Also, the gratefulness part is strange. The corporation has no gratefulness for me, so why should we show it any kind of loyalty? It's not a living entity with a consistent mind or consciousness. It will change its will based on Wall Street's demands. It will ban you silently with no recourse.
Some people avoid Google Search, Chrome etc. They are still subject to this.
I endorse a site's right to forbid me its content if I can't prove I'm human. I won't endorse a site that accomplishes it by asking me to pay the cost.
> anti-discrimination law
Google-avoiders are not a protected class.
No it isn't. In fact, out-of-the-box reCaptcha is not GDPR compliant, and using it on your site will open you up to possible liability. See https://complianz.io/google-recaptcha-and-the-gdpr-a-possibl...
My reCaptcha strategy is to fire off an email to the site owners every time I am subjected to a reCaptcha, asking for all my data under GDPR. Most websites only need a few such requests to quickly start looking for an alternative. Fuck Google and their constant attacks on my rights.
You're posting this in response to an automated recaptcha solver. Clearly recaptcha also has trouble staying ahead of bots.
It seems to me that any simple automated test at the entrance is inevitably going to be easy to solve by bots, especially when it's a one-size-fits-all test like recaptcha, so bots have only a single target to aim at. A small-scale unique test will be more successful simply for that reason.
But it seems to me that a better approach than banning bots together with humans who fail to pass your Turing test is to check for the behaviour you want. If you don't want spam, have a system that recognises spamming behaviour, rather than traffic lights.
I think you probably meant to say recaptcha passes an extraordinarily large number of humans compared to its false positives? Because that would be the relevant metric. Are you sure about that one?
My only problem with recaptcha is when audio doesn't work (Google decides I'm spamming their network… sure…), because their audio validation seems to use only one rule: "letters were typed". So I'm not sure how being able to beat it with voice recognition makes it worse.
Create a dozen models based on different things. Street signs, cats, houses, cars, etc. Then show the user a random selection of images generated from different models and say "select all the cats" and they get it right if they choose the images generated from the cat model.
Was posted on HN a while ago.
The interesting question then becomes how this is going to interact with future browser anti-fingerprinting measures whose purpose is to prevent just that.
Erm, unless I'm mistaken, that patent says it's owned by Juniper, not Google. Google is just hosting the patent document.
additionally, lots of schools now require their students to use google services.
I hope there is a privacy lawsuit in the future to stop this sort of nonsense.
"Your computer or network may be sending automated queries. To protect our users, we can't process your request right now".
Is there a solution for this?
Though Google may block your access to the audio challenge regardless of the browser or extensions you use, see more details here: https://github.com/w3c/apa/issues/25
I actually do a lot of automated queries from my computer.
I like to scrape and save content that may disappear. Just recently, one psychology website I liked years ago, where I put a lot of effort into commenting, silently deleted all 60k user comments, including hundreds I wrote, and started putting old articles behind a paywall. My activity is perfectly legal, as I'm doing all this for my own personal use.
Thankfully I have all the content locally in the database.
Does it mean I should be prevented from accessing third party services that use recaptcha?
The United Nations estimates the current population of China at around 50 million more than the population of India. Given the uncertainty of these numbers, I can't exclude that India already has the most numerous population.
However, federated things are accessible. The big names (Facebook/Twitter/YouTube/Google) are blocked, along with the services built on them. But it's a blacklist of what's blocked, not a whitelist of what's accessible. Putting a Google Analytics tracker in the header of a federated blog (meaning it's actually not federated) is indeed a stupid pain. China's internet is restricted, but it is only restricted 'enough' for the current power.
Edit: And that seems good enough for now. WeChat 'moments' and use of TikTok, from my observation of friends or even riding the train, are on a steep decline. WeChat's future seems mainly as a commercial P2P assistant or a very simple blog platform. Both dropped the ball; mobile payments will not disappear, but the tide has turned (NFC, anyone? This was an already-solved problem). The only real challenger bank China has is China Merchants Bank, but they're after merchants, the clue is in the name. For customer service, and for perhaps pulling a rabbit out of the hat, China Construction Bank. I have no idea how BEA didn't grab mobile payments.
The government facilitates corruption. The government is a hegemony.
Aside from that broad shot, 10 years ago you could enter the aforementioned square freely; now it's only after going through a 'police' security check, bags x-rayed, IDs checked.
google.com does not resolve (Gmail does, a bit: IMAP, but only every few hours or days, depending on the connection, sans VPN).
Look under the section "use recaptcha globally" -- this is what I was referring to. However, it's not clear to me whether this approach enables use in China or not.
Could be a while before I get enquiries from China but there is only one way to find out.
Google did say 'globally'...
All sources I can find say that population of China is bigger than India.
It discriminates against people who value their time. Who in the right mind thinks that spending several minutes on captcha is ok?
Ticketmaster uses both recaptcha and a pre-filtering solution of their own based on internal heuristics, plus a complex user-activity tracking system that decides whether you're a bot based on the activity you present and the traffic you pass. So even if you pass all the CAPTCHAs, they still might tell you to pound sand when you try to reserve something.
In the last few weeks, for select sales, they've even required unique phone numbers, to which they will SMS a code (or call and relay one) that you need to enter just to get a single place in line for a sale.
I'm not aware of any company more actively on the forefront of preventing automated access than Ticketmaster (which makes it kind of funny when everyone chimes in about how Ticketmaster doesn't do anything to prevent brokers from getting all the tickets).
The problem is that what Ticketmaster is up against is people running specialized software that's able to emulate a browser, which ties into services that are specifically designed to beat CAPTCHAs in an automated manner using mechanical turk type solutions, but at a very low cost. I have reliable testimony that some people spin up the largest AWS instance for an hour or so as needed, run this software, use a proxying service, and make 8k connections to queue up for tickets on a sale. Each AWS machine is another 8k positions in the queue. Every new layer Ticketmaster throws into the verification process knocks these people out for a couple weeks, until the company providing the software (which I believe charges a small percentage for every ticket purchased, so they fix problems fast) works around it. The arms race metaphor is very apt.
That's just one of the companies trying to circumvent Ticketmaster's roadblocks for brokers. There are others that try to automate their purchasing to varying degrees. I myself work for a broker that takes a very different approach, where we use (relatively) very minimal automation and have a person in front of a browser for every purchase (and we don't have many people at all), and instead try to make select purchases based on complex analysis and lots of data. Even that's gotten much harder in the last few years as venues and promoters have learned to play with the allocations of tickets, holding large chunks of the inventory back to be released later at higher cost. I don't really see anything wrong with that, it's a market response to supply and demand, but it is unfortunately hidden in a purposeful manner, which affects not only brokers but also the end consumer, as market information is purposefully obfuscated (which makes the markets less efficient).
I've written on this multiple times before, so if anyone finds this interesting, just do an HN search for my username and Ticketmaster together.
1: https://anti-captcha.com/ (Scroll down and read their animated infographic for what is possibly the most amazing graphical metaphor of this I can imagine at step 4. It's so disturbing it's funny).
That's interesting. Unless you are talking about having to click on more than one "page" of tiles (as illustrated in the video in the OP), I guess I don't run into reCAPTCHA often enough to have noticed this phenomenon. Can you elaborate on what you mean by that?
I second this (for the same reasons that you cite), and it's fresh in my mind as I just recently began reimplementing authentication for my personal CMS. reCAPTCHA is not a nice thing to do to your users. And I also don't want to feed The Beast.
It's good to see some confirmation that you're not insane. Google's ReCAPTCHA is plain EVIL.
Originally it was an awesome solution based on OCR'ing books that usually worked quickly on the first try, and almost never took more than two.
Then it turned into a single checkbox (analyzing mouse movement) so it was even faster... and I remember some simple image-based like "select the images of cats" that were also easy to get right. So even better.
But THEN... in the past couple of years, the image-matching started asking exclusively for analysis of street images, which has two huge problems:
1) The images are so blurry and ambiguous it's really hard to get right, it feels like a test designed to make you fail
2) You never know how far you have to go -- you keep clicking items, they keep replacing them with new ones, and there's zero indication of whether you're almost done or whether you're getting better or worse.
Once I did one for three minutes straight, neither passing nor failing, until I just gave up and left the page... if it's a bug, that should never happen. If that's supposed to be able to happen, that's the apex of asshole design. Either way, it's a failure in every way.
Or it will ask for pictures of crosswalks, and you have to decide if 3 pixels of a crosswalk in the corner of one of the pictures counts.
The pictures are blurry and positioned at weird angles.
There are lots of signs with east-Asian characters (I'm not informed enough to guess which writing system they belong to) and I have no idea whether they are store fronts or not.
Is a sign to a dentist's office a store front? Generally it seems like anything with a sign above some sort of door or window qualifies as a store front.
So yes, it’s arbitrary, but it’s supposed to be. It’s about your gut feeling as a human because that’s the whole reason they’re showing you any of these images.
If it “looks a lot like” a storefront then you’ve really got the same problem as everyone else in the comments: they’re small, blurry, images and it’s hard to tell what it is. That’s also the whole point: their algorithms can’t tell, so they want a general consensus from users. There are images they know and use as a control, but some percentage of the ones you see they’re legitimately not sure about.
I have definitely seen the "fire-hydrant" one, and we don't have fire hydrants (they are underground below well marked covers that are illegal to park on or placed where you can't park).
And coming from a first-world Western country, I have definitely been flummoxed by at least one that was too American for me to decipher. I feel sorry for anyone that doesn't watch American media.
I sometimes wonder if these projects are actually internal astroturfing, someone trying to make people hate Google from the inside, it's so bad it must be intentional right?
To me it constantly feels like I'm working for Google for free on their AI projects, which is very annoying compared to helping a smaller company OCR books.
When they reboot the Matrix, instead of being used as batteries, the machines will keep humans around for machine learning test sets.
1) Computer vision got a lot better over the past few years. It's also become way easier for the average Joe bot operator to run cutting-edge stuff. OCR tasks don't cut it for distinguishing people from machines any more. Every time I see a blog post about a new computer vision architecture or how some random developer trained a neural network to get an X% result on benchmark Y, I think to myself CAPTCHAs are going to get more annoying.
2) The frequency at which most people have to solve a CAPTCHA has gone way down. In the beginning, I remember having to solve a CAPTCHA every single time I did anything on some sites. Now, I can't even remember the last time I had to do more than just check the checkbox. So, the amount of annoyance is amortized over a larger number of sessions, and Google probably feels like they can ask the user to complete more tasks as a result.
It is so freaking slow. I sometimes lose 60s to complete a captcha.
And if Google keeps the pressure and nothing hits them back, soon the answer will be "Number 17 of 312 still using Firefox".
I still can't believe how Google has changed their tune -- from "don't be evil" to being worse than MS ever was, which is quite an achievement in itself.
At the same time an implicit belief in "we're the good guys" (combined with indoctrination including interview hazing rituals) can enable bad behavior, because then: "of course whatever we do is good, by definition, because we're the good guys" and then not questioned. MS did some really underhanded and insidious things with its power, and it's easier to see some of Google's behavior as due more to hubris/brainwashing.
I've started to use the CS101 whiteboard hazing as a litmus test for whether there's any point in trying to do good at Google, for my own career. So long as they insist on subjecting everyone to it (starting with people who have just spent 4 years and a quarter of a million dollars on a Stanford CS education, and then people with verifiable experience on top of that), and considering they were caught in an abusive hiring/mobility conspiracy at the executive level, I think the CS101 whiteboard ridiculousness is not a good sign for corporate ego and intentions. It's also not great when CS students focus on drilling for that to the exclusion of other things. For myself, if I applied anyway, I'd be fooling myself that I wasn't mainly after the compensation package rather than wanting to have positive impact.
It's called "selling out".
It seems like another way to punish people for caring about privacy.
On top of that, I think some of the training sets are wrong. Multiple times I've been asked to find traffic signs, but it would only let me pass when I included street signs.
- You want to train on an unlabeled dataset, label it along the way.
- You have a set of untrusted validators, some with no history, some with known credibility and accuracy scores. And you have a lot of them.
- You do a kind of zero-knowledge-style check by showing the unlabeled dataset to validators you know you can trust because of their historically high success rate, which you've already established by asking them to label a dataset you already have high confidence in.
Kind of like how a blue-green colorblind person could find out which pen is blue and which is green while surrounded by people he can't fully trust: ask the people around you, and maybe even show the same person the same pen (or a really dead-easy captcha) twice in a row. If they lie to you both times, they are not to be trusted.
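The bootstrap described in those bullets can be sketched in a few lines: mix known-answer "control" items in with unknown ones, score each validator on the controls, then weight their votes on the unknowns by that score. The scoring scheme and threshold here are invented for illustration:

```python
from collections import defaultdict


def credibility(control_answers, truth):
    """Fraction of control items this validator got right."""
    correct = sum(control_answers.get(k) == truth[k] for k in truth)
    return correct / len(truth)


def consensus(votes, scores, threshold=0.5):
    """Aggregate labels for unknown items by credibility-weighted vote.

    votes: item -> list of (validator, label) pairs
    scores: validator -> credibility in [0, 1]
    Returns item -> winning label, or None if no label clears the threshold.
    """
    out = {}
    for item, ballots in votes.items():
        weight = defaultdict(float)
        total = 0.0
        for validator, label in ballots:
            weight[label] += scores[validator]
            total += scores[validator]
        best = max(weight, key=weight.get)
        out[item] = best if total and weight[best] / total > threshold else None
    return out
```

This is essentially how reCAPTCHA could grade the "unknown" tiles: your answer on the control tiles gates the challenge, and your answer on the unknown tiles just feeds the weighted consensus.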
 except those that reCaptcha doesn't support.
I guess if your adversary is a dogmatic AI then that might be by design.
This data is a few years old but I imagine it's the same based on my experience.
They're using your cookie + IP + your account data to determine if you're probably a human.
A LOT of reCAPTCHA sites never prompt you. You only know if it's there because you're on Tor or something.
That has only happened to me in Chrome, not Firefox or Safari. Which is the subject of this article.
Today I feel like Google uses it mostly for their self-driving-car computer vision projects.
I wish more sites would implement a jigsaw-puzzle-style captcha similar to the Binance login, but I can't speak to its efficiency at defeating bots.
People kept trolling it by typing the test word correctly, and random garbage instead of the OCR word. It was easy to spot which one was which. Source: I was one of these people.
Google is a hypocritical pile of burning . They use bots right? They scrape websites, they infest everything from my banking website to console emulators with their tracking, and yet we little people are not allowed to scrape or interface with the web programmatically.
I want them to burn so badly, I hope the EU breaks them up. Screw captcha, screw AWP, screw them.
Yeah because too many people abuse any hint of such functionality to peddle fake Viagra, pennystock scams, MLMs, ICOs or the good old SEO link spam.
In some cases, the blame should be put on the site runners. I get a ReCAPTCHA when logging into my Patreon account. I've been paying them $10+/month for years now; they should know by now I'm not a spammer.
they're just discriminating
against Firefox users?
Recaptcha is frustrating and I dislike it, especially the slow fade-ins and multiple challenges, but if you repeat the test shown in the video you won't find it 100% repeatable just because you're using Firefox.
It then takes between 1 minute and 1 minute 30 seconds to get past the recaptcha when blocking those cookies -- and I was certain I was 100% correct in most cases, yet it kept asking me to solve more and more.
Most of the time spent solving the captchas comes from the countless 4s fade-ins applied via inline style when cookies are blocked (as opposed to the 1s fade-ins via CSS when cookies are set).
I'm curious why they would add 3s to the fade-in when their cookies are blocked. Does that help fight off bots, or does Google just want to punish me for blocking their cookies?
Also, the fade is irrelevant to bots, because a bot already has access to the image without the fade (although it still has to await the fade's completion to continue).
By blocking specific cookies you're making yourself look like a certain kind of botnet, so obviously you're going to have a difficult time convincing the site that you're a legitimate user.
Most users don't block normal cookies, so if you go tweaking the machinery that manages the relationship between your browser and the site, then be prepared to deal with a buggy experience. This is what it means when they say that what you're doing is "unsupported." Nobody is going to spend any time optimizing for your weird setup.
Especially when I'm logged in with my 12+ year old paid account?
Won't say anything bad about Googlers, but between this and the deeply irrelevant ads I get despite all their metrics, the company seems deeply dysfunctional these days.
Sounds like antitrust to me.
Using Firefox shouldn't be an indicator of anything malicious.
How do you go about filing this complaint? I'm sure many others (myself included) are interested
Their discrimination against FF users has been fairly evident over the past year or so.
It's amazing how my identification abilities improve exponentially by using Chrome instead of Firefox.
Easy to hate on Recaptcha while reaping the rewards of participating in a community that deals with less automated spam because of it. :)
I just created a new account to check; not even so much as a Recaptcha url in the page source.
It's up to the site owner to determine how to handle those that don't meet v3's score, which can be a traditional CAPTCHA or hopefully something more effective and forgiving to humans: https://www.w3.org/TR/turingtest/
Personally I'm skeptical it will ever work correctly for me without tinkering, because I block third party requests (especially to Google) by default.
While filing taxes, on several occasions I had to just give up and try again hours later because the Captcha wouldn't let me through, and after several attempts TurboTax would throw an error telling me to come back later.
It was literally a nightmare.
However, it was shown for each financial institution, so it's possible the financial institutions (or the API provider) were doing it, though it's equally likely that TurboTax just has a bad implementation. Since TurboTax can safely assume I'm a human, I wonder whether there's a regulatory requirement, or whether the API provider is enforcing it.