
> Friendly Captcha generates a unique crypto puzzle for each visitor. As soon as the user starts filling a form it starts getting solved automatically. Solving it will usually take a few seconds. By the time the user is ready to submit, the puzzle is probably already solved.

What makes this NOT work on a bot machine?



It sounds like a proof-of-work rate limiter, similar to hashcash. I don't think it will stop a bot machine, just make it very expensive to use. Which is actually all regular captchas do anyway.
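
Friendly Captcha doesn't publish its exact puzzle here, so the following is only a minimal hashcash-style sketch of the general idea: the client burns CPU searching for a nonce whose hash meets a difficulty target, and the server verifies it with a single hash. The function names and difficulty scheme are illustrative, not Friendly Captcha's actual protocol.

    // Hashcash-style proof of work (illustrative sketch, not Friendly
    // Captcha's real puzzle format): find a nonce such that
    // sha256(challenge + ":" + nonce) starts with `difficultyBits` zero bits.
    import { createHash } from "node:crypto";

    function leadingZeroBits(digest: Buffer): number {
      let bits = 0;
      for (const byte of digest) {
        if (byte === 0) { bits += 8; continue; }
        bits += Math.clz32(byte) - 24; // leading zero bits within this byte
        break;
      }
      return bits;
    }

    // Client side: expensive search for a valid nonce.
    function solve(challenge: string, difficultyBits: number): number {
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
        if (leadingZeroBits(digest) >= difficultyBits) return nonce;
      }
    }

    // Server side: verifying costs a single hash, so it's nearly free.
    function verify(challenge: string, nonce: number, difficultyBits: number): boolean {
      const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
      return leadingZeroBits(digest) >= difficultyBits;
    }

Each extra bit of difficulty roughly doubles the expected solving time, which is the only knob a site has for making distributed abuse more expensive.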

Whenever this comes up as an alternative to regular captchas, I see a lot of pushback that we can't do this because it would cost mobile users too much battery power. If that is really such a concern, let the mobile users solve shitty captchas as an alternative and let the rest of us use something like this. Mobile users already endure horrible privacy, no easy ad blocking, countless "install our app" popups, and a software ecosystem that is infested with dark patterns, so I don't see how they would really even notice.


> I don't think it will stop a bot machine, just make it very expensive to use

My phone solves the captcha puzzle in about three seconds. I assume it's working on one core. If you're running this on a server and it's able to do one every, say, two seconds, and you have sixteen cores, that's still about eight per second. At that point, what is this defending against? You're running into API rate limit territory.

The whole point of a captcha is to make it unsolvable for a machine, not just more expensive. Bad actors will eventually make it cheap, and then it's not effective anymore. Consider that today, it's often cheaper to farm out CAPTCHA puzzles to a room full of humans on laptops than it is to solve them in software. Making it a purely computational challenge is almost certainly saving money for the bad actors.


> At that point, what is this defending against?

I have seen spam attacks against web forms running at hundreds of calls per second. In the end we ran our own solution: a simple math captcha was all it took.
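
For what it's worth, a "simple math captcha" like that can be as small as this; the field names and session handling are made up for illustration, not the commenter's actual implementation:

    // Minimal math-captcha sketch. The server stores the expected answer
    // (e.g. in the session) when rendering the form and rejects posts that
    // don't echo it back.
    interface MathChallenge { question: string; answer: number; }

    function makeChallenge(): MathChallenge {
      const a = 1 + Math.floor(Math.random() * 9);
      const b = 1 + Math.floor(Math.random() * 9);
      return { question: `What is ${a} + ${b}?`, answer: a + b };
    }

    function checkAnswer(challenge: MathChallenge, submitted: string): boolean {
      return Number.parseInt(submitted, 10) === challenge.answer;
    }

It obviously falls to any attacker who bothers to parse the question, but that bar alone filters out most dumb form spam.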


In college (2010) I built a honeypot to test this. Simply adding a field, and blocking anything that didn't run the JavaScript to fill it, worked in most cases. And that makes sense: a lot of this junk is garbage like malicious WordPress plugins that crank away just firing off HTTP requests.

But you don't need proof of work to stop that abuse. The simplest JS check with a fallback to an "I'm not a bot" checkbox would do the trick. So you're defending against folks that do run JavaScript, but...not fast?
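
Roughly, the honeypot-plus-JS-check approach described above comes down to a couple of server-side checks like these (the field names and token value are invented for the sketch, not the original college project):

    // Honeypot + JS-check sketch. The form contains a "website" input hidden
    // from humans via CSS, and a "js_token" input that a small page script
    // fills in before submit.
    interface FormPost { [field: string]: string | undefined; }

    const EXPECTED_JS_TOKEN = "set-by-page-script"; // placeholder value

    function looksLikeBot(post: FormPost): boolean {
      // Bots that blindly fill every field trip the hidden honeypot.
      if (post["website"]) return true;
      // Clients that never executed the page's JavaScript lack the token.
      if (post["js_token"] !== EXPECTED_JS_TOKEN) return true;
      return false;
    }

The "I'm not a bot" checkbox fallback would just be a second way to populate the token for users without JavaScript.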


> If you're running this on a server and it's able to do one every, say, two seconds, and you have sixteen cores, that's still about eight per second.

That's no problem. It's supposed to protect against bots making billions of requests a second.


> It's supposed to protect against bots making billions of requests a second.

Billions of requests per second is the sort of traffic that Google receives in total. Not the traffic to your blog.

The spam isn't the bottleneck here: at the point where you're caring about the actual load it's putting on your system, you're talking about open connections and the number of occupied workers in your HTTP server. Captcha doesn't help with that. You still need to accept the request in order to reject it.

But even if the goal is to just slow down a botnet that's pounding your server into oblivion, this still ain't it. There's no 16xlarge ec2 instance somewhere beating on your server. It's a bunch of malicious chrome extensions and garbage mobile apps. Why pay for servers when you can have ten thousand people install your software and run it for nearly nothing? The cost of the compute load isn't felt by the bad actor.


Captchas are not just DDoS protection, and even if they were, botnets don't send tons of spam from any single device. Otherwise it would be too easy to identify and block.


That's why you use something like this, where each request incurs a cost for the attacker so it doesn't matter if the origins are distributed.


The attacker doesn't have to calculate the puzzles in one central place; they can do that on the hacked devices.


> It sounds like a proof-of-work rate limiter, similar to hashcash. I don't think it will stop a bot machine, just make it very expensive to use

Ah, OK. I was wondering the exact same thing as toxicFork. This makes some sense. It's a shame they don't explain it on their website.

But then the natural followup question: why do they keep mentioning blockchain? What's that bringing to the table? If it's just about soaking up processing time, then surely anything computationally heavy would do the trick, so why include something that would set off some people's alarm bells?


I really think it's meant to awe the business customer with a slick-looking demo, along with assurances that it's "made in Europe, GDPR-compliant, and proven accessible" rather than actually doing the job of a captcha. Sorry to be cynical, but it's oversimplifying the problem and just doesn't work (see below).


Nothing. Most JS challenges simply rely on the headless browser not executing the JS, or on the delay and computational cost being enough to render most bot attacks ineffective.


A better question is why you can't just use a token bucket rather than mining bitcoins on your clients' phones and wasting their batteries.


Because bots use hundreds of IP addresses assigned to the same system. If you get 5 r/s from 10k IP addresses, it adds up: that's 50,000 requests per second in total. If you require computational power, you force them to invest money in hardware and potentially make it unprofitable.


The last botnet I fended off had 49,131,669 IPs, so believe me, I know: https://ipv4.games/statusz The issue is that it's not their money. A lot of these botnets are composed of ordinary people's devices that got hacked or hijacked by some slimy mobile app that fires off a DDoS request every ~5 sec or so in the background, and they do it that way because hacked devices aren't easy to fingerprint. So I feel bad for what's going to happen to all those normal people if the industry pivots to CPU-hard approaches to defend itself.


I guess this depends on what kind of traffic you get. In some cases the data they try to push is confidential, like their user session. On some systems I switched rate limiting from per-IP to per-session, because thousands of IPs were using the same session cookie; that's why I assume all of them use the same physical machine.


Right. Captchas are supposed to ensure the operation is human-initiated. This solution doesn't work.



