
Cloudflare could just allow a fair crawl rate instead of a captcha on the first request.


The problem is that bad actors can masquerade as a lot of independent clients (The first D in DDoS stands for "distributed").

Figuring out whether a site is under a DDoS attack or just getting legitimate requests from many sources is a very hard problem; it boils down to "telling good actors from bad actors", and no simple solution works. Also, who YOU consider a good actor and who the website owner considers a good actor may be at odds.

Most people (and CloudFlare by default) consider Facebook a good actor; but as far as I'm concerned, Facebook is as evil an actor as one can be.


> sources is a very hard problem

We're talking about virtually unknown blogs that get 1 HTTP request from my server's IP, which is not blacklisted anywhere. It's not hard at all; I just think Cloudflare's tech is not that good.


You're really pulling a "how hard could it really be?" on DDoS prevention?

You should at least be humbled by how few services can even offer DDoS protection that works against volumetric attacks and isn't just based on null-routing. The people with skin and money in the game might know something you don't.


here's how simple it is:

    if (!website.underDDoS && website.requestedTimesToday[ip] < 10) showCaptcha = 0;
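
Spelled out a bit more, the idea is something like the Python sketch below. It has nothing to do with how Cloudflare actually decides anything; the in-memory counter and the under-DDoS flag are assumptions standing in for the pseudocode above:

    from collections import defaultdict

    # naive per-IP counter; resetting it daily is assumed to happen elsewhere
    requests_today = defaultdict(int)
    DAILY_FREE_LIMIT = 10

    def needs_captcha(ip: str, site_under_ddos: bool) -> bool:
        # the boolean is the hand-wave: deciding it is the hard part being debated
        if site_under_ddos:
            return True
        requests_today[ip] += 1
        return requests_today[ip] > DAILY_FREE_LIMIT

    # first request from a fresh, non-blacklisted IP: no captcha
    print(needs_captcha("203.0.113.7", site_under_ddos=False))  # False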


How do you implement "website.underDDoS"?

Through a proxy, mind you: CloudFlare makes its decision without access to your CPU or DB metrics, and doesn't know which page load times are legitimately slow and which aren't supposed to be.
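
To make that constraint concrete: an edge proxy only sees request metadata, so any "underDDoS" flag has to be inferred from signals like the request rate. A rough Python sketch with an invented threshold:

    import time
    from collections import deque

    recent = deque()          # timestamps of requests seen at the edge
    WINDOW_SECONDS = 60
    SUSPICIOUS_COUNT = 1000   # made-up number; "normal" varies wildly per site

    def looks_like_ddos(now=None) -> bool:
        now = time.time() if now is None else now
        recent.append(now)
        while recent and recent[0] < now - WINDOW_SECONDS:
            recent.popleft()
        # a flash crowd (front page of HN) trips this the same way an attack does
        return len(recent) > SUSPICIOUS_COUNT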


How about "hasn't had requests for the past 2 minutes"? Again, I'm talking about links to obscure blogs that barely anyone reads, let alone DDoSes.
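
That heuristic would only be a few lines (a sketch; the two-minute window is the number above, the rest is invented):

    import time

    QUIET_SECONDS = 120        # "hasn't had requests for the past 2 minutes"
    last_request_at = 0.0

    def first_visitor_in_a_while(now=None) -> bool:
        global last_request_at
        now = time.time() if now is None else now
        quiet = (now - last_request_at) > QUIET_SECONDS
        last_request_at = now
        return quiet           # arguably a quiet site shouldn't challenge this visitor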

I think another comment here may be closer to the truth: CF may only be running heuristics on the user agent.


If hardly anyone reads or DDoSes them, why did they go to the trouble of setting up CloudFlare? It's free for those obscure blogs, but it's definitely a non-trivial hassle. Usually people set it up only after they've experienced their first attack.

I get that you're upset that Google gets to scrape them and you don't. But bad actors really are making it difficult for everyone to just "be" on the internet.


I don't know! But they do it; everyone does it because everyone else does it. It's not unusual.


I got round it by just making sure the user agent is set to the latest version of Chrome rather than a version from a few years ago that I had hardcoded before. It seems Cloudflare's protection is pretty much "is your user agent in the top 10 user agents?".
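
For what it's worth, the change amounts to something like this (a sketch using the Python requests library; the UA string and URL are just examples, pin whatever Chrome version is current):

    import requests

    # example of a current-ish desktop Chrome UA; the point is not to send a stale, hardcoded one
    HEADERS = {
        "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/124.0.0.0 Safari/537.36")
    }

    resp = requests.get("https://example.com/some-post", headers=HEADERS)
    print(resp.status_code)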

Did you try that?


I have; IIRC it worked sometimes, but not always. Is it a reliable solution for you?


It's at least a 95% reliable solution, which seems to be about the same as what a real user sees.


Well, if you have an easy solution that you think would work, why don't you put up a website, commission a DDoS attack from a skilled actor, and try to demonstrate mitigation?

Companies pay big money to CloudFlare. If a simpler and cheaper solution is workable, they'll pay you instead.


Just as telling whether it's raining is easy while stopping the rain once it has started is hard, the claim is that it's not hard to detect that a site is being DDoSed; stopping it is a different matter.


It is not at all easy to tell the difference between a DDoS and the slashdot effect (or HN hug of death, depending on your age). At least not without a man in the loop.



