Remember smurf? Spoof-ping a broadcast address for a multiplication effect. It's from 1997 or so. 15 years later and we're still living with that kind of problem.
There is at least one legitimate use for "spoofed" IPs... http://en.wikipedia.org/wiki/Mobile_IP
Of course this protocol is rarely used and can probably be blocked on most consumer ISPs...
Cloudflare uses anycast DNS - machines in geographically distributed data centers all sharing the same IP.
If you want to try to make your site DoS-proof (and potentially faster), one way is to move the site to the network edge. Move the data closer to the user. Put a copy on a machine in the data center nearest the user. Do this in data centers around the world. ("CDN") Give all the machines the same IP. ("Anycast") Your users will be accessing a mirrored copy of your site at some regional data center, instead of sending requests that go out across the internet. Does Google do this? Akamai? Netflix?
Next time you access a popular website ask yourself "Am I actually accessing the internet? Or am I just downloading a copy of something from a local data center?"
A lot of these services are just marketing. In theory they sound great, but things may be different in practice. And that's why we frequently see comments that things did not work as expected.
I did some CDN experiments downloading pages using Akamai where I accessed content on the "true IP address" (the master copy so to speak) versus the regional IP address they provide through stupid DNS tricks. Guess which one was faster?
It all depends on caching: what is in the cache and what isn't. Same applies to DNS. A DNS caching server (resolver) is only faster than non-caching DNS server (authoritative) if it's primed with the records you're after. If they are not in the cache, it will not be faster. In fact, it will be slower because there are more steps to the process.
These strategies are often based on 80/20, power law thinking. If you are not in the 20 percent of content being accessed 80 percent of the time, then you do not see the benefits. If no one in your region has requested a given page, and you're the first, it will be slower to wait for it to be cached at your regional data center than if you just grabbed it from the internet.
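The hit-or-miss behavior described above can be sketched as a toy resolver cache; `upstream_lookup` here is a stand-in for the slower recursive path, not a real resolver:

```python
import time

class DnsCache:
    """Toy DNS cache: a hit skips the slow recursive lookup;
    a miss pays for it, then stores the answer until its TTL expires."""

    def __init__(self, upstream_lookup):
        self.upstream_lookup = upstream_lookup  # the slower recursive path
        self.cache = {}  # name -> (answer, expiry_timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry and entry[1] > now:              # primed: fast path
            return entry[0], "cache-hit"
        answer, ttl = self.upstream_lookup(name)  # cold: extra steps, slower
        self.cache[name] = (answer, now + ttl)
        return answer, "cache-miss"
```

The first request for a name is strictly slower than asking the authoritative server directly (cache check plus recursion); only repeat requests within the TTL see the benefit, which is exactly the 80/20 point.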
rachelbythebay is asking which ISPs allow spoofed UDP packets.
The way this attack works is you send a query to an open resolver, using the target's IP address as the "source" IP address in the UDP header instead of your own.
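The payoff for the attacker is the size difference between query and response. The byte counts below are illustrative assumptions, not measurements:

```python
# Rough arithmetic behind DNS amplification: the attacker pays for a
# small spoofed query; the victim receives the much larger response.
# These sizes are illustrative assumptions, not measurements.

QUERY_BYTES = 64        # small EDNS0 query with a spoofed source IP
RESPONSE_BYTES = 3000   # large answer, e.g. a zone with many records

def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / query_bytes

factor = amplification_factor(QUERY_BYTES, RESPONSE_BYTES)
print(f"{factor:.0f}x amplification")  # prints "47x amplification"
```

The same multiplication logic applied to smurf: many small spoofed pings, one flooded victim.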
However, ISPs can (and should) block UDP packets where the source IP address is outside the IP-blocks they own. Why don't ISPs do this?
I'm not really sure what the rest of your post has to do with any of this.
As for the rest of the comment, this appears to be an "informational advertising" style marketing piece for Cloudflare so I think it's relevant.
My question is does anyone filter UDP egress based on source IP? Is there guidance somewhere that tells admins to do this?
Let me put it another way: if it were a workable solution to get admins to filter outgoing UDP based on source IP, then why are people trying to get network admins to change their DNS server settings as a way to reduce the possibility of DNS-based DDOS? That seems like a far more difficult task, given that there are hundreds of thousands of open resolvers and most admins understand working with firewall rulesets better than DNS configuration.
Any responsible host filters all outgoing packets to limit them to <only IP addresses we own>.
For example, linode does this afaik.
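The egress filter being discussed (the BCP 38 idea) boils down to a membership test on the source address; a minimal sketch, with made-up placeholder prefixes:

```python
import ipaddress

# Address blocks this network actually owns (illustrative placeholders).
OWNED_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_forward(source_ip):
    """BCP 38-style egress check: forward an outbound packet only if its
    source address falls inside a block we own; anything else is spoofed."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in OWNED_PREFIXES)

print(should_forward("192.0.2.7"))    # True  (ours, forward it)
print(should_forward("203.0.113.9"))  # False (spoofed, drop it)
```

In practice this lives in a router ACL or firewall ruleset at the network edge, not in application code, but the logic is exactly this one-line check.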
If every user were to go around the CDN then the site would break.
You may want to re-read it yourself:
"The attack on Saturday used one such amplification technique called DNS reflection."
But where's the proof they used DNS? Upload a packet capture and let us be the judge.
Though this is super-relevant because during the struggle with Cloudflare, we released an article about LOIC and how easy it is to reveal the locations and identities of individuals involved in a DDoS attack using LOIC.
LOIC and a number of the more public DDoS tools make the attackers' identities relatively easy to track. The big attack we saw last Saturday is much more difficult to trace both because it is originated with a UDP request (the headers of which can be forged) and because it is reflected off open resolvers (essentially laundering the identity of the attack's source).
The real reason that we stopped using Cloudflare is that when it was unable to serve the cached page, it threw up a Cloudflare-branded page. We thought those sorts of error pages would diminish our image as an open source publication, because it seems to suggest that we can't rely on our own tools and abilities.
If we were able to display a "Powerbase" branded 404 type page, we would have been more satisfied.
I unfortunately use PayPal and their Instant Notification thing, basically a callback to a web page with a POST about the transaction that just happened. Upon receiving the POST I can then do things like notify the customer, dispatch goods, award virtual goods, etc.
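For context on the mechanism: PayPal's documented IPN flow is to echo the received POST fields back to PayPal, prefixed with `cmd=_notify-validate`, and act only if the reply is "VERIFIED". A sketch of just the body-building step (no network call; the field values are made up for illustration):

```python
from urllib.parse import urlencode

def build_ipn_validation_body(ipn_params):
    """Build the body to echo back to PayPal for IPN validation:
    'cmd=_notify-validate' followed by the received fields, unchanged
    and in their original order. PayPal replies VERIFIED or INVALID."""
    pairs = [("cmd", "_notify-validate")] + list(ipn_params)
    return urlencode(pairs)

# ipn_params would be the raw POST fields received from PayPal;
# these values are made up for illustration.
received = [("txn_id", "123ABC"), ("payment_status", "Completed")]
print(build_ipn_validation_body(received))
# cmd=_notify-validate&txn_id=123ABC&payment_status=Completed
```

The relevant point for the story below: if a proxy in front of the site drops or errors on that inbound POST, the transaction notification is simply lost.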
The problem I had was that after putting my website (in the London Linode datacenter) behind CloudFlare, PayPal started randomly failing to reach the callback page.
PayPal, being PayPal, failed silently for a few days before finally sending an email to say that they couldn't notify me of transactions. I figured it out just before that though, because users were getting the CloudFlare "site offline" message.
The pages on my site are 100% dynamic, so nothing was cached by the feature that keeps a site online.
My biggest problem with CloudFlare was visibility for debugging: I had no visibility.
If it wasn't for my users letting me know and PayPal emailing me confirming what I thought... I wouldn't have known. Even then it took too long to find out, over a week from when it started to fail silently.
According to Linode there was no downtime in that period, and according to my server logs load was never above average and there was no reason it should've been unable to be reached.
Did I submit a ticket? I submitted some questions beforehand and got back answers that were very friendly but not technically detailed. That's also how I found the interface: I couldn't debug using CloudFlare, and had no way to answer the questions "What is happening? Why is it happening?". So no, when I figured it out I wasn't going to stay with CloudFlare; even if this issue were resolved I would still lack visibility for future problem-solving.
In the end it was costing me goodwill with my users.
I wanted things you didn't provide:
How often did CloudFlare fail to contact my network?
Can I see a chart of such failures over time?
What were the failure error codes and times so that I can cross-reference them to my logs?
Basically: I wanted transparency so I could have confidence in the service, and detail so that I can debug failures when they occur.
I was going to email this, as it's really a "just to let you know". But you have no email in your HN profile; looking through the support emails I have, I see tenderapp.com and can't guess what your email address may be; and I pinged you on Twitter but got no response and it's very late here... so I'm posting it where you can see it.
If you add ways for developers to debug issues when using CloudFlare then I may well be tempted back in the future. The fundamental premise is a good one and I really wanted it to work (paid for the Enterprise level, had every intention of using it). But when failing silently costs real money and customer goodwill, I don't feel I had a choice but to U-turn very rapidly.
As soon as I was off CloudFlare, PayPal Instant Payment Notification worked again and there hasn't been a single failure since.
Support was fairly unhelpful, basically just saying "sometimes this happens, and it usually clears up".
Or are you suggesting bypassing CloudFlare for all dynamic content? In which case, why use them?
I actually did use them just for a CDN for a few more days. I use a second domain (sslcache.se) to proxy inline images in user-generated content that don't originate from an SSL site, for when the user is viewing my site over SSL. Similar to this https://github.com/atmos/camo , except mine isn't written in node.js
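camo (linked above) authenticates proxied image URLs with an HMAC of the origin URL under a shared key, so third parties can't use your proxy. A minimal sketch of that scheme; the key is a made-up placeholder:

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-key"  # placeholder, not a real key

def proxied_image_path(origin_url):
    """camo-style path: an HMAC-SHA1 digest of the origin URL, then the
    URL hex-encoded, so the proxy can verify the request was generated
    by the site itself and not by an arbitrary third party."""
    digest = hmac.new(SHARED_KEY, origin_url.encode(), hashlib.sha1).hexdigest()
    return f"/{digest}/{origin_url.encode().hex()}"

def verify(path):
    """Proxy side: recompute the HMAC and compare in constant time."""
    _, digest, hex_url = path.split("/", 2)
    origin_url = bytes.fromhex(hex_url).decode()
    expected = hmac.new(SHARED_KEY, origin_url.encode(), hashlib.sha1).hexdigest()
    return hmac.compare_digest(digest, expected), origin_url
```

The proxy then fetches the verified origin URL over plain HTTP and re-serves it over SSL, which is what kept the images from triggering mixed-content warnings.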
Even then, users complained of broken images when they were logged in (on SSL).
The very basic thing: CloudFlare fails silently, sometimes. Well, that still happened with very basic content. From CloudFlare's perspective the sslcache.se service was a site of static files, as I prime the content (download it in the background) when a user submits it. By the time the request for an item reaches CloudFlare, the image is already being served from the local file system.
CDN only, it still errored enough that the end user noticed. And still without any information for me to resolve it.
If a CDN is your only purpose, it's mostly useful for static files like JS, CSS and HTML. Your answer doesn't make clear what you're trying to accomplish; too little information to go by. If you experience issues with their service, keep pressing them to get it resolved. Any gain you make improves the service for others.
In the worst case you can always look at other services. I am not sure but I believe Amazon CloudFront provides a similar solution.
I'd also love to see the option of periodic uploads of raw traffic logs to an S3 bucket or similar. Something akin to how AWS CloudFront handles raw logs. I believe raw logs are currently available on their enterprise packages, but this seems like basic functionality (that would provide much of the needed transparency) and should be included in all of the paid plans.
Yes, Dreamhost sucks. So does Hostgator. So does every other oversold host.
"Yesterday I posted a post mortem on an outage we had Saturday. The outage was caused when we applied an overly aggressive rate limit to traffic on our network while battling a determined DDoS attacker."
1) Learn how to mitigate the attack in the future
2) Catalog data on botnets
Cataloging data on these botnets is one sure way to get them shut down.
I DOS'ed Billy Bob's Bike Repair website or I DOS'ed CloudFlare?
While it's nice that they can stop an attack without the intended victim noticing, it's still probably a good idea to let them know.
(i love these posts; i'm old + jaded and have no specialist knowledge of networks and protocols, but they're like the spaghetti westerns of the internet age :o)
You can put the hosts file on a RAM disk.
With some servers it's also possible to save and reload caches.
Assuming you're not doing hundreds of thousands of new lookups (sites you've never visited before) every day, it's very easy to configure a system for yourself that is faster than any open resolver.
There is one trade-off: if sites switch IPs without telling their users (preferring instead to wait for TTLs in open caches to expire), then for sites that like to hop from one IP to another unexpectedly, you need to monitor for this. This is rare, though, and you can try to safeguard against it by "pinging" less often visited sites periodically, but it does happen occasionally.
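That periodic safeguard amounts to re-resolving and diffing against your pinned answers. A sketch; `lookup` is injected here so it stays testable, but in practice it would be a real DNS query (e.g. `socket.gethostbyname`), and the addresses below are made up:

```python
def find_moved_hosts(pinned, lookup):
    """Compare pinned host->IP entries (e.g. a local hosts file) against
    a fresh lookup and report any host whose address has hopped."""
    moved = {}
    for host, pinned_ip in pinned.items():
        fresh_ip = lookup(host)
        if fresh_ip != pinned_ip:
            moved[host] = (pinned_ip, fresh_ip)
    return moved

# Stubbed lookup with made-up addresses for illustration; in practice
# `lookup` would query an actual resolver.
pinned = {"example.com": "93.184.216.34", "example.org": "203.0.113.5"}
fresh = {"example.com": "93.184.216.34", "example.org": "203.0.113.99"}
print(find_moved_hosts(pinned, fresh.get))
# {'example.org': ('203.0.113.5', '203.0.113.99')}
```

Run that from cron against your less-visited entries and you catch the IP hops before they bite.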
As for reliability, if you lose access to your "professionally-configured endpoint" you're SOL. You can't do lookups (assuming you don't know how to do them by hand). Meanwhile I'm unaffected.
I would not have done this for myself and told you about it if it wasn't faster. I'm not gathering info on users or selling anything. I'm not telling you what to do while proclaiming I'm an "expert". I'm just an end user, like you.
When you use an open resolver, you are sharing a cache with everyone else who uses it. Some might do nefarious things to the cache.
When you use a resolver listening on 127.0.0.1 you are sharing it with whoever can access localhost on that port. i.e., no one (hopefully)
"Professionally-configured"? C'mon. You sound like a marketer's dream. Be a hobbyist. Be a hacker. Experiment. Think for yourself. Or don't.
How much time have you devoted to learning how DNS works?
I ran an MMO a while ago, and we would have a few hundred login packets spammed every minute. When we were DDoS'd, I responded by moving my server to a larger line (1 gbps) since the DDoS itself wasn't nearly as massive. Yet, we had no way of figuring out (at a base level) what was a legitimate packet.
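One common base-level filter for login floods like this is a per-source token bucket; a sketch, with arbitrary assumed limits:

```python
class TokenBucket:
    """Per-source rate limiter: each source IP gets `capacity` tokens,
    refilled at `rate` tokens per second; a login packet that finds its
    bucket empty is dropped as likely flood traffic."""

    def __init__(self, capacity=5, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        self.buckets = {}  # ip -> (tokens_remaining, last_seen_timestamp)

    def allow(self, ip, now):
        tokens, last = self.buckets.get(ip, (self.capacity, now))
        # Refill based on elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[ip] = (tokens - 1.0, now)
            return True
        self.buckets[ip] = (tokens, now)
        return False
```

Note this only tells you which sources are behaving, not which packets are legitimate; a flood with spoofed source addresses still defeats it, which is the underlying problem in this thread.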
PS - found the error on our side. Fixing. You'll be getting an email shortly.
Akamai recently deflected an attack on the scale of 1 TBit/s and is present in pretty much every DC (~1000 POPs). CloudFlare has 23 POPs and brags about handling 65 GBit/s...
He could have been lying but considering how uptight Akamai is about everything I doubt he made it up.
"we solve it by having 100's GBps networks" (and redirect whatever is legitimate to the client ofc)
Okay. Maybe my expectations were set too high :)
Leaves me to wonder what they can do if the traffic looks 100% legitimate.
This seems to be a bit more specific and does indeed give insight on how to mitigate this type of attack.
Am I the only one getting this error?
It's a gift to anyone wanting to do this type of DDOS.
Basically, DNSSEC just means you do not need to search for a large zone to request. Given that large zones are not exactly in short supply, and that searching for them is (in the age of IPv4) rather easy, I wonder if DNSSEC actually has any effect on the issue whatsoever.