> This is definitely on the large end of the scale as far as DoS attacks go, but I wouldn't call it "record smashing" or "game changing" in any special way. It's just another large attack, maybe 10-15% larger than other similar ones we've seen in the past.
Heh. Nice. Yeah, I expressed skepticism that 300G/sec qualified as "largest ever" - I mean, I personally have been hit by 10G+ attacks, and Cogent mostly shrugged. (I mean, my Cogent side was down until the target was blackholed at the Cogent border.) I know 10 gigabits is a lot less than 300 gigabits, but I am a nobody compared to the people involved in this little kerfuffle.
Note that CloudFlare itself doesn't claim it is the largest ever, but references NY Times' claim that it was, and then later goes on to say "that would make this attack one of the largest ever reported."
Other than the headline and the reference to the NY Times article, CloudFlare's claims and the linked article are pretty much in line.
Can you describe Cogent's reaction to your request to blackhole the route? We had a very small incident with XO recently (perhaps 500Mbps reflected DNS flood), and it took them 24 hrs to get back to me to blackhole the target! We just packet filtered in our edge switch. Was Cogent any faster?
If you run a network of any size you should be speaking BGP with your upstream, even if it's using a private AS. You can then announce a prefix to them specially tagged with a "blackhole community" that drops traffic at the edge of their network.
Yup. The details are slightly different for Cogent (I think we had it set up as a separate BGP session rather than as a community tag like he.net does it, but that was because my customer requested it that way.)
But yeah, you give them /32s to null, and they drop those /32s at the network edge.
It stops the attack, well, almost immediately, but the problem is that it kills the target site completely.
(well, often people have web frontends to this, which, well, work poorly when your pipe is completely full, and for that matter, just getting the BGP data to your peer can take a few tries. but yeah, it's still pretty quick and effective, compared to calling someone to whine.)
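For illustration, here's roughly what that "blackhole community" announcement looks like with the BIRD routing daemon. This is a sketch: the prefix is an example value, and while 65535:666 is the well-known BLACKHOLE community from RFC 7999, many upstreams define their own community for this, so check their policy.

```
# Sketch: remotely-triggered blackhole in BIRD (example prefix/community)

protocol static blackholes {
    ipv4;
    route 203.0.113.45/32 blackhole;   # the customer IP under attack
}

filter export_to_upstream {
    if proto = "blackholes" then {
        # Tag with the blackhole community so the upstream nulls
        # this /32 at the edge of their network.
        bgp_community.add((65535, 666));
        accept;
    }
    accept;  # normal export policy for everything else (simplified)
}
```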
What we really need is to get everyone to implement BCP38 anti-spoofing rules. If everyone did that, we'd be able to block the sources of the problem, rather than the destination. But, well, that's unlikely to happen, so for now, you just, ah, finish the job.
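For a sense of what BCP38 means in practice: an edge network should refuse to forward packets whose source address isn't in its own address space, which makes spoofed-source floods impossible to originate. A toy sketch of that egress check (the prefix is an example value, not anyone's real allocation):

```python
import ipaddress

# Prefixes this edge network legitimately originates (example values)
OUR_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def bcp38_permits(src_ip: str) -> bool:
    """Egress filter: only forward packets sourced from our own space."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OUR_PREFIXES)

print(bcp38_permits("203.0.113.7"))   # our address: forward it
print(bcp38_permits("198.51.100.9"))  # spoofed source: drop it
```

In real deployments this is typically an ACL or unicast RPF check on the edge router, not host code, but the logic is the same.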
This is what I don't understand about DoS mitigation. So I'd null route the target, but then I have to move the service to another IP. I will also announce this IP to the customers, but that would simply redirect the DoS to the new address and take it down too... I mean, there seems to be no way to deal with DoS without throwing out the baby (customers) with the bathwater (attackers), or am I missing something?
Well, services like Cloudflare or Blockdos try to mitigate this problem by absorbing the bad traffic and filtering it, so that it is spread out across nodes and then dropped by firewalls with custom rules.
With ordinary DDoS attacks, an effective method (which Cloudflare uses) is to prompt you with a CAPTCHA before letting you pass, or to drop your connection when you fail often enough.
As far as I understand, it was not that hard to block this attack because it follows known traffic patterns (DNS responses from open resolvers). The actual problem was that the attack was so massive that it clogged the pipes before the traffic could reach a router (belonging to Cloudflare) that would've been able to drop the packets.
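The amplification math behind "clogged pipes" is worth spelling out. The numbers below are illustrative assumptions, not measurements from this attack, but they show why spoofed queries to open resolvers are so attractive to attackers:

```python
# Back-of-the-envelope DNS amplification (illustrative numbers, not
# measurements from this attack): a small query with a spoofed source
# elicits a much larger response, which the open resolver sends to the
# victim instead of the attacker.

query_bytes = 64       # small "ANY" query with spoofed source (assumed)
response_bytes = 3000  # large response, e.g. with DNSSEC records (assumed)

amplification = response_bytes / query_bytes
print(f"amplification factor: ~{amplification:.0f}x")

# Bandwidth the attacker must source to land 300 Gbit/s on the victim:
target_gbps = 300
attacker_gbps = target_gbps / amplification
print(f"attacker needs only ~{attacker_gbps:.1f} Gbit/s of spoofed queries")
```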
Disclaimer: I am no network engineer so don't rely on my reply being factually 100% correct.
meh, it's hard. First prerequisite to effective DDoS mitigation? you need to have enough capacity (in terms of upstream transit/peering ports) to absorb the whole attack. If not? it's over, you blackhole the target at your upstream's routers (and thus throw out the baby) and the attack ends, or you don't, and your service is dead for all customers.
That's why I don't advertise any sort of "DoS protection" - I know that attacks that are bigger and badder than my network are fairly common. This is also why I'm not going to take any promises of DoS mitigation from anyone who doesn't have a terrifyingly huge network seriously.
Now, once you have enough upstream port capacity to soak the attack, you then filter the good traffic from the bad. This is a whole 'nother can of "very hard" but it's easy compared to getting the capacity in the first place. Note, this filtering becomes /way/ easier if you have some idea of the sort of traffic you are expecting, but it's still difficult.
There are "clean pipes" services that claim to do this for you, with varying degrees of credibility. The thing is, CloudFlare is one of the smaller companies to offer this. I was looking at the offering from Level 3, (in my mind, considering their network capacity, probably the most credible provider of such a service. Also, their service claimed to work with all traffic, not just http and the like, so it would work for me.) but it sounded like the price was somewhere along the lines of "give us 25% of your revenue, and we'll give you half the cleaned capacity you need." I mean, even the regular Level 3 bandwidth is between one and two orders of magnitude more expensive than the cogent/he.net mix that is common in my industry, so uh, yeah. I didn't spend the requisite six months with the salesman to get the real price, but I think "more than I can afford" is a likely guess.
I mean, the idea here, usually, is that the network providing this service is large enough that it has a whole bunch of peering connections and can filter the incoming traffic fairly close to the source. Even if you've got hundreds of gigabits of capacity at one location, if it's all at that one location, it's very likely that something else is going to gum up the works between, say Austria and you. If you've got a giant, global network, though, your traffic from Austria goes to your POP in Austria, where you can filter the "bad" traffic (assuming you figured out how to do that.)
And really, you don't have to filter /all/ the attack traffic, just enough that the target isn't completely overwhelmed. Like spam-filtering or anything else, nothing is 100%.
And that's how most of the low-end hosting world feels about it. The upshot is that we throw out the baby; the small customer who gets hit repeatedly by large DDoS attacks, generally speaking, has to change to a different provider, 'cause they get kicked off. I mean, if you are paying someone $20/month, and your enemies take that service down hard? yeah, after the problem is fixed, you are very likely to need to find a new provider.
well, what I got from the article was that the person was saying "We aren't the whole internet, this is larger than what we normally see, but not by a huge margin." - which is to say, it's possible that other providers have seen larger DDoS attacks.
Record-changing by a scientific definition? Yep.
But when the relatively non-technical populace hears "record changing", a completely different set of emotions is triggered in their brains. It's just like any other overly grandiose marketing headline, designed to excite those who aren't familiar with the topic.
Yeah, I highly doubt 300G/sec is the largest attack ever as well. I've seen (well, we only saw a gigabit of it.. our host saw the rest) a 41G/sec attack. I imagine large attacks like this happen all the time but don't make the news because they just get fixed and the companies move on because it happens so much.
I disagree. Calling the story a straight-up lie was a mistake, and link bait to boot. I thought Biddle failed to understand what the risk to "The Internet" truly was, and how an attack like this (based on a known architecture vulnerability), if repeated or made more common or effective, could cripple a tier 1 provider and cause serious routing issues, among other things.
In all fairness, the Gizmodo article was pretty well researched and informative. It was more like an anti-hype article - because that's what the headlines were. Interesting things did happen, but the internet was not "in danger" of being down.
RAS has been pretty awesome in the community. He was actually a cofounder of the company we knew as nLayer. He's given a bunch of very valuable "101" talks at NANOG (think of it as the Hacker News for networking people), and until the acquisition, he was pretty active on the mailing list.
Cloudflare uses it as advertising. "Look at this attack we mitigated for our client!", which then gets picked up by HN and similar sites because Cloudflare does write fairly good blog posts about it. As others have said, Cloudflare does a fairly good job explaining parts of the internet that most of us don't get to see, so it gets attention.
More like they invent drama to cover their bad days. Akamai tells its clients when they are under DDoS; Akamai didn't really notice anything going on. CF just had a bad day and decided to make a story out of it. One other time when CF claimed a DDoS, it was because the Googlebot decided to up its crawl rate and took down many CF sites. CF told clients it was a DDoS and then later admitted it was an issue with Google.
IMO (and I'm not very familiar with the similarities between Akamai and CF), if Akamai handles similar issues without driving PR/marketing efforts based on them, that's their loss and probably a bad business play.
In my mind, this is similar to an antivirus company saying "hey look at all these nasty viruses out there, but we find and destroy them effectively."
This really just seems like effective case-studying on CF's part. It's arguably their job to hype it as much as possible (though of course they are responsible for the inaccuracies).
"if Akamai handles similar issues without driving PR/marketing efforts based on them, that's their loss and probably a bad business play."
Akamai has been around since 1998.
Cloudflare since 2009 (both according to Crunchbase, which jibes with the dates the domains were registered, IIRC).
Consequently, the established company has more to lose and less to gain from the publicity than the newer company.
For a newer company, any publicity is good publicity, even if it's over a negative event, because you have less to lose and more to gain (add: and people become familiar with your name).
Taking this to an extreme example (to make a point): let's say you start a new hamburger restaurant. You have no customers. On day 5, some people get sick (just sick, not deathly sick). All of a sudden you are in the local paper with a headline and a story that people just skim, but they see your name. Almost guaranteed, you will pick up business from the mention. Even though it's bad PR, down the road people will remember you and either forget or not care about the negative story they read 6 mos. prior (add: assuming they even read the story and didn't just see the headline).
Collapsing in the sense that prices have been in free fall for a while.
Akamai sees their market shrink from both ends. At the top end, companies like Netflix start building their own networks because they want more control, less reliance on a third party, and the cost savings. At the low end, you have pseudo-CDNs like Cloudflare eating into their snake-oil business, and commodity CDNs like CloudFront grabbing the long tail.
Connectivity in US/EU has also gotten so good on average (and bandwidth so cheap) that the body of mid-range sites that feel a genuine need to enter an expensive conversation with a "traditional" CDN is evaporating. And this is doubly true for Akamai where that conversation tends to be particularly unpleasant.
Akamai and friends are of course well aware and have long shifted their focus to the emerging markets (Asia, Africa) and mobile. Time will tell for how long that can keep them afloat before they are marginalized.
As an observation: the same day the New York Times story ran, SharesPost announced that they had just added CloudFlare to their secondary market platform. This typically means that an insider, such as a founder or early investor, plans to auction some of their shares. They can't put that in the media story, as that would break SEC rules, but positive press is very likely to drive up share pricing.
Because most companies who employ DDoS mitigation services want discretion, and smart DDoS protection companies are discreet. Even Cloudflare does not discuss most of their high-end customers. In this case, Spamhaus gave them permission to write it up.
Cloudflare differs from most DDoS mitigation companies because of their low-dollar, self-service tier. This gives them a reason to blog, and customers and attacks to talk about. Most DDoS companies only provide bespoke service for $$$, and those contracts usually come with silence requirements.
Akamai is not 2nd on the list of DDoS mitigation companies, BTW. I know a couple of companies who left Akamai for DOS Arrest, one of those great companies you'll never see mentioned in a NYTimes article (because that's how their customers want it).
Exactly. DoS Arrest was using BGP anycast for DDoS mitigation long before CloudFlare even existed. We had customers on their service back in 2008.
I'm somewhat skeptical of CloudFlare's low-cost approach to DDoS mitigation. Going for volume on low MRC clients means that you have a lot of potential targets on your network. And attacks against any one customer can always impact every other customer, which puts you constantly at risk, even if you yourself are not attacked all that often.
That's why it's frustrating for those who have a network arch/eng background when we run across the sensationalized pieces. Or when all of the sec-only wonks make this out to be something to combat with a firewall, and put out pieces that never mention BGP, but are somehow helpful.
End rant, but at least there's more of an awareness of good network folk with things related to this sort of story. Not that it's the ideal path, but still.
Cloudflare can toot its own horn all day long. That's just marketing. If journalists take that marketing and reprint it without incorporating other sources then you have a "puff piece" rather than quality journalism. The onus is on the journalists to do a better job investigating.
I think it's worth mentioning that if you say TCP/IP you should also say UDP/IP, etc., and as such it's IP where you really want to focus, because a router can generally ignore whether something is a TCP or UDP packet.
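To make that concrete, here's a minimal sketch of parsing a raw IPv4 header with Python's struct module (the packet contents are hand-built example values). Everything a router needs for a forwarding decision lives in these 20 bytes, and the protocol field (6 = TCP, 17 = UDP) is just one byte it can ignore:

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (no options)."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP; irrelevant to forwarding
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built header for a UDP packet (all field values are examples):
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 48, 0, 0, 64, 17, 0,
                  bytes([198, 51, 100, 9]), bytes([203, 0, 113, 7]))
print(parse_ipv4_header(hdr))
```

The forwarding decision uses only the destination address (plus TTL and checksum handling); the transport protocol above it is opaque to the router.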
He really gets into how, with traceroute, you can get a glimpse of how everything fits together, beyond just "oh, this hop and now that hop". Also, Andrew Blum's book Tubes is a good casual look into how the world of data centers, IXPs, carriers, etc all fit together.
Lol, opposite reaction here. He said pretty much what I thought of this and what I've been telling some others on other places. And in the last few paragraphs he said something that announced the coming of the really juicy stuff, but then it was about the exchanges which I already knew about :(
Would you mind explaining this piece of the story in a little more detail?
"When the attackers stumbled upon this, probably by accident, it resulted in a lot of bogus traffic being injected into the IXP fabrics in an unusual way, until the IXP operators were able to work with everyone to make certain the IXP IP blocks weren't being globally re-advertised."
It's pretty fascinating and I think most of the HN audience, myself included, would be able to understand the actual technical detail.
Updated with a few more details for you (but still trying to keep it in layman's terms for those who don't do advanced networking). I wasn't really expecting this thing to take off or get linked anywhere, it was just a dump of the e-mail I sent this morning so I could link it to Facebook. :)
I have to smile when people are praising you for the plaintext writeup whose purpose was to link from facebook. It's like saying you finally got your house fully off-grid using your hand made windmill that generates power so you can watch the Kardashians. ;)
This incident drives home the fact that there is no one entity responsible for "The Internet." It is run by a network of for-profit companies, governments, and non-profit public and private standards bodies.
I think this just reminds everyone that as large as google, facebook, etc seem to be, they are just a small part of this huge global network we humans have created.
As large as Google, Facebook, Amazon, etc. are on the web, the major telcos have to be even larger (in terms of network size, capacity, amount of fibre, switches, and datacentres) in order to carry the traffic.
That's only true to a point. In addition to the consumer-directed packets, large volumes of traffic for Google and Amazon never leave their networks. Shuttling data between datacenters for Google; moving data within the many cloud services of Amazon; or transferring between the two companies (e.g. GCS to S3). This means it's no longer a given that telcos must be larger.
The closer we get to living "in the cloud", the more our traffic can be seen as a window into operations taking place within and between cloud services.