A Note from one of Cloudflare's upstream providers (cluepon.net)
644 points by jauer 704 days ago | comments



> This is definitely on the large end of the scale as far as DoS attacks go, but I wouldn't call it "record smashing" or "game changing" in any special way. It's just another large attack, maybe 10-15% larger than other similar ones we've seen in the past

Heh. Nice. Yeah, I expressed skepticism that 300G/sec qualified as "largest ever" - I mean, I personally have been hit by 10G+ attacks, and Cogent mostly shrugged. (My Cogent side was down until the target was blackholed at the Cogent border.) I know 10 gigabits is a lot less than 300 gigabits, but I am a nobody compared to the people involved in this little kerfuffle.

-----


Note that CloudFlare itself doesn't claim it is the largest ever, but references the NY Times' claim that it was, and then later goes on to say "that would make this attack one of the largest ever reported."

Other than the headline and the reference to the NY Times article, CloudFlare's claims and the linked article are pretty much in line.

-----


Can you describe Cogent's reaction to your request to blackhole the route? We had a very small incident with XO recently (perhaps a 500Mbps reflected DNS flood), and it took them 24 hrs to get back to me to blackhole the target! We just packet-filtered it in our edge switch. Was Cogent any faster?

-----


If you run a network of any size you should be speaking BGP with your upstream, even if it's using a private AS. You can then announce a prefix to them specially tagged with a "blackhole community" that drops traffic at the edge of their network.

The exact details vary by network, but here is how Hurricane Electric does it: http://www.he.net/adm/blackhole.html
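
(For illustration only, here is a minimal sketch of what the announcement amounts to, assuming an ExaBGP-style setup where a helper process prints API commands. The community value and next-hop below are placeholders, not anything a particular provider uses; take the real values from the linked page or from your own upstream.)

    #!/usr/bin/env python3
    """Illustrative sketch: emit an ExaBGP API command that announces a /32
    tagged with an upstream's blackhole community. The community value and
    next-hop are placeholders, not values any particular provider uses."""
    import sys
    import ipaddress

    BLACKHOLE_COMMUNITY = "65000:666"   # placeholder; your upstream documents theirs
    NEXT_HOP = "192.0.2.1"              # placeholder next-hop inside your network

    def blackhole(target_ip: str) -> str:
        # Force a host route; upstreams typically only accept /32s on a
        # blackhole session or with the blackhole community attached.
        host = ipaddress.IPv4Address(target_ip)
        return (f"announce route {host}/32 next-hop {NEXT_HOP} "
                f"community [{BLACKHOLE_COMMUNITY}]")

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            sys.exit("usage: blackhole.py <target-ip>")
        # ExaBGP reads commands like this from a configured process's stdout.
        print(blackhole(sys.argv[1]))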

-----


Yup. The details are slightly different for Cogent (I think we had it set up as a separate BGP session rather than as a community tag like he.net does it, but that was because my customer requested it that way.)

But yeah, you give them /32s to null, and they drop those /32s at the network edge.

It stops the attack almost immediately, but the problem is that it kills the target site completely.

(Well, often people have web frontends to this, which work poorly when your pipe is completely full, and for that matter, just getting the BGP data to your peer can take a few tries. But yeah, it's still pretty quick and effective compared to calling someone to whine.)

What we really need is to get everyone to implement BCP38 anti-spoofing rules. If everyone did that, we'd be able to block the sources of the problem rather than the destination. But, well, that's unlikely to happen, so for now, you just, ah, finish the job.
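
(For anyone curious what BCP38 actually asks of an edge network, here is a toy sketch of the check: only forward packets whose source address belongs to the prefixes assigned behind the interface they arrived on. The interface names and prefixes are made up for the example.)

    #!/usr/bin/env python3
    """Toy illustration of BCP38-style source address validation.
    All interface names and prefixes below are hypothetical."""
    import ipaddress

    # Prefixes legitimately originated behind each customer-facing interface.
    ALLOWED_SOURCES = {
        "cust-eth0": [ipaddress.ip_network("203.0.113.0/24")],
        "cust-eth1": [ipaddress.ip_network("198.51.100.0/25")],
    }

    def bcp38_permits(iface: str, src_ip: str) -> bool:
        """Return True if a packet with this source may be forwarded."""
        src = ipaddress.ip_address(src_ip)
        return any(src in net for net in ALLOWED_SOURCES.get(iface, []))

    if __name__ == "__main__":
        print(bcp38_permits("cust-eth0", "203.0.113.7"))  # True: source belongs to the customer
        print(bcp38_permits("cust-eth0", "8.8.8.8"))      # False: spoofed source, drop it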

-----


This is what I don't understand about DoS mitigation. So I'd null-route the target, but then I have to move the service to another IP. I will also announce this new IP to the customers, but that would simply redirect the DoS to the new address and take it down too... I mean, there seems to be no way to deal with DoS without throwing out the baby (customers) with the bathwater (attackers), or am I missing something?

-----


Well, services like Cloudflare or Blockdos try to mitigate this problem by absorbing the bad traffic: it gets spread out across their nodes and then filtered and dropped by firewalls with custom rules.

-----


I don't follow. How do they know which traffic is "bad" and which is legit?

-----


With ordinary DDoS attacks, an effective method (which Cloudflare uses) is to prompt you with a captcha before letting you pass, or to drop your connection when you fail often enough. As far as I understand, it was not that hard to block this attack because it follows a recognizable traffic pattern (DNS responses from open resolvers). The actual problem was that the attack was so massive that it clogged the pipes before it could reach a router (belonging to Cloudflare) that would've been able to drop the packets.
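
(Purely as an illustration of that traffic pattern, not a real filter: reflected DNS attack traffic is mostly large UDP datagrams arriving from port 53 of resolvers you never queried, so a crude classifier looks something like the sketch below. The threshold and packet fields are made up.)

    #!/usr/bin/env python3
    """Toy classifier for the DNS-reflection pattern described above.
    Threshold and packet fields are illustrative only."""
    from dataclasses import dataclass

    @dataclass
    class Packet:
        proto: str      # "udp", "tcp", ...
        src_port: int
        size: int       # bytes on the wire

    def looks_like_dns_reflection(pkt: Packet, large: int = 512) -> bool:
        """Heuristic: unsolicited, large UDP traffic sourced from port 53."""
        return pkt.proto == "udp" and pkt.src_port == 53 and pkt.size >= large

    if __name__ == "__main__":
        print(looks_like_dns_reflection(Packet("udp", 53, 3000)))    # True: amplified response
        print(looks_like_dns_reflection(Packet("udp", 40000, 80)))   # False: ordinary traffic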

Disclaimer: I am no network engineer so don't rely on my reply being factually 100% correct.

-----


Captcha. Right.

-----


Meh, it's hard. The first prerequisite to effective DDoS mitigation? You need to have enough capacity (in terms of upstream transit/peering ports) to absorb the whole attack. If not? It's over: either you blackhole the target at your upstream's routers (and thus throw out the baby) and the attack ends, or you don't, and your service is dead for all customers.

That's why I don't advertise any sort of "DoS protection" - I know that attacks that are bigger and badder than my network are fairly common. This is also why I'm not going to take any promises of DoS mitigation from anyone who doesn't have a terrifyingly huge network seriously.

Now, once you have enough upstream port capacity to soak the attack, you then filter the good traffic from the bad. This is a whole 'nother can of "very hard", but it's easy compared to getting the capacity in the first place. Note that this filtering becomes /way/ easier if you have some idea of the sort of traffic you are expecting, but it's still difficult.
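
(As a toy example of the kind of crude filter that knowing your expected traffic enables: a per-source token bucket that drops anything exceeding a budget. The rates below are arbitrary placeholders, not a recommendation.)

    #!/usr/bin/env python3
    """Toy per-source token bucket, one crude way to shed abusive traffic
    once you can absorb it. Rates are arbitrary placeholders."""
    import time
    from collections import defaultdict

    RATE_PPS = 100   # tokens refilled per second, per source (placeholder)
    BURST = 200      # maximum bucket size (placeholder)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(src_ip: str) -> bool:
        """Return True if a packet from src_ip is still within its budget."""
        b = buckets[src_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE_PPS)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False

    if __name__ == "__main__":
        drops = sum(not allow("198.51.100.7") for _ in range(500))
        print(f"dropped {drops} of 500 back-to-back packets")  # most beyond the burst are dropped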

There are "clean pipes" services that claim to do this for you, with varying degrees of credibility. The thing is, CloudFair is one of the smaller companies to offer this. I was looking at the offering from level3, (in my mind, considering their network capacity, probably the most credible provider of such a service. Also, their service claimed to work with all traffic, not just http and the like, so it would work for me.) but it sounded like the price was somewhere along the lines of "give us 25% of your revenue, and we'll give you half the cleaned capacity you need." I mean, even the regular level 3 bandwidth is between one and two orders of magnitude more expensive than the cogent/he.net mix that is common in my industry, so uh, yeah. I didn't spend the requisite six months with the salesman to get the real price, but I think "more than I can afford" is a likely guess.

I mean, the idea here, usually, is that the network providing this service is large enough that it has a whole bunch of peering connections and can filter the incoming traffic fairly close to the source. Even if you've got hundreds of gigabits of capacity at one location, if it's all at that one location, it's very likely that something else is going to gum up the works between, say, Austria and you. If you've got a giant, global network, though, your traffic from Austria goes to your POP in Austria, where you can filter the "bad" traffic (assuming you've figured out how to do that).

And really, you don't have to filter /all/ the attack traffic, just enough that the target isn't completely overwhelmed. Like spam-filtering or anything else, nothing is 100%.

And that's how most of the low-end hosting world feels about it. The upshot is that we throw out the baby; the small customer who gets hit repeatedly by large DDoS attacks, generally speaking, has to change to a different provider, 'cause they get kicked off. I mean, if you are paying someone $20/month, and your enemies take that service down hard? yeah, after the problem is fixed, you are very likely to need to find a new provider.

-----


Good stuff, thanks a lot.

-----


I thought that it was the largest ever, but it wasn't enough larger to really be notable, just sort of the natural progression of internet traffic.

-----


Exactly. The author already said "10-15% larger than other similar ones we've seen in the past". Doesn't that make it record-breaking by definition? :)

-----


Well, what I got from the article was that the person was saying "We aren't the whole internet; this is larger than what we normally see, but not by a huge margin" - which is to say, it's possible that other providers have seen larger DDoS attacks.

-----


Record-breaking by the strict definition? Yep.

But when the relatively non-technical populace hears "record-breaking", a completely different set of emotions is triggered in their brains. It's just like any other overly grandiose marketing headline, designed to excite those who aren't familiar with the topic.

-----


It does. But consider all the other times this record must have been broken -- it wasn't a big news story. Incremental increases in the bytes/second record just aren't, by themselves, that interesting.

-----


That's from GTT/nLayer's perspective, which is just one of many Tier 2 network providers. Tier 1 networks are more likely to see the largest attacks.

-----


Yeah, I highly doubt 300G/sec is the largest attack ever as well. I've seen (well, we only saw a gigabit of it... our host saw the rest) a 41G/sec attack. I imagine large attacks like this happen all the time but don't make the news; they just get fixed and the companies move on, because it happens so often.

-----


I never read the Gizmodo piece, because I think they're mostly trolls, but I really liked this response. Kudos to him.

-----


The Gizmodo piece was better journalism than the NY Times or BBC articles, which regurgitated CloudFlare's over-the-top PR uncritically.

-----


I disagree. Calling the story a straight-up lie was a mistake, and link bait to boot. I thought Biddle failed to understand what the risk to "The Internet" truly was, and how an attack like this (based on a known architecture vulnerability), if repeated or made more common or effective, could cripple a tier 1 provider and cause serious routing issues, among other things.

-----


Better, but still a long way from being factually correct.

-----


NYTimes especially has been hyping up cyberattacks and cyberwarfare a lot lately.

-----


In all fairness, the Gizmodo article was pretty well researched and informative. It was more like an anti-hype article - because that's what the headlines were. Interesting things did happen, but the internet was not "in danger" of going down.

-----


That was a very nicely worded response that echoes what, I hope/believe, most of us think.

Plus: that was in ASCII. Damn. :)

-----


Word. Respects given to serving in text/plain (as is proper in this context).

-----


Uber respect if the file had gzip content-encoding.

-----


Or if the output was gzip without an encoding header. The FBI will be all over your place the next day with a warrant for the encryption key :D

-----


As someone who's fallen in love with writing absolutely everything in straight plain text, seeing this brought a warm feeling to my heart : )

-----


For those of you who don't know, RAS is one of the people who run/ran nLayer (now part of GTT), which is CloudFlare's primary provider.

-----


RAS has been pretty awesome in the community. He was actually the cofounder of the company we called nLayer. He's given a bunch of very valuable "101" talks at NANOG (think of this as the Hacker News for networking people), and until the acquisition, he was pretty active on the mailing list.

-----


What if this had been Akamai and not Cloudflare? Would we even hear about it? Why is Cloudflare in the news regularly and never Akamai? Do they just like drama? Is that good or bad for their business?

-----


Cloudflare uses it as advertising. "Look at this attack we mitigated for our client!", which then gets picked up by HN and similar sites because Cloudflare does write fairly good blog posts about it. As others have said, Cloudflare does a good job explaining parts of the internet that most of us don't get to see, so it gets attention.

-----


It gets picked up by HN because CloudFlare posts it to HN themselves.

That said, they're interesting articles if you ignore the hype.

-----


More like they invent drama to cover their bad days. Akamai tells its clients when they are under DDoS; Akamai didn't really notice anything going on here. CF just had a bad day and decided to make a story out of it. One other time when CF claimed a DDoS, it was because the Google bot decided to up its crawl rate and took down many CF sites. CF told clients it was a DDoS and then later admitted it was an issue with Google.

-----


IMO (and I'm not very familiar with the similarities between Akamai and CF), if Akamai handles similar issues without driving PR/marketing efforts based on them, that's their loss and probably a bad business play.

In my mind, this is similar to an antivirus company saying "hey look at all these nasty viruses out there, but we find and destroy them effectively."

This really just seems like effective case-studying on CF's part. It's arguably their job to hype it as much as possible (though of course they are responsible for the inaccuracies).

-----


"if Akamai handle's similar issues without driving pr/marketing efforts based on them that's their loss and probably a bad business play."

Akamai has been around since 1998.

Cloudflare since 2009 (both according to Crunchbase, which jibes with the dates the domains were registered, if I'm not mistaken).

Consequently, the established company has more to lose and less to gain from the publicity than the newer company.

For a newer company, any publicity is good publicity, even if it's over a negative event, because you have less to lose and more to gain, and people become familiar with your name.

Taking this to an extreme example (to make a point): let's say you start a new hamburger restaurant. You have no customers. On day 5 some people get sick (just sick, not deathly sick). All of a sudden you are in the local paper with a headline and a story that people just skim, but they see your name. Almost guaranteed, you will pick up business from the mention. Even though it's bad PR, down the road people will remember you and either forget or not care about the negative story they read 6 mos. prior (assuming they even read the story and didn't just see the headline).

-----


Akamai is not really in the DDoS mitigation business. They are in the CDN business, and "DDoS protection" is just one way they market their CDN resources.

-----


Actually, Akamai has very much been pushing their DDoS protection and "antivirus" snake oil ever since the CDN market started collapsing.

Remember Akamai is the Oracle of the CDN world. They're in a bit of a shuffle right now as their competitors have long caught up and offer the same for less, minus the legendary Akamai-arrogance.

-----


Could you elaborate why the CDN market is collapsing? Genuinely interested.

-----


Collapsing in the sense that prices have been in free fall for a while.

Akamai sees their market shrink from both ends. At the top end, companies like Netflix start building their own networks because they want more control, less reliance on a third party, and the cost savings. At the low end, you have pseudo-CDNs like Cloudflare eating into their snake-oil business, and commodity CDNs like CloudFront grabbing the long tail.

Connectivity in US/EU has also gotten so good on average (and bandwidth so cheap) that the body of mid-range sites that feel a genuine need to enter an expensive conversation with a "traditional" CDN is evaporating. And this is doubly true for Akamai where that conversation tends to be particularly unpleasant.

Akamai and friends are of course well aware of this and have long shifted their focus to the emerging markets (Asia, Africa) and mobile. Time will tell how long that can keep them afloat before they are marginalized.

-----


Probably because Amazon entered the game...

-----


> In my mind, this is similar to an antivirus company saying "hey look at all these nasty viruses out there, but we find and destroy them effectively."

I'd upvote such an article if it had the technical depth/layman readability of a Cloudflare puff piece.

-----


As an observation, the same day the New York Times story ran, SharesPost announced that they had just added CloudFlare to their secondary-market platform. This typically means that an insider such as a founder or early investor plans to auction some of their shares. They can't put that in the media story as that breaks SEC rules, but positive press is very likely to drive up share pricing.

-----


We have no plans to allow CloudFlare's shares to be traded via SharesPost or any secondary market. We'll investigate whether SharesPost has done something without our permission.

-----


Because most companies who employ DDoS mitigation services want discretion, and smart DDoS protection companies are discreet. Even Cloudflare does not discuss most of their high-end customers. In this case Spamhaus gave them permission to write it up.

Cloudflare differs from most DDoS mitigation companies because of their low-dollar, self-service tier. This gives them a reason to blog, and customers and attacks to talk about. Most DDoS companies only provide bespoke service for $$$, and those contracts usually come with silence requirements.

Akamai is not 2nd on the list of DDoS mitigation companies, BTW. I know a couple of companies who left Akamai for DOS Arrest, one of the great companies you'll never see mentioned in a NYTimes article (because that's how their customers want it).

-----


Exactly. DoS Arrest was using BGP anycast for DDoS mitigation long before CloudFlare even existed. We had customers on their service back in 2008.

I'm somewhat skeptical of CloudFlare's low-cost approach to DDoS mitigation. Going for volume on low MRC clients means that you have a lot of potential targets on your network. And attacks against any one customer can always impact every other customer, which puts you constantly at risk, even if you yourself are not attacked all that often.

-----


Definitely good for their business that many people think it was a groundbreaking attack and CloudFlare stood strong against it.

-----


> But, having a bad day on the Internet is nothing new.

That's my new quote of the year.

-----


It amazes me how little I know about the overall internet traffic infrastructure.

-----


That's why it's frustrating for those of us with a network arch/eng background when we run across the sensationalized pieces. Or when all of the sec-only wonks make this out to be something to combat with a firewall, and put out pieces that never mention BGP but are somehow supposed to be helpful.

End rant, but at least stories like this bring more awareness of good network folk. Not that it's the ideal path, but still.

-----


Who is guilty of the sensationalized piece? My impression was that it was cloudflare itself.

-----


Cloudflare can toot its own horn all day long. That's just marketing. If journalists take that marketing and reprint it without incorporating other sources then you have a "puff piece" rather than quality journalism. The onus is on the journalists to do a better job investigating.

-----


Welcome to modern media… it's got very little to do with "journalism"…

One of my local major media outlets has sacked ~600 writers and editors recently, all the while advertising voraciously for tech staff for its real estate and travel websites…

-----


The New York Times and BBC took their account at face value and published it as fact.

-----


I avoid most articles by John Markoff (NY Times) as he is prone to exaggerating, but he showed up on the radio talking about this story.

It seems that's the way it works now: press releases turn into articles.

-----


Same here. Can anyone point to a good starting point for someone who understands TCP/IP to start understanding the Internet topology?

-----


You might want to check out the NANOG mailing list. There is also a list of Internet exchange points [1]; those pages are packed with details about how many of them function.

TCP/IP and Internet topology are different things. Think of it this way: Internet topology is the highway, and TCP/IP is a vehicle that travels the highway (along with UDP, ICMP, etc.).

[1] http://en.wikipedia.org/wiki/List_of_Internet_exchange_point...

-----


Just please don't post to NANOG when your cablemodem/DSL stops working. sigh

-----


I had to post a response to this:

LOL for 1 minute...

Also: don't post if your business DSL stops working. Or your ISP. But maybe if you find your BGP from your upstream is poisoned/severely b0rken.

-----


I would NOT advise the NANOG mailing list as a starting point. It goes off on tangents very quickly. NANOG presentations and Wikipedia might be a good place to start.

-----


I think it's worth mentioning that if you say TCP/IP you should also say UDP/IP etc., and as such it's IP where you really want to focus, because a router can generally ignore whether something is a TCP or UDP packet.

-----


TCP/IP generally refers to the whole Internet protocol suite.

see: http://en.wikipedia.org/wiki/Internet_protocol_suite

-----


I'd start with the slides from a presentation by RAS on traceroute: http://www.nanog.org/meetings/nanog47/presentations/Sunday/R...

He really gets into how, with traceroute, you can get a glimpse of how everything fits together, beyond just "oh, this hop and now that hop". Also, Andrew Blum's book Tubes is a good casual look into how the world of data centers, IXPs, carriers, etc. all fit together.
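
(If you want to poke at this yourself, here is a rough, illustrative sketch of the idea: run a traceroute and reverse-resolve each hop, since router hostnames often encode the carrier, city, and link type. It assumes a Unix-like host with the traceroute binary installed; it is not taken from the slides.)

    #!/usr/bin/env python3
    """Rough sketch: traceroute to a target, then reverse-resolve each hop
    so the carrier/location hints in router hostnames become visible.
    Assumes a system traceroute binary is available."""
    import re
    import socket
    import subprocess
    import sys

    def hop_ips(host):
        out = subprocess.run(["traceroute", "-n", host],
                             capture_output=True, text=True, check=False)
        for line in out.stdout.splitlines()[1:]:
            m = re.search(r"\d+\.\d+\.\d+\.\d+", line)
            if m:
                yield m.group(0)

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "example.com"
        for ip in hop_ips(target):
            try:
                name = socket.gethostbyaddr(ip)[0]   # hostname often names the router/carrier
            except OSError:
                name = "(no PTR record)"
            print(f"{ip:15s}  {name}")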

-----


Ars Technica had a pretty good article about peering and transit: http://arstechnica.com/features/2008/09/peering-and-transit/

I don't know if that is what you were looking for, but it does a pretty good job explaining that part.

-----


drpeering.net

I like his map of US peering as well:

http://drpeering.net/white-papers/shared_images/USInterconne...

-----


For humor, there's always http://www.routergod.com/ Sadly, it hasn't really been updated in a while, but it was technically fairly accurate.

-----


Lol, opposite reaction here. He said pretty much what I thought of this and what I've been telling some others in other places. And in the last few paragraphs he said something that announced the coming of the really juicy stuff, but then it was about the exchanges, which I already knew about :(

-----


This is a great writeup and aligns with what we saw at CloudFlare. The most interesting part of the attack was that the attackers went after the IXs.

-----


PS - This was written by nLayer/GTT's CEO, one of CloudFlare's bandwidth providers. They have been a terrific partner and were extremely helpful as we mitigated this attack.

-----


Would you mind explaining this piece of the story in a little more detail?

"When the attackers stumbled upon this, probably by accident, it resulted in a lot of bogus traffic being injected into the IXP fabrics in an unusual way, until the IXP operators were able to work with everyone to make certain the IXP IP blocks weren't being globally re-advertised."

It's pretty fascinating and I think most of the HN audience, myself included, would be able to understand the actual technical detail.

-----


Updated with a few more details for you (but still trying to keep it in layman's terms for those who don't do advanced networking). I wasn't really expecting this thing to take off or get linked anywhere; it was just a dump of the e-mail I sent this morning so I could link it to Facebook. :)

-----


Thanks for the very informative write up!

I have to smile when people are praising you for the plaintext writeup whose purpose was to be linked from Facebook. It's like saying you finally got your house fully off-grid using your hand-made windmill that generates power so you can watch the Kardashians. ;)

-----


This may be the only HN comment I've ever committed to Evernote. Well played :)

-----


Thanks for participating in Hacker News... please stick around!

-----


Not that it matters much, as the mail speaks for itself, but according to the website: CTO

-----


Fantastic response. Thank you so much for writing and sharing. Cuts through all the spin and FUD effectively.

-----


This incident drives home the fact that there is no one entity responsible for "The Internet." It is run by a network of for-profit companies, governments, and non-profit public and private standards bodies.

-----


>The next part is where things got interesting, and is the part that nobody outside of extremely technical circles has actually bothered to try and understand yet.

Isn't this talked about in CloudFlare's write-up?

-----


I think this just reminds everyone that as large as google, facebook, etc seem to be, they are just a small part of this huge global network we humans have created.

As large as Google, Facebook, Amazon, etc. are on the web, the major telcos have to be even larger (in terms of network size, capacity, amount of fibre, switches, datacentres) in order to carry the traffic.

-----


That's only true to a point. In addition to consumer-directed packets, large volumes of traffic for Google and Amazon never leave their networks: shuttling data between datacenters for Google, moving data within the many cloud services of Amazon, or transferring between the two companies (e.g. GCS to S3). This means it's no longer a given that telcos must be larger.

The closer we get to living "in the cloud", the more our traffic can be seen as a window into operations taking place within and between cloud services.

-----


My first reaction to these kinds of news is fear. As someone who builds stuff for the web, I really hate the idea of some malicious being trying to purposefully ruin your work.

My second reaction is that of "bring-it-on". Basically this is an impulse for improvement, and as with any major threat you either stand your ground or get run over.

-----


Nice response, nice sentiment, nice format. I often dislike the sensationalism surrounding attacks or viruses - it makes people distrust and gives others excuses for issues.

-----


Richard you can hit me with your cluepon any day.

-----


Related, http://cloudscare.com

-----


Straight-up lie. Talking with Google engineers and Amazon engineers, they say this is not happening. So unless someone picked a fight with CF, it is more likely CF is just having a bad day.

-----



