
Still not a good move for Akamai, though.

I get him speaking up for them, pointing out that the hosting was free, but Akamai is now the CDN that got bullied into kicking someone off their service against their own will.

Terrible PR, and that mud will stick in tech circles. Akamai folds under pressure.

I know it's a crude comparison, but we don't negotiate with terrorists for a reason.




I would bet money that the attack was truly epic... to the point where it was impacting (or was about to impact) other Akamai customers. Because you are right, this is terrible PR, and Akamai knows it.

DDoS mitigation is fundamentally a problem of "who has more bandwidth?" - if the attacker has more bandwidth than you (and how much bandwidth you have depends on "to where") - it's over.

The problem is that the economics, right now, are heavily tipped in favor of the attacker.


  I would bet money that the attack was truly epic
From the article: "The assault has flooded Krebs' site with more than 620 Gbps of traffic — nearly double what Akamai has seen in the past."

Sounds pretty epic.


If it takes a mere 620 G-bit to screw Akamai, then they're obviously not much of a content distribution platform. I only need a few thousand compromised machines in the right countries to run 5+ T-bit scale attacks. This isn't the 90s any more, and Akamai has no excuse not to have improved bandwidth capabilities, unless they're doing what the telecom companies do: pocketing that money instead of using it to improve their own infrastructure.


I'm going to call shenanigans. Do a quick Google for the largest DDoS attacks on record, and this is one of them, if not the largest. Pulling from places like Arbor and their yearly reports, the largest previously seen were ~500Gbps. I seriously doubt you and "a few thousand" machines can magically be 8-10x stronger than the largest attacks on record.

I would love to see some sources that ANYONE can get close to that number. Short of the NSA bringing its full power to target a specific pipe, I don't think we're there yet.


I'm willing to bet you're not including compromised backbone routers and symmetrical gigabit-fiber connections. There's enough of the latter in the USA, to homes, to do that much and then some.


But what you're arguing isn't reality. Show me a source article where someone has compromised a backbone router and then used it for DDoS. This is almost exactly what I was addressing when I said "unless you use the power of the NSA to target a single pipe." Even in a hypothetical scenario where you have gotten your hands on one: how long do you think companies are going to let their half-million-dollar router be consumed for a DDoS before they take notice?

I think it's pretty obvious you don't understand how internet traffic really flows, when you think "all I have to do is compromise 600 PCs with a Gb connection and I can launch a 600Gbps DDoS."


"I think it's pretty obvious you don't understand how internet traffic really flows, when you think 'all I have to do is compromise 600 PCs with a Gb connection and I can launch a 600Gbps DDoS.'"

I've been doing networking for 26 years. One of my largest jobs was mitigating the Slashdot effect for two high-profile sites. I know very well how a DISTRIBUTED denial of service attack works, can work, and I have run many of my own in checking security measures for those I consult for. Compromising backbone routers is actually fairly simple. Too much reliance upon software stacks and not enough upon sound hardware logic design that's proofed against attack in the first place.


>Compromising backbone routers is actually fairly simple.

Yes, the state of security on routers, even some rather large routers, is embarrassing, but when routers have business-critical amounts of bandwidth? they are attached to pagers.

Regardless of what you think of us, the folks attached to the pager, when you start messing with big important routers, at least if you mess with them to the point where it interferes with the business needs of the people who are paying money for said routers? you are going to wake us up. You are going to have a really hard time using these routers for much more than an hour before there is someone on-site trying to fix it.

Sure, the state of security for monitoring is also abysmal. If you wanted to put in per-router effort, I'm sure you could take my pager offline when you take my router offline. But customers will notice, customers will complain, and at almost every place where I've been on pager, there have been alternate routes to get to me. Hell, I once woke up to a very excited office manager shouting and pounding on my door because the whole office was down, I was sleeping in, and my pager wasn't charged. It freaked the hell out of my roommates; the office manager had a thick accent, and was built like someone out of a HK action film. They thought for sure I was gonna get messed up because I owed someone money.

But yeah, I mean, sure, with sufficient subtlety, you could use a small amount of the available bandwidth on a poorly-monitored backbone router. And a lot of them are poorly monitored. But my point is just that once you start using them hard enough that it interferes with the business needs of the people paying for them? Regardless of how terrible the monitoring system is, people will notice. Security isn't the only thing that is embarrassing on those routers; businesses are used to this shit failing, and even if most people don't know what to do beyond turning it off and back on, when there are dollars involved, there are procedures for getting someone who does know how to fix it on-site.


Especially since Akamai has positioned themselves in the CDN market as "WE cost more, but we are the best there is." Hell, a ton of other "CDNs" are merely reselling Akamai with friendlier contracts and features.


I was impressed by that number too, but after reading your comment it seems awfully small. In 2016, what kind of attack do you expect a CDN like Akamai to handle?


Don't worry about his comment. He has no idea what he is claiming. These recent attacks are labeled record breaking because they actually are. This one at 620Gbps and the recent ~1Tbps against OVH are the biggest in history, and still 5x less than what he claims.

Of course we will get there sooner than any of us in infosec want, but he is almost an order of magnitude off of what realistic threats look like.


Almost an order of magnitude? You obviously don't know what that means. I'm half an order of magnitude off by your own words (protip: an order of magnitude means you add a zero to the end of the number you're using), and you don't know much about CDNs if you don't think that multiple terabits of traffic are flowing through Akamai every second already.

Currently, I do the physical networking builds for a mental health company. We're already deploying 100 G-bit in these offices as primary connection trunks, because a lot of these services will be done remotely over video and audio.

I could open up a 20,000-user Camfrog Video Cluster chat room and could saturate a T-bit connection link just like that the second it's half-full. Have you ever used (let alone seen) a T-bit scale program before? Camfrog's been out for over a decade.


Tangent: orders of magnitude are exponential - if you want to say "half an order of magnitude" you have to do it along the exponential curve. 5x is about 0.7 orders of magnitude; half an order of magnitude is a bit over 3x.
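For anyone who wants to check the arithmetic, orders of magnitude are just base-10 logarithms:

```python
import math

# An order of magnitude is a factor of ten, so the number of orders of
# magnitude between two quantities is the base-10 log of their ratio.
def orders_of_magnitude(ratio):
    return math.log10(ratio)

print(round(orders_of_magnitude(5), 2))  # 0.7  -- "5x" is about 0.7 orders
print(round(10 ** 0.5, 2))               # 3.16 -- half an order is ~3.16x
```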


"Tangent: orders of magnitude are exponential"

Order of magnitude, n.: "a class in a system of classification determined by size, each class being a number of times (usually ten) greater or smaller than the one before."

There are very few disciplines where OOM is done by exponential form (astronomy/star magnitude being one of them.) It's almost always base-ten. When you use electrical conductivity in mineral identification, you're always multiplying a number by ten multiple times over. The effect of that? You either add or remove an equal amount of zeros to the original number being multiplied.


Chattanooga has symmetrical G-bit fiber to the home. If 5,000 Chat-town residents got uppity, think of what would happen to Akamai if they couldn't deal with 600-ish G-bit.


Yeah, if that network was designed to handle a continuous 1 Gbit per home.


It is, as it's a municipal network and not a shitty company-owned one. Designed right from the ground up from day one, much like the fiber service in Sandy, Oregon (300/300 for $40, no limits, caps, throttling, nada.)


It's accelerating!


> If the attacker has more bandwidth than you (and how much bandwidth you have depends on "to where" ) - it's over.

Is there no way to stop such attacks by coordinating with your upstream ISP, or with the sources of the traffic? Why do backbones allow it to be carried? Is the problem that these attacks are too many different individual streams to identify and filter?

It seems like there ought to be some way to hierarchically punt the problem to network operators. "Your network is contributing 10 gigabytes per second to this DDOS attack. Identify the sources and shut them down." - times each identifiable traffic stream.

Is there no way to capture a list of all IPs involved in the traffic and quickly distribute a "shut off this device" request to the origin network? Maybe a good-faith collaboration of different ISPs could result in quick shut-downs. Or if networks don't cooperate, then the next-nearest border does it for them (and if they don't like the policies under which their neighbor suspends the traffic, they can sign up to do it themselves).

Imagine something like a mini automated DMCA-type request. "This IP is DDOSing me", signed by the operator of a reputable network, having the effect of suppressing origin traffic from the IP when received by a reputable network. (Any abuse of the mechanism causes the network to lose its privilege to participate, and DMCA requests related to that network fall upon its neighbors.) Perhaps the suppression would be destination-limited so as not to be vulnerable to too much abuse of the mechanism.
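A minimal sketch of what such a signed suppression request could look like (the field names and shared-key HMAC scheme here are made up for illustration; a real design would use per-peer keys or public-key signatures):

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between two cooperating networks.
SHARED_KEY = b"example-peer-key"

def make_suppression_request(attacker_ip, victim_ip, reporter_asn):
    """Build a signed 'this IP is DDoSing me' message."""
    body = json.dumps({
        "attacker": attacker_ip,
        "victim": victim_ip,          # destination-limited, per the idea above
        "reporter_asn": reporter_asn,
    }, sort_keys=True)
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_request(req):
    """The receiving network checks the signature before suppressing traffic."""
    expected = hmac.new(SHARED_KEY, req["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["sig"])

req = make_suppression_request("203.0.113.7", "198.51.100.10", 64500)
print(verify_request(req))  # True
```

A forged or tampered request would fail verification, which is the hook for the "lose your privilege to participate" part.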


There "is" such a way, see RFC 5575, but it seems to be little known/implemented/deployed/enabled/something.

5575 is a BGP extension that says "for packets from x to y, do z". Assuming a router knows on which of its input ports such packets arrive (and during a DDoS it doesn't have to wait long for the next packet), it can disseminate the flow specification towards the actual source(s) quickly, so the packets can be dropped quite far from the DDoS target, in the ideal case as soon as they reach an honest ISP.

Egress filtering should kill much of the spoofed-origin traffic, and thus much of the rest — if deployed.

I'd love to know why 5575 isn't deployed. Memory concerns maybe?
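For the curious, a 5575 flow spec is essentially a packet-match predicate plus an action. This isn't real BGP, just a toy model of the shape of a rule (the class and field names are my own):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class FlowSpecRule:
    """Toy model of an RFC 5575 flow spec: 'for packets from x to y, do z'."""
    source: str        # prefix the attack traffic claims to come from
    destination: str   # prefix under attack
    action: str        # e.g. "discard" or "rate-limit"

    def matches(self, src_ip, dst_ip):
        return (ip_address(src_ip) in ip_network(self.source)
                and ip_address(dst_ip) in ip_network(self.destination))

# A rule disseminated upstream so packets get dropped far from the target.
rule = FlowSpecRule("203.0.113.0/24", "198.51.100.10/32", "discard")
print(rule.matches("203.0.113.50", "198.51.100.10"))  # True
print(rule.matches("192.0.2.1", "198.51.100.10"))     # False
```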


It is possible to mitigate, but it takes time.

> Is there no way to capture a list of all IPs involved in the traffic and quickly distribute a "shut off this device" request to the origin network?

Now hackers have a new attack vector.


Without net neutrality, economics would likely fix this. "We'll limit our 60 Gbps traffic to this network block because our peering arrangement makes that expensive."

Couldn't an ISP just throttle traffic to a particular network block on receipt of an authorised request?

And that seemingly provides a general solution. Record output to particular network blocks and throttle traffic when it peaks beyond statistically normal bounds? Basically, applying damping.
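The "statistically normal bounds" part could be as simple as a rolling mean/stddev check per destination block. A sketch with made-up numbers:

```python
import statistics

def should_throttle(history_gbps, current_gbps, k=3.0):
    """Damp traffic to a network block when it spikes more than k
    standard deviations above its recent mean."""
    mean = statistics.mean(history_gbps)
    stdev = statistics.pstdev(history_gbps)
    # Floor the deviation so a perfectly flat history doesn't flag tiny wobbles.
    return current_gbps > mean + k * max(stdev, 0.1)

# Recent traffic toward one /24, in Gbps:
history = [1.2, 0.9, 1.1, 1.0, 1.3, 1.1]
print(should_throttle(history, 1.4))   # False: within normal bounds
print(should_throttle(history, 60.0))  # True: obvious spike
```

The catch, as the reply below notes, is picking k: legitimate traffic is spiky too.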


The problem is a lot of traffic is spiky if you look at it from the view of any given network provider to a site that isn't always getting traffic.

So distribute the DOS traffic enough and it will be very risky for providers to throttle it without appearing to be overall slow for a lot of legitimate traffic.

The other problem is determining what an "authorised request" involves. As it stands it is already a problem that people can - and now and again do - manage to mess up routing tables by sending broken route announcements, re-routing large address ranges to the wrong location.

Too much of internet routing still relies on a large amount of trust. We unfortunately need less of that, not more.

Eventually that might lead us to a situation where we could properly authenticate and authorise requests like what you suggest, but today it is high risk.


One thing perhaps ISPs could do is share data on these attacks, and work to shut off the botnets after the fact, so they can't be used again. So for example if Akamai shared information with ISPs about which origin IPs were involved in the DDOS attack, the ISPs could reactively reach out to the affected customers. Or the ISPs could even examine their own logs after hearing a report of a large-scale DDOS attack to determine if any of their own nodes were part of it. There may also be measures ISPs could take on their CPE routers to detect infected machines on the customer network and warn them more proactively.


And governments can easily silence "nuisances" like human rights lobbyists.


Governments can already block traffic at their border and can organize attacks against foreign targets (remember the China vs. GitHub case).

Oppressive governments can do even worse things to human rights activists, like banning a person from international travel (like Mr. Snowden or the Wikileaks founder) and getting them extradited from other countries.


The Internet is supposed to be a dumb series of pipes. However, in practice you can of course try a traceroute and then complain to the ISP, which may or may not cooperate. If the ISP does not cooperate, then you can call their upstream provider and kindly ask them to call their customer and yell at them, but note that money is flowing in the wrong direction for that to work.

Plus you get the problem that you are usually seeing DDOS traffic from innocent bystanders; if the ISP is shutting off a compromised home router, then their customer will not have internet for some length of time.


Shutting off Internet for people housing compromised devices does not seem like a bad thing, and it is probably the best solution that exists.


It probably seems like a bad thing to the customer paying for internet, and since they're the one paying for service they do get some say in the matter.

Which do you think costs Comcast more? Temporarily disabling 20,000 customers' internet connections, or forcing Akamai to drop Brian Krebs as a customer?


I think someone who has equipment engaged in DDoS attacks might be a bigger cost to the telecom than they are paying anyway. Those 20,000 customers are either using disproportionately more bandwidth and resources or they are at least unwittingly involved in illegal activity, so Comcast would have either a financial incentive or social obligation to temporarily suspend their service. The company should be complaining to the customer, and they should be worried about losing their service. If every competitive provider had these very narrow standards, customers should be more worried about not having Internet because of their security illiteracy.


Why not simply block traffic from the specific unknowing customer to the target? I.e. Allow all normal traffic from the customer to go through, except for the traffic that was identified as being part of a DDoS? Is it expensive for an ISP to do that type of intelligent/rule-based routing?


It would probably take too long to get the ISP to comply, and I suppose the targets keep changing with botnets.



You cannot stop the attack because the web is decentralized. You cannot tell another Autonomous System to stop sending traffic to you.

> Why do backbones allow it to be carried?

Because they get paid for any traffic passing through. The more traffic they have the more profit they earn.


You can capture the IPs, but they aren't going to be the true originating IPs. Usually they are just random IPs generated on the fly. Or sometimes the attacker will pick a company or ISP and spoof the packets with those IPs.
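To illustrate why a captured list of source addresses is mostly useless: generating spoofed sources is trivial for the attacker (a sketch). This is exactly the traffic that egress filtering at honest ISPs would kill.

```python
import random

def random_spoofed_source():
    """An attacker stamping each packet with a random 32-bit source
    address; the 'sender' seen in the capture never sent anything."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

# A capture of the flood shows almost entirely distinct "sources",
# so a blocklist built from them blocks nobody who matters.
sample = {random_spoofed_source() for _ in range(1000)}
print(len(sample) > 990)  # True: nearly all unique
```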


> I would bet money that the attack was truly epic...

Seems like it was not only one of the most expensive attacks so far (and by far the biggest one to have ever hit Prolexic, according to them), it also made little use of reflection or amplification, making it much harder to mitigate.

> to the point where it was impacting (or was about to impact) other Akamai customers.

That's exactly the scale it had reached, and Akamai provided free service to Krebs, which was nice of them but only to the extent that it wasn't significantly impacting customers.


>That's exactly the scale it had reached, and Akamai provided free service to Krebs, which was nice of them but only to the extent that it wasn't significantly impacting customers.

I... wonder if anything different would have been done for a paying customer. I mean, if the attack was big enough to take down other customers, and if Akamai had the choice between kicking one customer and all customers being down?


I'd expect them to have gone further and probably invested/committed more resources to it.


Why's that? The theory is that it was impacting other customers. If the notion is that they kick off the low-revenue person to preserve the larger revenue stream, then it would seem that the only way to get real protection from Akamai is to be their largest customer.


> Why's that? The theory is that it was impacting other customers. If the notion is that they kick off the low-revenue person

Krebs was not a "low-revenue person", he was a "no-revenue person": they provided the service pro bono. With customers they'd have a contract, and while I don't know Akamai's contracts I assume they either have specific service clauses and/or "use clauses" where protection costs get charged to the customer.


Nope. Contract basically says "here are the service levels Akamai commits to meet" and "Customer's sole remedy is to cancel service if Akamai does not meet these levels." Looking at their contract language does not make me feel any better about whether they'd drop a paying customer in similar circumstances. (Quoting from a version available online at http://contracts.onecle.com/akamai/msa.shtml)

8.1.3 Akamai shall meet or exceed the network availability, capacity and operations levels as set forth in Section 2 above; provided that Customer's sole remedy for the breach of this provision by Akamai shall be the termination rights set forth in Section 10.2 below.

10.2 TERMINATION UPON DEFAULT. Either party may terminate this Agreement in the event that the other party materially defaults in performing any obligation under this Agreement and such default continues unremedied for a period of thirty (30) days following, written notice of default; provided, however, that in the event this Agreement is terminated by Customer due to Akamai's breach of its representations under Section 8.1.3 above and failure to cure, Customer's sole remedy shall be its election to terminate the Agreement without further liability to either party (except for Customer's obligation to pay all accrued and unpaid fees outstanding at the date of termination).


Having not committed a 620 GBps DDoS attack on a blogger I don't like lately, how on earth do you generate that much traffic without reflection or amplification?

Asking for a friend. /s


> There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

http://webcache.googleusercontent.com/search?q=cache:http://...


Pure speculation, but you can rent a VPS or dedicated server with a 1Gbit/s pipe almost anywhere with a few clicks. Most of these machines are not regularly patched and often run vulnerable PHP software.

I guess if you have a few 0-days for common stuff that is often hosted you can easily!? collect a few thousand servers that blast out traffic.

Additionally, more machines have faster access to the internet as fiber gets more popular. If you have a decent-sized botnet that can pump out traffic at 50mbit or so on average, you can also get some serious traffic through, though I'm not sure about 620GBps. I guess if you mix all of these and put some amplification attacks on top it's possible.
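The back-of-the-envelope arithmetic for those scenarios (theoretical ceilings with made-up host counts, not measurements; real attacks lose plenty to per-host and path limits):

```python
# Aggregate upstream capacity of a botnet, in Gbps.
def aggregate_gbps(hosts, mbit_per_host):
    return hosts * mbit_per_host / 1000  # Mbit -> Gbit

# A thousand compromised 1 Gbit/s VPS boxes:
print(aggregate_gbps(1000, 1000))  # 1000.0 -- a terabit per second on paper

# A consumer botnet averaging 50 Mbit/s upstream per machine needs
# roughly 12,400 bots to reach the reported attack size:
print(aggregate_gbps(12400, 50))   # 620.0
```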


620 Gbps, not GBps. Don't create more confusion between (B)ytes and (b)its. You and I know it's bits, but other readers might not.
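The difference matters by a factor of eight:

```python
# Gbps = gigabits per second; GBps = gigabytes per second; a byte is 8 bits.
attack_gbps = 620              # the reported figure, in gigabits/s
attack_gigabytes_per_s = attack_gbps / 8
print(attack_gigabytes_per_s)  # 77.5 -- 620 Gbps moves 77.5 GB of data each second
```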


Oh. My. God. This is in text so it is going to come across as sarcastic but I 100% guarantee you it is not, I am 100% serious. I have been making that mistake for over a decade and yes I do know the difference but I've still been writing it down wrong every single time it's come up. Why did nobody tell me before now?!

I am genuinely grateful you pointed out my error, it's these little things that people judge us on when we write emails and technical documents.

Please excuse me now though, the Englishman in me demands that I hide under the bed out of shame for the next 26 hours.


I certainly did not expect a response like yours. (Yes I would have totally taken this sarcastically!) I am glad I helped you learn something :)


You hack into a bunch of servers, PCs and smartphones. You don't even need that many with modern connections.


Yup. Cable subscribers in the USA alone have enough aggregate bandwidth to knock Akamai totally offline if 620 G-bit is something they can't handle. My own uplink is 10 M-bit. 100,000 compromised connections that speed is enough for T-bit scale attacks, and compromising 100,000 machines in the new world of "Idiots of Things" is a no-brainer given almost nobody working in the IoT space even gives a nod towards security.


> Terrible PR, and that mud will stick in tech circles. Akamai folds under pressure.

Definitely. The lesson I'd take from this is that Akamai isn't serious about DDOS protection.

For me, buying DDOS protection is something like buying insurance. I don't expect to need it, but if the worst happens, I expect them to stick with me. The way I measure insurance providers is by asking friends how it was when they had a claim.

It strikes me as especially bad that they're doing it in the moment. It'd be bad enough if they said, "Sorry, Brian, this is too big a distraction; you've got 90 days to find a new home." But that they're dropping him in the middle of an attack? That means I can't trust Akamai.


I had some friends who worked at Akamai. I always got the impression that they were very serious about addressing anything which could disrupt service, including DDoS.


Yup. And it's those people I feel bad for. I'm sure I would have been one of the tech people saying, "We must not give in! Let's use this as incentive to keep upping our game. That's the only way we'll win in the long haul."


You're entirely ignoring the fact that there's no way Krebs could've possibly been paying Akamai enough to tank the attack.

>For me, buying DDOS protection is something like buying insurance. I don't expect to need it, but if the worst happens, I expect them to stick with me. The way I measure insurance providers is by asking friends how it was when they had a claim.

DDoS protection isn't insurance; Krebs gets attacked 24/7. Only an utter moron would be willing to sell Krebs DDoS insurance.

>That means I can't trust Akamai.

Which means nothing at all in a world without alternatives; the number of hosts capable of tanking attacks like that is two or fewer. But I get the impression you're not looking to spend hundreds of thousands of dollars a year on DDoS protection anyway.


> You're entirely ignoring the fact that there's no way Krebs could've possibly been paying Akamai enough to tank the attack.

They were hosting it pro bono. He never paid them enough to do anything. And yet...

> Only an utter moron would be willing to sell Krebs DDoS insurance.

But a smart person would cover him for free as a way of proving that they could handle the worst the DDoSsers gave out. To prove that they stick by their customers.

Most people who buy insurance never really use it. So what are they buying? A feeling of safety. Just think about the various insurance company slogans that come to mind.


But he didn't buy anything. We can't extrapolate what happens to paying customers from the experience of a non-paying customer.

That being said, I definitely agree with your thoughts on insurance.


> But he didn't buy anything. We can't extrapolate what happens to paying customers from the experience of a non-paying customer.

So, if he had paid one cent (thus being a paying customer), you could extrapolate?

I don't see how the price is in any way relevant here. They promised to protect him, and they failed to do so. Claiming afterwards that the premium was too low isn't the way this works.

Also, I doubt that it actually was "for free". He may not have paid in money, but likely in the form of (at the time positive) PR, for example.


You're right the actual amount of money is not relevant. What is relevant is that the contract he signed with them is not the contract paying customers sign when they do business with Akamai.

Since we have no idea what was in the contract this guy signed and it's all speculation, this discussion is totally vacuous and pointless.

> They promised to protect him, and they failed to do so.

How do you know what they promised? They could have promised protection, or they could just as well told the guy "hey, here is some free caching for you, m'kay? No strings attached". Hell, it's possible he didn't even sign anything, and there wasn't a contract at all!

If he were a regular paying customer, I would make the assumption that the contract he signed is likely the same, or similar to the contract I would potentially sign, and this would put Akamai in a very bad light to me.

Since the contract this guy signed is not the contract I would sign, I cannot rationally infer any information from this incident, good or bad.

If Akamai emails me and offers me some free service, then yes, this information would be valuable and relevant. Until that day, I can't make any use of this information.


You are missing the point which is that Akamai can't handle this DDoS.

Do potential customers care if Krebs is a paying customer or not? He went with them as they offer this service which apparently doesn't work as well as advertised.


There is no indication that Akamai can, or can't, handle the DDoS. The only information we have is that they are simply not willing to do it anymore for this particular customer. There is no indication that they won't do it because they lack the technical capacity to do it. Just as well they might not do it because this thing is financially disadvantageous to them.

As a potential paying customer, what they can and can't do is covered by their SLA, and that's all that matters. If they break their SLA they owe the customer compensation. This incident is irrelevant.

Of course I don't actually know what kind of SLA and indemnification Akamai provides. Maybe it's bad. Then after analysing the contracts I would make an informed decision. These things are what I use to make decisions, not random stories with no technical or business details on random blogs.


From what I've seen (quoted elsewhere in this thread) there isn't any significant penalty for Akamai if they are unable to mitigate, or choose to not mitigate, a DDoS attack. They might negotiate other terms, but I doubt it. DDoS mitigation is, by its very nature, a best-effort service, and reputation for not giving in to attackers is more important than any contract terms you're reasonably going to be able to get.


I haven't read anywhere that they failed. Isn't the timeline:

- Akamai mitigates attack just fine

- After the attack is over, Akamai does the cost/benefit analysis and decides to stop providing the free service.


The attack was ongoing when Akamai gave Brian Krebs 2 hours to find alternate hosting.

"The attack showed no signs of waning as the day wore on. Some indications suggest it may have grown stronger. At 4 pm, Akamai gave Krebs two hours' notice that it would no longer assume the considerable cost of defending KrebsOnSecurity."

http://arstechnica.com/security/2016/09/why-the-silencing-of...


They gave Krebs the service free of charge.


I mentioned that above, but I think this misses my point, which is:

If Akamai can't provide their service for free, then they shouldn't provide their service for free.

A CDN's whole business is resilience, which in this case makes them the bodyguard, not a bystander.

Whatever your opinion of Cloudflare, it seems clear to me that Matthew Prince keenly understands this, hence him reaching out and offering to step in and get Krebs back online.

tl;dr: If Akamai can't do the one job they exist to do in the face of an (albeit well-armed) assailant, then they're the problem, not Krebs.


"This was flagged to my attention and I've reviewed all the interactions between the author and our team [cloudflare]. The site in question was using the free version of CloudFlare's service. On February 2, 2013, the site came under a substantial Layer 7 DDoS attack. While we provide basic DDoS mitigation for all customers (even those on the Free CloudFlare plan), for the mitigation of large attacks a site needs at least the Business tier of CloudFlare's service. In an effort to keep the site online, our ops team enabled I'm Under Attack Mode, which is available for Free customers and enhances DDoS protection.

The attack continued and began to affect the performance of other CloudFlare customers, at which point we routed traffic to the site away from our network."

https://news.ycombinator.com/item?id=5214480


That was 3.5 years ago. CloudFlare's ability to deal with DDoS has changed substantially in that time and we deal with enormous attacks day in day out automatically.


So how would you feel about 600gbit+? I'm genuinely curious: would it be an interesting challenge or something to immediately avoid? As a business customer myself and a big fan of CF's general attitude to these sorts of things, I know I'd be very happy to see the blog post on it.


We've offered to help Brian Krebs out. We don't see any reason we could not handle an attack of that size. We've already seen gigantic attacks and have a very well developed automatic infrastructure for dealing all manner of attacks at different layers and an experienced 24/7 network team.


How do you feel about Krebs's criticism of CloudFlare for, in his view, sheltering the web presences of various DDoS-for-hire services? Just wondering what your response to his articles such as "Spreading the Disease and Selling the Cure" is. He seems to have taken some pretty strong anti-CloudFlare public positions.



I just want to take this opportunity to say that as a paying customer, this attitude towards your duty of keeping a site online, no matter whose it is, is exactly why we swear by CloudFlare and why almost no one else seems appealing in this field. I've watched your competition snuff out customers for being too controversial, while you guys just get it done. Thanks for this, it's really changed the shape of the internet in some ways.


Honestly I didn't know about this instance, but as another poster has mentioned, I don't believe this reflects where Cloudflare is today, and that's who Akamai is competing with.

On a side note, they really don't appear to be making a concerted effort to get out in front of this, which tells me that they either aren't aware of the reaction or don't think it's a big deal.

Either way, it just makes them look bad.


They gave him service for free and covered many, many smaller attacks, including the 20-100Gbps attacks he reported earlier this month. The fact that they decided not to cover one of the largest attacks ever documented pro bono isn't great but I don't think any of their clients would fail to understand the difference between a favor and a signed contract.


What would have been a bunch of positive earned media is now a flood of negative reaction and lingering questions that they buckled under the pressure.

The biggest DDOS ever, and Akamai dumps the client rather than defend it.

However they spin it, doesn't look good.


Exactly this.

Frank Leighton needs to make a statement about this, immediately, and he better pull the Rabbit of Caerbannog out of his hat.


I agree and I could see them reversing their stance now with the press around it. More than likely a middle manager approved the pro bono usage and once it got expensive that manager found him/herself in hot water over the resources being consumed. A few apologetic emails to Krebs from that middle manager, but saying he had to go, and boom we find ourselves here.


In an ideal world, they would not cave... but someone has to pay the giant bill. Who would chip in?

The site must be constantly under attack, and it must cost Akamai a fortune in real $$ all year long. And, when their service is performing well, nobody is talking about it, so there must be very little positive PR.

If someone is ready to foot a 620 Gbps bandwidth bill all year long, I am pretty sure Akamai will be more than happy and able to scale up further.

Too bad there is not always a cleaner/smarter solution than pure bandwidth and $$ to fight those attacks.


As someone who actually pays for DDOS protection, this doesn't alter my opinion of Akamai.

I already know that Akamai is expensive and not particularly good at it. They are a CDN who is trying to make some money on the side with unused bandwidth, and will protect their CDN business if it comes down to it.

They don't invest in active defense. In fact, I know folks who actually had to block malware being served from Akamai-owned IPs!

I suspect that is why they gave service to Krebs for free in the first place--they need the marketing.


I can almost guarantee CloudFlare will offer to step into the breach.



