Update Regarding DDoS Event Against Dyn Managed DNS on Oct 21 (dynstatus.com)
71 points by sajal83 on Oct 22, 2016 | 84 comments

I am sure the DDoS problem is something that the free market will sort out. The individual players will make it costly for the other players to send problems their way. I expect a chain of "charge the next node for resource usage" to evolve.

Whether this chain will go all the way to the end user, I don't know. If it does, then end users will probably start using routers that feature restrictions / monitoring / control of outbound traffic. So you know your coffee machine will not be able to use up too many resources. Just like you have a fuse for the power line.

Sometimes the free market needs a little help, due to the tragedy of the commons [1].

That's why the FCC has regulations on interference created by electronic devices. Reducing interference costs money. The free market punishes extra costs. The situation of networked devices with security-impacting bad features is analogous.

[1] https://en.wikipedia.org/wiki/Tragedy_of_the_commons

That's assuming that the free market doesn't "solve" things by ISPs and media companies merging into 2-3 mega-conglomerate verticals whose siloed content is only accessible via DRM on their own networks.

Wouldn't it just lead to higher internet plan prices? I can also imagine hackers DDoSing themselves to collect the $$ from the nodes and unfortunate users. And what about international traffic? The same charge for everyone, US citizens and Nigerians? I don't think it can work.

You don't know how DDoS works.

The coffee machine might only have to send one packet per minute. The combined force of 100 million coffee machines distributed around the globe sending 1 packet per minute each to the same destination means that, in your suggested model, the end user traffic sources barely incur cost at all, while the target and every unrelated third party who happens to be unfortunate enough to be "close" to them are bankrupted.

Congratulations, you've just turned a temporary network DDoS into a permanent financial one.
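The asymmetry described above is easy to put numbers on. A back-of-the-envelope sketch, where the packet size is an assumed figure:

```python
# Per-source cost vs. aggregate load at the victim, for the coffee-machine
# scenario above. All numbers are illustrative assumptions.

SOURCES = 100_000_000       # coffee machines distributed around the globe
PKTS_PER_MIN_EACH = 1       # one packet per minute per device
PKT_BYTES = 100             # assumed average packet size

# Each source's rate is negligible...
per_source_pps = PKTS_PER_MIN_EACH / 60

# ...but the victim sees the sum of all of them:
agg_pps = SOURCES * PKTS_PER_MIN_EACH / 60
agg_bps = agg_pps * PKT_BYTES * 8

print(f"per-source: {per_source_pps:.4f} pkt/s")
print(f"at victim:  {agg_pps:,.0f} pkt/s, {agg_bps / 1e9:.2f} Gbit/s")
```

Under a "charge the sender" model, each source pays for a fraction of a packet per second while the victim's side absorbs gigabits, which is the financial asymmetry the comment points out.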

I agree with you. IoT devices inside a SOHO should communicate externally through a proxy gateway device. IoT devices should only communicate peer-to-peer on the LAN, with strong restrictions on (or no) WAN access. Any updates should be delivered via a proxy device with proper hardening, rather than by a normal IoT device.

The router could provide password-protected web proxy to access the LAN IOT webserver. Then you've reduced the attack surface to the router.

It almost seems like we need some protocol extensions:

1) A standard auth protocol (not just web-based) for the router to protect the local computers. Some kind of user-and-software-friendly firewall. This could even extend to game servers and whatnot: what if the "shared password" for connecting to a hosted game server were implemented at the router protocol level?

2) DHCP registration on a network should require a name, one that the user was prompted to provide at some point. No more identifying devices on your router by IP or MAC. You already need to provide a name for SMB or DNS, so just finish the job and name all DHCP clients. Possibly this should work with DNS in some way.

This way user-friendly logging information can be presented to the user. Without that, routers don't have the critical information needed to tell the user which device is screwing up.

Edit: Google tells me this is already a thing... Sadly, good conformance on providing meaningful DHCP client names won't happen unless the FCC et al start testing IP-enabled devices for it.
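For illustration, here is roughly what "name all DHCP clients" looks like with the hostname data (DHCP option 12) routers already collect. The lease format below imitates dnsmasq's lease file, and the entries are invented:

```python
# Sketch: surface user-friendly device names from DHCP leases so a router
# UI can say "coffee-machine is flooding" instead of showing a bare MAC.
# Each line mimics dnsmasq's lease format: expiry, MAC, IP, hostname,
# client-id. A hostname of "*" means the client never supplied option 12,
# which is exactly the conformance gap complained about above.

LEASES = """\
1477100000 a4:77:33:01:02:03 192.168.1.23 coffee-machine 01:a4:77:33:01:02:03
1477100042 b8:27:eb:aa:bb:cc 192.168.1.57 * *
"""

def parse_leases(text):
    devices = {}
    for line in text.splitlines():
        expiry, mac, ip, hostname, _client_id = line.split()
        devices[ip] = hostname if hostname != "*" else f"unnamed ({mac})"
    return devices

for ip, name in parse_leases(LEASES).items():
    print(ip, "->", name)
```

A router that logged "coffee-machine sent 1M packets to X" would give users the critical information the comment says is missing today.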

Vodafone is building a lowband/narrowband WAN that could/should be used to help fix an impending Botpocalypse type of thing, if it's implemented with such a goal in mind: http://www.theregister.co.uk/2016/10/20/vodafone_nb_iot_roll...

It's not Vodafone specific, it's really a generic property of cellular IoT.

Any 3GPP technology (for IoT, primarily 2G and LTE) can provide private PDN connectivity, where the device is not put on the Internet but on a private LAN. In this case, the device will only talk to friendly servers and will not be directly accessible to random hackers. Now, it's not a mandatory feature either, so some devices can be put on the Internet and become reachable. Even in this case the situation is less dire: the devices are attached to a subscription, and it's in theory possible to insert some filtering. Also, operators now often require that devices support over-the-air (OTA) software updates, which allows fixing vulnerabilities.
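The private PDN idea boils down to a destination allowlist enforced by the network rather than the device. A minimal sketch of the policy check, with hypothetical address ranges:

```python
# Sketch of private-PDN-style egress policy: the device may only reach an
# allowlisted set of friendly servers, so a compromised unit cannot join
# a DDoS against arbitrary targets. Ranges here are documentation
# addresses used purely as examples.
import ipaddress

FRIENDLY_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # vendor's servers

def egress_allowed(dst: str) -> bool:
    """Return True only if dst falls inside a friendly network."""
    ip = ipaddress.ip_address(dst)
    return any(ip in net for net in FRIENDLY_NETS)

print(egress_allowed("203.0.113.10"))   # vendor update server: allowed
print(egress_allowed("198.51.100.7"))   # random Internet host: dropped
```

In the cellular case this filtering lives in the operator's packet gateway, not on the (possibly compromised) device, which is what makes it effective.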

Now with 2G it's possible to have rogue base stations to hack devices. But this is a local attack, so not usable for this kind of massive DDoS. And this hole is closed in LTE, where there is mutual authentication (device authenticates the network too).

So yes, I believe cellular IoT should be safer. The private PDN feature is very simple and very effective. Now, we'll have to see the prices for the new lower-cost IoT LTE categories: CatM1 and NB1.

CatM1 will be the first to appear in the US. It's rather versatile: it can handle low-throughput data and will handle VoLTE in a second step. NB1 will appear first in Europe. It's more streamlined, but it's really for message-based applications (sending a message now and then) and data only.

Don't forget that DDoS is a market too, and not the smallest one.

I am not sure why this is being downvoted. I think it's an interesting perspective. After all, it's in pretty much every internet user's and provider's interest to have a working internet.

I think it's being downvoted because it's not a great strategy to address a distributed denial of service. In a DDoS, you have a high number of attack sources, none of them using a great amount of bandwidth on their own.

The bandwidth use isn't notable until you get pretty close to the destination. Thus, the costs are (mostly) borne by the victim. Basically it doesn't put the pain where it needs to go.

If the transaction cost can become insignificantly low, then money could flow in the direction of the victim. Each attack source could end up making a small contribution to the victim's bandwidth bill.

I'm not sure economic incentives will lead to this happening. There's also the difficulty in attributing traffic to the person who requested it ("is this a request, so we should bill the packet source, or is this a reply, so we should bill the packet destination?").

But in theory, I don't see the DDoS pattern (a high number of attack sources) making any difference to this proposal.

But wouldn't this make the matter worse?

Besides all the complications associated with billing, wouldn't this create more incentives for those controlling the botnets to use them in creative ways to actually make money from traffic?

Are you suggesting that people should get paid for receiving traffic?

I believe this is what TekMol suggested when he said that the "individual players will make it costly for the other players to send problems their way".

Fuses (circuit breakers actually) are required by code.

The free market will solve this problem with an existing and efficient tool.

Tort law.

A few wins in court will do the trick. Here is how.

A victim of a DDoS attack sues manufacturers, distributors, and retailers of that product for selling a defective product--the IoT device used in the IoT attack.

As soon as there is a win, the product disappears. Distributors and retailers must now price in the externality of the risk created by the product. It is far easier to take the product off the shelf. Therefore the manufacturer must either make a better product or die.

We don't need government regulation. We just need time. The legal system will do what it is designed to do: assign economic consequences to the right parties.

You say this as if it is just one device used in these attacks. And as if it will be easy to stroll over to the owner's location and determine the supply chain of that one device. And as if it will be easy to collect from the Chinese manufacturer who probably folded last week and reopened under a different name for completely different reasons.

This is why tort lawyers sue distributors and retailers in the USA. They are here and they have insurance.

Once there is a court decision, a rational seller (oh, Amazon hypothetically) will understand that selling fly-by-night small manufacturer items is fraught with peril. The offending items disappear from the marketplace.

Your response is still completely ignoring the problem of identifying the offending devices. When all you have to go on is a spoofable, possibly dynamic IP address, and the device is probably behind NAT anyway, good luck tracking it down or knowing what it is in enough cases to make more than a small dent in the problem. Then even once you know what it is, that doesn't tell you who the owner is. Then even if you find the owner, that doesn't tell you the retailer or the distributor. By the time you find the distributor, the device will be obsolete anyway and replaced by the next generation of bad devices.

Your way may have worked in the past when the harm was felt directly by the consumer who knew who the retailer was. But now, the harm is inflicted on remote parties who have no inkling about the source of the harmful devices. So your argument has a huge hole in it.

More likely the marketplace itself moves overseas. People use Amazon because they sell the stuff they want. If they stop selling it, the buyers go somewhere else. Amazon is a website. It could as easily be a website hosted out of China.

Also, it sounds like you don't mind if the effect of your proposal is to destroy things like Etsy. And eBay.

The legal system is less brain-dead than you imagine. :-) This is not the first time in history that a plague of imported items has caused a problem.

Etsy and eBay will survive. Ford survived the Exploding Pinto. Firestone survived its tire debacle. And we are all the safer for it.

> Etsy and eBay will survive. Ford survived the Exploding Pinto. Firestone survived its tire debacle. And we are all the safer for it.

Ford and Firestone aren't retailers, they're manufacturers, and they were held responsible for their own mistakes, not the mistakes of third parties.

There is no reasonable way for online retailers to evaluate product safety of millions of small batch third party products. Either they sell them without evaluating them or they don't sell them. Imposing liability on them is exactly how you get them to not sell them, but then we can't have Etsy or eBay.

> This is not the first time in history that a plague of imported items causes a problem.

It seems like the first time the problem has happened in this particular way. Historically importing was a large-scale operation done in bulk with homogeneous products, so the importer knew what they were doing and had deep pockets. Today you can cost-effectively get a 99 cent piece of electronics shipped directly from a one-person shop in China. Either you shut down the entire idea of that, and then things are going to cost a lot more than they do now, or we need a different approach.

I'm starting to think you might have some issues with Amazon. Would you care to elaborate?

Um, AliExpress, heard of it?

On top of that, that isn't generally how liability works. If you make a crappy garage door that anyone can open, the people who bought one might be able to require you to fix it, or possibly make claims for losses if things are stolen. But when some vandals steal spray paint and sledge hammers and smash up the neighborhood, the vandals are the ones responsible for smashing up the neighborhood.

If it is foreseeable that your defective product would be used to harm another person, you can be held liable. Yes, the intervening actor is behaving illegally. That doesn't matter. You still bear your fair share of responsibility.

> If it is foreseeable that your defective product would be used to harm another person, you can be held liable. Yes, the intervening actor is behaving illegally. That doesn't matter. You still bear your fair share of responsibility.

That's the point. The unlawful intervening actor has 99% of the responsibility. The security vulnerability doesn't even give them anything inherently malicious or dangerous -- anyone can buy bandwidth and IP addresses on the open market. All the vulnerability does is give them the opportunity to take them without paying.

The traditional rationale for imposing liability on the "wrong person" like that is when the actually responsible party can't be found and someone who could easily do something to prevent it can be found. But the manufacturers of these things can't be found either, and the retailers can't easily do anything about it.

That's a common, if naive, misconception. How exactly is a plaintiff going to enforce a judgment against a manufacturer overseas, or against an Internet-enabled thermostat?

The US legal system, at least, was designed for quite a few things, but enforcing Econ 101 was not one of them.

The lawsuit will work because US distributors and retailers (hi Amazon) have joint and several liability for damage caused by defective products they sell.

You don't have to chase small anonymous overseas manufacturers. Distributors, acting in their own self-interest, choose to not sell the offending products.

The lawsuit will never happen, because the consumer isn't the one feeling the harm in this case.

The party feeling the harm doesn't know what the device was, where it was, whether it is still on any network, whether it is behind NAT, who owns the device, who bought the device, who distributed the device, who the retailer was, or what the attributes of the device are. So they will have a hell of a time filing a lawsuit, or, even if they try, getting standing to sue anybody.

Am I the only one who finds the notion that "the free market will solve this problem" by using the court system (an entity of the state operating on laws passed by the government) at odds?

And whose judgements are enforced by the government monopoly on violence!

Libertarians have this curious tendency to define institutions as "government" or "not-government" however it suits them best...

Even if you assume an overwhelming court victory (by settlement or judgment), there will still inevitably be tens of millions of vulnerable devices out there. I'm betting most of these products don't have anything in the way of automated remote patching.

And if these bot herders are smart, they're changing the vulnerable configurations or adding rudimentary ACLs so that they can keep the bots to themselves. White hat IoT-fixing scripts may not be too effective.

But yes, court action is probably necessary to prevent the problem from getting much worse than it already is.

UPDATE. I am surprised that no one proposed the obvious counter argument to my "Tort law über alles" position.

The world is a big place. Even if we successfully eliminate bad devices from the U.S. market, there will be millions of malformed devices everywhere else. They can be herded into a rampaging DDoS horde. The U.S. tort system is powerless to prevent that from happening.

Nothing is easy. We should remember the lesson of Chesterton's fence.

I didn't raise that, because I felt it was a red herring.

More importantly, with regulation we would at least have a prayer of making a dent. With lawsuits, not a prayer.

It's hard to sue when you don't know what the device is, where it is in the world, whether its IP address is real or not, whether it's behind NAT, who the owner is, who bought it, who distributed it, who the retailer is, who their supplier is, who the manufacturer is, and whether it was even a real device or not. Regulation would be way more effective, relatively speaking.

Who would be the regulator for webcams sold in Uzbekistan? Who would enforce those regulations in Ukraine?

I'm not saying regulation is bad. And we have it already. Look at that alphabet soup of little logos in the back of every electronic device you own.

I'm just saying regulation is an imperfect solution. Tort law and economic incentives are an imperfect solution. And I am wary of government solutions, even if they are the better choice.

That's not how it works. They don't design shitty products specifically for sale in Uzbekistan. Off-brand crappy products are typically just copied from the designs of mainstream products, and the features baked in for the large markets (US) get automatically included. Oftentimes this happens with secret "third" shifts: the factory reports two shifts per day to the main customer that supplied a design for manufacture, and the third shift is run illicitly, solely for the profit of the factory management / owners. These illicit products are then sold to places where enforcement is sketchy. They don't retool the factory for the third shift just to make the stuff crappier.

Yes there are still some crappy products that fall through the cracks. But the rising tide would eventually help lift all boats.

Saying any solution is imperfect is just a waste of breath. Of course that's the case. The point is to move forward with improvements regardless, because improvements, guess what, improve things.

Hey, can one sue the manufacturer of a stolen car that was used to rob and kill people (and returned to the owner afterwards, if you wish)? Does it matter if that car model was easier or harder to hijack?

I sell a defective telephone pole. A vandal comes along and pushes it. The pole falls on your house. I am liable for damages to your house. The vandal is, too.

Pardon my ignorance, but why don't companies run their own nameservers?

I get why you don't want to run email - it's highly reputation driven. But as far as I can tell, running nameservers is no harder than running webservers or DB servers. HA is potentially even easier, because the system was designed that way from day zero.

I'm not suggesting I'd run one for my personal website, but twitter and github are already managing distributed networks for this. What are the services Dyn and others provide that are so invaluable?

The complex DNS products exist for a reason. For one they can do really good geo-routing. This makes your services go faster for a global audience.

Then, some of the big companies use multiple CDNs. You might want to use one CDN provider in Asia and another in Europe. Furthermore, you may want to select a CDN not only on the geo-routing dimension, but on arbitrary criteria. Imagine that you had a fixed budget for, say, CloudFront, and wanted to route as much traffic to them as you could, but never exceed your budget. Modern DNS services allow all these complex scenarios.
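A budget-capped selection policy like the one just described could be sketched as follows. Provider names and costs are invented, and real DNS services implement this inside the resolver rather than in application code:

```python
# Sketch: prefer one CDN until a spend ceiling is reached, then spill the
# remaining traffic to a second provider. Hypothetical names and prices.
class BudgetRouter:
    def __init__(self, preferred, fallback, budget, cost_per_req):
        self.preferred, self.fallback = preferred, fallback
        self.budget, self.cost = budget, cost_per_req
        self.spent = 0.0

    def route(self):
        # Route to the preferred CDN only while it still fits the budget.
        if self.spent + self.cost <= self.budget:
            self.spent += self.cost
            return self.preferred
        return self.fallback

r = BudgetRouter("cloudfront", "other-cdn", budget=1.0, cost_per_req=0.4)
print([r.route() for _ in range(4)])
# first two requests fit the budget; the rest spill over
```

In a managed DNS product this shows up as a weighted or rule-based record set: the same decision, but answered at resolution time.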

Furthermore, running your own DNS infrastructure is far from trivial these days. In May 2015 I gave a talk on defending DNS from DDoS:


Draw your own conclusions, but I'd say that running your own DNS makes you _more_ exposed to DDoS and extortion than using someone else's DNS infrastructure.

I've run my own dns infrastructure for a medium sized company. It could get attacked. And you better be sure your providers are OK with the bandwidth usage or they will shut you down. You could be down for hours. If you are targeted it could be very bad.

Amazon and providers like Dyn have anycast, so routing will normally be faster than what most companies would want to spend to achieve on their own DNS systems. And they can absorb most large attacks. Not to mention that uptime for Route 53 is usually near 100%, and it's pretty cheap. I don't think you could build something cheaper yourself that offered similar features to Route 53.

Getting good, consistent, well-routed, fast and secure DNS is harder than you'd think. Dyn typically cite speed as the main selling point for their DNS product; they achieve this through a large geographic distribution of domain name servers plus anycast. Many hosts (like, say, DigitalOcean) run their own DNS but use something like CloudFlare Virtual DNS on top. Personally I was surprised so many large sites trusted Dyn; Route 53 is a more robust product for production and scale. In the past, I've seen hosting providers switch to Dyn, give them load, cripple them, and have to scramble to revert away. I'm not at all surprised this happened, even given the uptick in botnet traffic globally.

Route 53 ain't all that. We approached them about handling our customers' domains, and they said no way. They didn't have the capacity. Granted, this was 2 years ago, but Dyn has a much better reputation (still) than Route 53.

It's not that hard, but DynDNS can offer a much higher-performance, more reliable, and more advanced service. They use anycast with a lot more servers than it's practical for each company to manage. They also offer advanced georouting, failover, etc.

In the face of a DDoS I'm not sure a custom nameserver network would do much better than a company who does that for a living. The only advantage is that attacks would have to target individual services (which did happen other times).

Latency, Ops, Cost, specialised features like latency based routing (nearest datacentre to the user making a request).

It comes down to the same reasons as using the cloud or a CDN: why spend more running it yourself (staff, equipment, etc.) instead of paying someone whose job it is to run that specific piece of software to the absolute best of their ability?

It's just not a core competency of almost all companies.

The simple answer is that running DNS servers at scale is as hard as running anything else at scale. The cost of having someone else do it for you is often much lower than doing it yourself.

I have an opposite question: why does anyone even need to run their own non-cache nameserver?

My current understanding of DNS infra is: we have root nameservers which take record change requests, apply them, and propagate them to other listening root nameservers and cache nameservers. The DNS root nameservers should be extremely DDoS resilient, more so than any other kind of server. Considering millions of dollars get spent per year on domain keeping, it's fair to expect that too.

This attack looks like another probing into critical internet infrastructure Bruce Schneier had talked about.

Who's next?

You suggest it's probing. Then what would the full-scale attack look like? Also, how does probing help the perpetrators? Doesn't it lead to better defenses in the future?

These attacks seem to be getting sophisticated faster than defenses are being thrown up. It might lead to better defenses in 5 or 10 years, but the Internet at large is built out of infrastructure that is difficult and slow to upgrade. And is everyone scrambling for a solution? Not really... Most are taking the mentality "well, sucks to be Dyn, but it's not affecting me so I don't need to respond".

Especially because the vulnerabilities being exploited right now seem to be systemic. It's not like patching a zero day. "Uh... we've got 45,000,000 un-upgradeable IoT devices from dozens of different manufacturers executing a DoS attack." You can't fix that the same way you fix a privilege escalation bug in the Linux kernel.

If the attackers end up finding the Internet's equivalent to a jugular, there might not be much we can do about it. BGP isn't going to be replaced in the next 10 years, it's here whether we like it or not. DNS is also not going to be replaced in the next 10 years. And neither are the major centralized internet exchange points. Any vulnerabilities an attacker can find related to the fundamental design of those things are going to remain vulnerabilities for many years. If the attacker can get good at exploiting them, we are in trouble.

Not sure what the disagreement is; it seems logical. The attack knocked out service, and presumably the attackers recorded the failure threshold and mitigation attempt. Then they recalibrated to confirm/get more data and pushed another high-volume attack.

The fact that a number of high-profile sites decided to put all their DNS eggs in a single, vulnerable basket makes neither Dyn "critical internet infrastructure" nor this volumetric DDoS attack a "probe".

Nothing seems to be loading on YouTube and Hulu this morning, FWIW.

http://downdetector.com/status/hulu http://downdetector.com/status/youtube

YT is fine for me here.

The Sorcerer's Apprentice segment of Fantasia was right! All the household objects will rise and overthrow their masters!

Is there any info on what volume (how much bandwidth) the DDoS attacks were?

Curious as well. I didn't see any in this status update nor the original.

DNS seems ripe for revolution!

Yesterday it felt obvious that we are treating DNS data as too ephemeral. I am not intimately familiar with the implementations in BIND and others, but it seems like when we hit the TTL we just throw away the data. Usually, that works fine. But, in the case of the origin servers not responding, yesterday I was wishing that it would just give me back the stale data rather than giving me nothing.

I'll admit the impact on me was somewhat limited. Around 10am Mountain I was trying to install some Atom.io modules and couldn't reach that site or a github download URL. I had some success switching resolvers (mine was not answering the names).

Using a stale cached result probably wouldn't have helped for atom.io, though; I hadn't been there in a while, and this was querying my own local name servers. Do I want my name servers keeping weeks-old stale data around? Probably not in RAM, and saving old names to disk sounds like it would require a lot of IOPS for a big provider. But I do know I'd visited the sites I was trying to hit within the last few weeks; since I couldn't reach the authoritative servers, it would have been nice to try the last IP I had for them.

Of course, I ran into this about an hour after I rebooted my entire dev/staging infrastructure to fix the Linux kernel privilege escalation issue, so my caches were cold.

Sure would be nice if my server could "ask around" if it can't talk to an authoritative server. "Hey Google, hey Comcast, hey Level-3, do you know this name?" That's effectively what I did by changing my resolv.conf. But if you start asking around too widely, you probably want some signature to verify the data you are getting.
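The "serve stale data when the authoritatives are down" behavior wished for above was later standardized as serve-stale (RFC 8767). A minimal sketch of the idea, where the lookup function is a stand-in for a real authoritative query:

```python
# Sketch of a serve-stale resolver cache: keep expired records around,
# and if the authoritative lookup fails, answer with the stale copy
# instead of failing outright.
import time

class StaleCache:
    def __init__(self):
        self.store = {}  # name -> (ip, expires_at)

    def resolve(self, name, lookup, ttl=300):
        entry = self.store.get(name)
        if entry and entry[1] > time.time():
            return entry[0]                      # fresh cache hit
        try:
            ip = lookup(name)                    # normal authoritative query
            self.store[name] = (ip, time.time() + ttl)
            return ip
        except OSError:
            if entry:
                return entry[0]                  # authoritatives down: serve stale
            raise                                # nothing cached: hard failure

cache = StaleCache()
cache.resolve("atom.io", lambda n: "192.0.2.1", ttl=0)  # populate, instantly stale

def down(_name):
    raise OSError("authoritative servers unreachable")

print(cache.resolve("atom.io", down))  # answers from stale data
```

The name and IP are invented; the point is only the fallback order: fresh cache, then authoritative, then stale, then failure.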

Seems like the new norm might be listing authoritative DNS servers from multiple big providers (Dyn and Route 53) and having to keep them in sync? Then you lose some of the advanced features...

Funny aside: one of the sites I run uses Distil in front of it to protect against content scrapers. Months ago I was working with their support to get a health checker set up in Route 53 to test the full path through Distil and fail over to our backup site if anything on the primary path didn't work. Distil assured me that their services were so resilient that we shouldn't worry about them being down. Guess what the only part of our infrastructure was that was impacted by this? :-)

So is it retaliation for cutting Assange's internet, retaliation for threatening Russia with a cyber attack or both?

So I'm wondering, what are the implications of this?

Someone is controlling a powerful enough botnet to do this.

How powerful is it, really? What else can it do? Was this just a message, or a test? Or both?

What would happen if they would point it to google's nameservers ?

What should we expect next ?

It's complicated, because the internet is complicated (and I'm no expert). It's not good, but it's not insurmountable. The big issues today are: lots of network-attached compute, lots of types of traffic, and a highly distributed network. There are numerous ways of mitigating DDoS; today a lot of it is via BGP route announcement [1], although we've seen folks using BGP in questionable ways to mitigate DDoS recently [2]. As more and more of the internet becomes software defined (like SD-WAN seeing large global rollout) [3], more granular but non-disruptive control will be enabled. To answer your question about Google: their DNS is probably resilient enough to deal with a pretty huge attack, but who knows really. In my dream world we get some good neural networks built and deployed to the edge, watching for unusual traffic patterns and disrupting them. It used to be that the biggest pipes win; I don't know that that will continue to exclusively be the case.

[1] http://www.enterprisenetworkingplanet.com/netsp/article.php/... [2] https://www.youtube.com/watch?v=LFJzu0AFDpU (Dyn Engineer gave the talk) [3] http://www.rcrwireless.com/20160408/telecom-software/using-s...

> we get some good neural networks built and deployed to the edge watching for unusual traffic patterns and disrupt them

I was thinking about this as well. It's probably under active development right now, if not already live for some.

From this krebsonsecurity.com post: https://krebsonsecurity.com/2016/10/source-code-for-iot-botn...

According to research from security firm Level3 Communications, the Bashlight botnet currently is responsible for enslaving nearly a million IoT devices and is in direct competition with botnets based on Mirai. “Both [are] going after the same IoT device exposure and, in a lot of cases, the same devices,”

So, roughly 1 million IoT devices. And the source code for Mirai is freely available, so there's a DIY kit for anyone to use.

Did they mitigate the attack, or did the attack stop? I'm asking because the update lacks details about said mitigation.

you can watch BGP routes changing (as we speak) here https://stat.ripe.net/widget/bgplay#w.resource=

I think it's about time they updated http://dyn.com/ddos/

It was an unprecedented attack, sure, but I'm not sure how their sales guys are going to spin this.

Did Dyn mitigate the attack, or did it stop by itself?

I'm so looking forward at IPv6, the death of NAT, and billions of IoT devices with all ports exposed to the world :-)

Aren't most IoT devices behind a router and thus not directly exposed to the internet (routers excepted)?

This part of these attacks confuses me.

Compromised routers can be used to compromise devices behind it. Also many devices (like IP cameras) usually have port forwarding to allow the users to access it from outside.

This is an interesting approach to get past NAT: https://thehackerblog.com/sonar-a-framework-for-scanning-and...

Many devices use UPnP to automatically punch a hole through the NAT and expose their ports to the world.

Pretty much every router is sold with UPnP turned on.

Many IOT devices use UPnP to open their interface to the world.
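Concretely, the UPnP hole-punch is a single unauthenticated SOAP call to the router's WANIPConnection service. The values below (ports, LAN address, description) are illustrative:

```python
# What UPnP hole-punching looks like on the wire: the device POSTs an
# AddPortMapping SOAP request to the router, and the router forwards a
# WAN port to it -- no user action, no authentication. Field values are
# made up for illustration.
SOAP_BODY = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>8080</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>80</NewInternalPort>
      <NewInternalClient>192.168.1.23</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>ip-camera web ui</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

# Once the router accepts this, anyone on the Internet can reach the
# camera's web server at <router WAN IP>:8080.
print("AddPortMapping" in SOAP_BODY)
```

This is why "the device is behind NAT" offers little protection: the device itself asks the router to expose it.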

NAT != stateful firewall.

Most CPEs running IPv6 will be following RFC 6092: basically, everything inbound is blocked apart from ICMPv6.

> Most CPEs running IPv6 will be following RFC 6092.

It's pretty naive to think that any CPE will be following any kind of norm or rule.


A brave new world!

