Why can't a list be made of the addresses of all the devices in the world of the type used in the attack, and all packets from any address on that list be dropped, at least for the duration of the attack?
If such lists were built ahead of time, they could be switched on rapidly. Is anything like this done?
You're describing basically an IP-based version of the Spamhaus blacklist, but for general HTTP or TCP traffic.
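Right, and the lookup side of that is trivial; the hard part is building and maintaining the list, which is what the replies below get into. A minimal sketch in Python just to make the idea concrete (the file name and addresses are invented, and a real deployment would live in router ACLs or BGP blackholing rather than a userspace script):

```python
import ipaddress

def load_blocklist(path):
    """Load one IP address per line into a set for fast membership checks."""
    with open(path) as f:
        return {ipaddress.ip_address(line.strip()) for line in f if line.strip()}

def should_drop(src_ip, blocklist):
    """Return True if a packet from src_ip should be discarded during the attack."""
    return ipaddress.ip_address(src_ip) in blocklist

# "mirai_sources.txt" is a hypothetical pre-built list of known-infected devices.
blocklist = load_blocklist("mirai_sources.txt")
print(should_drop("203.0.113.7", blocklist))  # True only if that address is listed
```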
I, for one, would be fine with an ordinary internet citizen losing access if they have a compromised device. I suspect this is how we will go: your home security cam was used in an attack, so now every single website you visit for XXXXXXXX days gives you a CAPTCHA.
I maintain the crucial element is informing people why they're getting that hassle. Add extra friction, but don't inhibit what they can do, since they're unable or unwilling to secure their devices.
Yes, this affects the internet-uneducated disproportionately. Yes, I think it is the responsibility of anyone with a broadband connection to understand the responsibilities that come with it.
No, I do not expect grandma to learn this. I expect her to deal with a crippled internet because she is not able to fix her pollution.
NAT: blocking one IoT device's address may mean blocking grandma's whole network (or her whole apartment complex's network), and the ISP doesn't want that support call.
Dynamic addresses: depending on the ISP, the IP can change quite frequently (daily), so keeping the list up to date to avoid collateral damage to the unlucky next recipient of a blacklisted IP is very difficult (see the expiry sketch after this comment).
Level 3 already admitted to having built a list of >500K static IPs of Mirai bots; they used it to... do NOTHING, because they are in the business of selling pipes, and DDoS makes for good business.
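To the dynamic-address point above (the sketch promised there): entries would need to expire quickly so that whoever is handed the IP next isn't punished for the previous subscriber's infected camera. A rough sketch of that bookkeeping, where the 24-hour TTL is an arbitrary assumption:

```python
import time

BLOCK_TTL = 24 * 3600  # assumed: an entry lapses after a day unless re-observed

blocked = {}  # ip (str) -> expiry timestamp

def block(ip):
    """(Re)block an address; seeing it attack again pushes the expiry out."""
    blocked[ip] = time.time() + BLOCK_TTL

def is_blocked(ip):
    """True only while the entry is fresh; stale entries are dropped on lookup."""
    expiry = blocked.get(ip)
    if expiry is None:
        return False
    if time.time() > expiry:
        del blocked[ip]  # the ISP may have reassigned this address by now
        return False
    return True
```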
In Germany, Deutsche Telekom (T-Online) used to block HTTP or SMTP traffic when they detected an infected connection sending spam emails. I don't know if they still do this.
I can understand the instinct to move, but I think the answer is to split your eggs among many baskets, not just pick a sturdier basket. I don't think the Dyn team is necessarily bad; it was an enormous assault.
Same. I use OpenDNS and I had no idea there was an attack happening until I asked a co-worker to review something on GitHub and they couldn't access it. SmartCache saved me a lot of hours that could have been lost Friday.
I'm not aware of all the technical intricacies here, but could this kind of attack be prevented by having local DNS caches in the OS itself, kind of like a dynamic hosts file?
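It wouldn't prevent the attack itself, but it would blunt the impact for anyone who had resolved the name recently. A toy sketch of the "dynamic hosts file" idea, using only the Python standard library; the one-day stale window is an arbitrary assumption, and as I understand it OpenDNS's SmartCache (mentioned above) does something like this at the resolver rather than in the OS:

```python
import socket
import time

_cache = {}           # hostname -> (ip, timestamp of last successful lookup)
STALE_OK_FOR = 86400  # assumed: serve a stale answer for up to a day

def resolve(hostname):
    """Resolve normally, but fall back to the last known answer if DNS is down."""
    try:
        ip = socket.gethostbyname(hostname)   # normal resolution path
        _cache[hostname] = (ip, time.time())  # refresh the local "dynamic hosts file"
        return ip
    except socket.gaierror:
        # Upstream DNS unreachable (e.g. during the Dyn outage): use the stale entry.
        if hostname in _cache:
            ip, ts = _cache[hostname]
            if time.time() - ts < STALE_OK_FOR:
                return ip
        raise

print(resolve("github.com"))
```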
It should be terrifying because the government's complete lack of ability to regulate anything has allowed the so-called "internet of things," more accurately called the "internet of corporate things that the user is locked out of," to develop into a national security threat.
I'm trying to understand your position. Are you saying this is partially (or fully) the government's fault because they didn't regulate the IoT device market to force it to be open source? And if they had done so this attack could have been avoided?
There's a very strong argument to be made that regulation is the only way to improve this situation. A negative externality like this is unlikely to correct itself. Cheap manufacturers will continue to save money by cutting security features, and unaware or price-driven consumers will reinforce that behavior. What else can we do?
Note: this isn't something the US can solve on its own. A lot of this traffic came from overseas. It needs a coordinated response.
The problem with most regulation is that it tends to be quite a bit behind the current trends.
You also need to account for the fact that these devices are going to be in service for years, maybe even decades, which means their security measures will become obsolete.
DDoS needs to be solved at the infrastructure level at this point; securing endpoint nodes is a game you are going to constantly lose.
The same backdoors that let the devices be owned as a botnet should be used to brick them. And the courts should hold that purchasers are entitled to refunds and damages. Your IoT refrigerator got bricked? You can sue for $500 worth of spoiled food. Encourage class action lawsuits and watch how fast this gets fixed.
You can build secure devices. All it takes is putting quality engineers in charge.
But paying quality engineers isn't fun and they won't work for idiot management. So as long as management doesn't have to pay for the cost of disasters they cause, nothing will be secure.
It's the same thing that happened at Hillary's State Department.
There are some basic "timeless" regulations that would make sense over an entire device lifespan. For example, requiring unique admin passwords, or an internet off switch.
Nor would this regulation have to come from government. The details could be delegated to a UL-like industry group.
Until someone figures out that the admin password is just a hash of the MAC address or the serial number of the device, or until some auth-bypass vulnerability affects 500,000 Wi-Fi-enabled lava lamps.
Also, while this attack did involve a botnet that used default credentials, there will be attacks that infect via RCE or some other unauthenticated vector, or that don't require a botnet at all: say, adding a multicast IP address to some Wi-Fi camera or telemetry device so that it sends its traffic to a victim of your choice.
In the end, you want to make sure your network and services are resilient to DDoS attacks; securing whatever endpoint happens to be the source of choice each time isn't that good a strategy.
A 0-day vulnerability in a specific firmware from a specific vendor is an entirely different issue than all vendors shipping their devices with admin:admin as the default credentials and Telnet open to the internet ;)
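For what it's worth, the "unique admin password" requirement mentioned upthread only helps if the password is actually unpredictable. A small sketch of the difference; the MAC-hash scheme is a made-up example of the bad pattern, not any specific vendor's:

```python
import hashlib
import secrets
import string

def bad_password(mac):
    """The pattern to avoid: anyone who knows the scheme and the MAC can recompute it."""
    return hashlib.md5(mac.encode()).hexdigest()[:8]

def good_password():
    """Generated per device at the factory from a CSPRNG and printed on the label."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(16))

print(bad_password("AA:BB:CC:DD:EE:01"))  # trivially recomputable by an attacker
print(good_password())                    # unique and unguessable per unit
```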
The problem with "IoT" (god I hate this term) is that it connects mass-produced, lowest-barrier-of-entry devices with full network stacks to pretty substantial bandwidth lines.
The difference between "residential" and "commercial" bandwidth is eroding; it's no longer orders of magnitude greater, even if you are pulling multiple 1 Gbit lines from your DC.
"IoT" are often not that purposely built they are going to share SoCs and likely firmware/software that will make 100,000's of units if not more vulnerable to compromise.
If you have, say, 1M Apple HomeKit-enabled light switches across Europe and the US, built by some low-cost OEM "insert_your_brand_name_here" manufacturer and vulnerable to a single exploit, you get the same problem.
And this problem will happen over and over and over again with different devices.
Regulation, yes, like making someone responsible for these devices (owners, network operators, manufacturers, anyone would do, really). The 'internet of corporate things that the user is locked out of' aspect of the GP's comment is a red herring.
A group called New World Hackers have claimed responsibility[1]. I read a bit that they stopped their attacks by 2pm EST, and that another group picked up some of the minor attacks later in the day. Anonymous was mentioned, but didn't claim it. I can't seem to track down that article at the moment... I'll keep looking.
Note: I'm not dismissing the validity of the concern. I'm only reporting that I didn't even know about it as the attack was happening. I'm sure others were much more severely affected.
I'm along the northwestern New Mexico / southwestern Colorado border and didn't notice a difference either. I'm not saying this shouldn't be a major concern. It's just that some areas are affected quite a bit more than others; it appears to mainly affect major metro/coastal regions.
Friday's massive DDoS attack barely registered on my day-to-day; the only annoyance was that GitHub was slightly slower than usual. It has been amusing, though, to see the hundreds of news articles desperately trying to sensationalize an event that simply didn't even matter.
Just because it didn't affect you directly doesn't mean it wasn't a big deal. I'm in Nashville and was not affected, but I'm not writing it off as a non-event. The worrying thing is that hacked IoT devices were militarized (just using that as a term, not saying this was a state actor, though it very well could have been).
Imagine if the hack occurs again, but is targeted at things that aren't just minor annoyances, and maybe happens on election night in the U.S. I mean, just earlier this year a repo being deleted from a package manager caused mass failures in applications across the world. A targeted attack on certain assets or dependencies could be very bad indeed. I would argue that GitHub being affected was more important than the other, consumer-facing services in this attack. This is the first of many similar attacks in the coming years.
I think you're right that losing access to GitHub is a serious problem. I'm worried that an attack like this could be used in conjunction with some zero-day. Imagine a bunch of distributed teams trying to deploy a fix for the next Rails zero-day YAML-parsing RCE with GitHub and Slack down.
It's not bullshit, though; the threat is real, and a sense of urgency needs to be stirred up among the public for action to be taken. It's not really about what happened on Friday, it's about the implications that come with it. With the projected growth of IoT, if the hardware continues to go unregulated, we will eventually see a distributed attack on an unimaginable scale that WILL be very damaging. This is one of many major security threats we are facing in the modern world.
For reference, I wasn't able to access any of the affected sites until around 5pm on the day of the attack, and I don't even live in a big city, though I do live on the east coast.
If you look at HN comments from posts about the attack while it was ongoing, you can see plenty of people who were also affected (for example, people looking for name servers that cache longer than the TTL allows, or for OpenDNS's SmartCache). I understand why you think "the media" sensationalizes things, but it is absolutely not in the business of saying things that are outright untrue, even for the sake of embellishment (look at what happened to Brian Williams).
DDoS attacks are script-kiddie bullshit. I bet whatever kids are behind it are absolutely loving the attention. It's completely sensationalized stupidity to say our entire infrastructure could be "taken out" this way, and wild speculation to call it a state-sponsored attack. If a state actually wants to take out some infrastructure, they have fucking bombs and missiles for that.
It mattered for people on the East Coast. Where I am (New York), we couldn't access GitHub, Twitter, PayPal, and many other services. To make matters worse, the AWS us-east region was configured to use Dyn, so many applications that depended on it (and on Heroku) were down. At my workplace, we had to reconfigure DNS for our own applications even though we didn't directly use Dyn. It really was a big deal.
I'm sure a lot of people were unscathed by this event, but it appears to be more of a warm-up than the main event. I think it demonstrates the likelihood of further attacks and the possibility that those could have far-reaching consequences. You're right that the title is sensational, but I do believe the threat is daunting.
West-coaster here. Twitter unreachable, GitHub unreachable, Box unreachable, SoundCloud unreachable (I'm a musician and saw a friend's band's big song premiere go stale). Tons more. Just because you didn't feel it doesn't mean others didn't.
East Coast here: it was a really big deal. Everything from streaming services to Okta/Office 365 logins was broken. All kinds of seemingly unrelated services were totally inaccessible for most of the work day, essentially draining productivity from the economy. Something bigger than this could shut down most of the economy, considering how many things rely on the Internet as crucial infrastructure.
Yes. The attack vector and coordination are very concerning. Just because it didn't impact you doesn't mean it wasn't significant. It could have been merely a feasibility test. Scaled up, this attack would be devastating.
It seems like it was a warning shot to show what they can do. Imagine the impact on the US if this had gone on for, what, days? Weeks? Is there any reason it couldn't continue indefinitely? It could be as devastating as a nuclear bomb detonation... One of the most profound effects would be the lack of trust everyone would then have in the services they rely on. The SaaS business model would become extinct overnight, and nothing important would be kept in the cloud. We'd all go back to desktop-first applications, and all the enormous investment in cloud technology would be vaporized. If you don't trust the reliability of the internet, why would you rely on AWS and the like? You wouldn't.
But perhaps this is a smarter approach: build software and services in the most reliable way possible, offline-enabled, without assuming a mostly available internet. Oh, and VoIP phone service? Gonzo. Where I work, that was the worst effect of the outage. The phones simply didn't work... and we're basically a sales company. Not a lot of sales were made that day...