The Mirai Botnet Is Proof the Security Industry Is Broken (appcanary.com)
279 points by phillmv on Nov 1, 2016 | 256 comments



As I said previously, someone needs to bring negligence suits against some IoT vendors, wholesalers, and retailers. Start with the retailers, like Amazon. They'll find the supply chain for you as they try to pass the buck. It worked with hoverboards.

There's a problem at the China end with crap low-end devices driving out the good ones. Here's a good example: solid state relays, useful little devices for safely switching AC power with a logic level signal. Look at this Fotek solid state relay on Amazon.[1] That's a counterfeit. Fake manufacturer name. Fake UL and CE marks. Here's UL's warning notice on counterfeit Fotek solid state relays, and how to recognize fakes.[2] There are lots of unhappy customers; the fake ones have been reported to overheat, melt, or stick in the ON condition. Every Fotek relay on Amazon that I can find is fake.

The fakes are real solid state relays with grossly exaggerated power ratings. For real ones, cost goes up with power. The fakes all cost about the same regardless of nameplate power rating. Here's an especially bad one: a "100 amp" version.[3] The real Fotek, in Taiwan, doesn't even make a 100 amp version in that form factor - the terminals aren't big enough for 100 amps.

The result is that nobody is selling legit solid state relays on Amazon. They exist; you can buy them through Digi-Key or Mouser. They cost about 2.5x the fake price. But Amazon has been totally conned. (The ones on eBay are fake, too.) Worse, if you're a legit solid state relay maker in China, you have a hard time selling. The counterfeits have pushed the price down too far.

Back to hoverboards. There are now UL-approved hoverboards. They don't catch fire. Heavy pressure on China suppliers worked. That needs to happen with insecure IoT devices.

[1] https://www.amazon.com/Frentaly-24V-380V-Solidstate-Arduino-... [2] http://www.ul.com/newsroom/publicnotices/ul-warns-of-solid-s... [3] https://www.amazon.com/Industrial-FOTEK-Protective-SSR-100DA...


someone needs to bring negligence suits against some IoT vendors, wholesalers, and retailers

IANAL, so I wonder if that would actually work. Especially since everyone always attaches the standard no-warranty disclaimers to software (and we're talking about vulnerabilities in the software on these devices).

Maybe we need to give the FCC power similar to the CPSC.. they can issue recalls of unsafe products (or in the case of the FCC, products that interfere with our communications infrastructure), and they can even have them stopped at the border by customs.

Losing the ability to sell your product in the US is a pretty powerful incentive to get it right


Also IANAL, but I don't think that no-warranty disclaimers work for commercial products. Some countries even specify mandatory warranties (e.g. Australia's Warranty Against Defects[1])

[1] http://www.accc.gov.au/business/treating-customers-fairly/of...


The one harmed in a DDOS attack is not the purchaser. It's a third party. They're not bound by any agreement with the manufacturer or retailer.


I'm not sure those disclaimers work. They usually say something like, "We disclaim all warranties, including fitness for a particular purpose"

And then you look on their marketing pages and call their sales people, and they tell you all the particular purposes for which their software is a perfect fit.


As someone who has dozens of these cheap SSRs, thanks for this. A lot of them don't even have English labels on them. I'll be sure to not use them in any critical situation or something that might be fire-prone.


Tear-down reports indicate that the big problem is way overrated current ratings. Real SSRs start at 5A, which isn't too expensive, and prices go up with the current rating. Fake SSRs start at 25A, and have maybe 10A components inside. Past 10A or so, you have to add a heat sink, which a lot of the fake vendors don't mention.

If you want cheap SSRs, it may be better to order them directly from, say, LIRRD in China.[1] They make solid-state and mechanical relays under their own name, and have UL certification in their own name. There's a minimum order (40 units), but they will send samples. The prices are about as good as the fakes.

[1] https://www.alibaba.com/product-detail/UL-ROHS-dc-to-ac-sing...


Counterfeit component risk and bad security for an assembled IoT device are two separate issues. Sourcing components on Amazon in the first place is ludicrous to me, to be honest, and I'd be shocked if any reputable EMS companies do it. As an aside, a couple years ago I saw some guys from NXP do a talk on component counterfeiting. People will apparently remove them from dead boards, shave a few microns of material off the top of their legitimate capacitors, change their reported value, and resell them. Pretty nuts.


"Sourcing components on Amazon in the first place is ludicrous to me, to be honest"

I'm a hobbyist, I have Amazon Prime, and I would like to use components in a project. What's ludicrous about that?


The "reputable EMS companies" part was an important qualifier. But, as the guy I originally replied to demonstrates, counterfeit risk is a lot higher. I order parts for my projects from Digi-Key.


It's fine as a hobbyist, but once you're making 1000+ then the only way Amazon (which doesn't do volume pricing I think?) can be cheaper is by selling you fakes.

As a hobbyist I buy stuff from ebay, amazon, and aliexpress all the time, but not for anything mains-powered or safety-critical.


I don't follow your comment. I'm not sure how you can claim that solid state relay is fake. Also, poor security practices in IoT devices and counterfeits sound like two completely different topics.

Let me play the devil's advocate:

How can you be sure [1] is fake? To me the picture looks like the real one reported in [2]. It is also 2.5x the price of the fake one reported in [2]. You also say there are lots of unhappy customers, but [1] has only one five-star review. If there were unhappy customers, wouldn't there be more reviews?

> They exist; you can buy them through Digi-Key or Mouser. They cost about 2.5x the fake price

Do you have a link by any chance to compare the prices? I didn't find any on either Digi-key or Mouser.

Also, how are solid state relays related to IoT devices bought in large quantities? Most people buying IoT devices aren't buying raw components, but a finished product instead (Hue, Nest, cameras, baby monitors, etc.)

Here your main point is against counterfeits, but the issue mentioned in the article is not about counterfeits, it's about bad security. Those IoT devices with low (or non-existent) security unfortunately aren't even fake ones.


How can you be sure [1] is fake? To me the picture looks like the real one reported in [2].

Read the UL warning notice. Note that the real ones have a bevel in the plastic frame outlining the product label (this prevents putting the label in upside down), while the fake ones do not, even though some have a corner cut on the label. They also say "Taiwan Made", not "Made in Taiwan".

Here's a 5 amp Omron solid state relay in the same form factor on Digi-Key.[1] $24.75. Digi-Key doesn't carry Fotek.

Here's a teardown of a counterfeit relay on Instructables.[2] The author had designed these relays into a larger system and was getting warranty returns of the entire product. It has components sized for maybe half the nameplate current rating. There are complaints on forums from people using these for 3D printer hot bed control, brewing control, freezer control, and "grow rooms". They're a popular way to get an Arduino to switch large AC loads. Under higher loads, some units have been observed to melt. Others failed, sometimes into the ON state.

[1] http://www.digikey.com/product-detail/en/omron-automation-an...

[2] http://www.instructables.com/id/The-inner-workings-of-Counte...


Animats said "There's a problem at the China end with crap low-end devices driving out the good ones"

That is the connection between the two topics. You go on Amazon and no one is selling quality. A lawsuit would kill the crap products and you are left with the certified / branded / or otherwise 'proven' good products.


That first one only has one review: "Sure." Five stars.


    Every Fotek relay on Amazon that I can find is fake.


A Nashville, TN, family are suing Amazon for $30 million following a hoverboard fire which destroyed their $1m home.

http://www.wtsp.com/money/family-sues-amazon-after-hoverboar...

The control points are rapidly becoming retail and payment systems. Suing Amazon, Walmart, Visa, MasterCard, Paypal, etc., for facilitating the commerce of counterfeit and manifestly harmful products seems to be the logical evolution.


Retailers have always been the first point of liability for products. In some jurisdictions it was hard to sue the manufacturer directly - after all, they're not the one you had a contract with.

Payment processors are another matter IMO, and should not be made into product police or underwriters.


It should have happened at least back at the Target breach, but the Corporate Liability lobby is very strong.


With IoT, this may catch the low-hanging fruit, sure. Negligence suits for poor defaults, fine. But then attackers will just evolve to the next-lowest fruit. Keep in mind that to some attackers, finding a software or hardware bug to exploit (and weaponising it), even in highly "secure" systems, is probably just a step or two beyond playing with default credential lists.

The author of this article compares the complexity of the Mirai code to that of the Morris worm. It's an apt comparison but not because the state of IoT has gotten worse -- rather it's because we haven't gotten that much better at securing our code, on the Internet at large, since 1988.

What I'm trying to say here is negligence suits for IoT will probably just bring a lot of negligence suits and not a more secure Internet. I believe they would only address the symptoms and not the root of the problem and at great risk to innovation.


Isn't the goal of security to remove the lowest hanging fruit and keep at it?


In one sense, sure. But IMO regulating the IoT "industry" in a general way is a bad idea because it will just shift the low-hanging fruit around some, while ultimately stifling innovation, which is what is needed for any deep, meaningful security to happen in the long term.


I cannot take seriously this idea that "innovation" will be stifled because people were told to actually give a shit about what they were doing. Honestly, if it means that fly-by-night groups aren't releasing their "innovations" out there, I'd consider it a pretty good trade.


Obviously the products out there that aren't giving a fuck are still selling. There isn't a market without buyers.


That's why we need the regulation in order to make them give a fuck.


What I'm ultimately arguing here is that even after the regulators step in and make them give a fuck, I mean really give a fuck, and they are forced to fix the low-hanging fruit, these botnets or other similar machinations will still exist. In fact, if we assume the regulations work and actually make things harder to exploit, we can add in the knowledge that the price for using and creating IoT botnets will go up - possibly making botnet creation a more lucrative career than IoT defense. I'm saying we have a technical conundrum that regulation can't regulate away.


I completely and totally disagree. You're basically saying that, because it will become harder in the future, we shouldn't even try.


Well I think we can agree to disagree on regulation as the method of fixing the issue. Of course we should do something. I'll admit I'm not sure what though and have no better proposal. I just believe regulation is too blunt an instrument.


I don't see how anything but regulation would do it. The companies clearly don't give a shit, and they won't, as they can't really be held accountable.


Given that most of the ddos attacks come from China, isn't it a reasonable assumption that the Chinese manufacturers are complicit in keeping the system broken?


Do most of the ddos attacks actually come from China? I thought they came from botnets of pwned PC's all over the world.


We're talking about who controls the ddos. Where the devices end up is different.


I haven't seen anything about originations on recent broad internet attacks - any good links? Having done "internet work" for about 20 years, it seems like a reasonable statement, but they certainly are getting harder to pinpoint.


I am pretty sure there was nothing really pointing to china so far. Any links?


The Chinese are not a hive mind. The people ddosing are not the people making technology.


I wish more people would talk about the economics of why netsec is such a garbage industry. It's a few honest people screaming to be heard above the din of snake-oil salesmen, but there's an economic reason that goes beyond "dumb users, incompetent programmers and CTOs who just look at speeds and feeds". The problem is there's weak correlation, or at least very difficult-to-see correlation, between the amount of effort you put in on security and the results you get.

You could have no security and just get lucky and never get hacked. Or you could have great security and just get really unlucky and have a determined hacker. Or you could be spending uselessly and still getting lucky, although you (and your vendor!) attribute your good fortune to the product. This kind of information failure makes it really hard to have a functional and efficient market, even when everyone involved is honest.

I don't have a good solution for this, which is why I hope someone smarter than me brings it up.


This sounds like an insurance problem.

You've got events that occur rarely enough to an individual (company) that it's hard to work out what the best course of action is using the limited data that an individual can gather.

If there were, say, a half dozen major security companies that everyone subscribed to one of, and they each published their statistics showing how many of their customers suffered various kinds of security breaches, this market might look much less broken.

You could potentially get to that state by having some major existing insurance companies offer "digital threat insurance", and require anyone who claims on it to be using the security services provided by one of their vetted providers.

Yes, you'd be building the sort of major corporate bureaucracy that HN hates, but you'd also be properly aligning everyone's incentives, and shifting the responsibility for the Hard Problem of determining who was selling real security software to a smaller number of better-resourced people. Perhaps there's a reason why major corporates are the norm in big, serious markets.
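To make that concrete: the underwriting arithmetic itself is trivial; the hard part is getting the inputs. A toy sketch in Python, where every number is made up purely for illustration:

    # Toy premium calculation for hypothetical "digital threat insurance".
    # All figures are invented; real actuarial work would need breach
    # statistics that nobody currently publishes.
    annual_breach_probability = 0.05      # assumed 5% chance of a breach per year
    expected_cost_per_breach = 2_000_000  # assumed cleanup, notification, lost business ($)
    loading_factor = 1.4                  # insurer overhead and margin

    expected_annual_loss = annual_breach_probability * expected_cost_per_breach
    premium = expected_annual_loss * loading_factor

    print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $100,000
    print(f"Annual premium:       ${premium:,.0f}")               # $140,000

An insurer pooling enough customers could estimate the first two numbers from its own claims data, which is exactly the statistics-publishing feedback loop described above.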

Edit: disclaimer, I'm an armchair theorist and have no skin in the game - perhaps this already exists or there's a good reason why it doesn't work


Seems to me that the core of insurance is actuarial analysis of the potential risks and costs... and I don't think anyone has a good model or good data for when and why security problems occur. For the time being (and perhaps for the rest of time) we need a security crash reporting agency, analogous to a transportation crash reporting agency, e.g. the US NTSB. Getting standard reporting on security breaches might be a good start. I think the National Vulnerability Database [0] is honorable and well-intended, but the reporting there is uneven.

[0] https://nvd.nist.gov/


This. We model death rates somewhat accurately. Health care costs somewhat less so. Macro financial systems somewhat less so. I'd put "assign risk to a moderate-sized enterprise's network of data systems" as more complex than all of those.


This solves one incentive problem, but not (in my opinion) the main one. The people responsible for security (i.e. corporate IT departments) are quite often not the same people who would suffer in the event of a breach (i.e. customers). Therefore, security professionals are mainly incentivized to appear trustworthy.

Actually being trustworthy is certainly the easiest way to do this, at least up to a point. But it tends to push people toward public, visible security measures over private, invisible ones, regardless of their relative effectiveness.

It cuts both ways, too. Even if you do everything right, if you do get hacked that trust is gone and no insurance payout can buy it back. And I'm not sure any customer is going to react well to "Yeah we lost your data, but Goldman Sachs claims it's not our fault".


The incentive problem goes even deeper than that, because customers themselves also don't have a good way to measure the costs of breaches.

If some company gets hacked, a consumer's data gets leaked, and 3 years later that info is used to steal that person's identity, how is that consumer supposed to determine the root cause?

The real problem is that the effects of bad security are very far downstream from the initial problems in space, time, and individuals affected, so proper feedback to those responsible happens very slowly or not at all.


I'd buy infosec insurance, if such a product existed. Premiums go down the more secure your site, the security work itself being a standardized checklist. Forces the snake-oil salesmen out because they'd have to pay out in the event of a breach.

Like you, I have no idea what I'm talking about, but as OP demonstrated you can do everything right and get unlucky, or do nothing right and get lucky. Sounds perfect for some kind of insurance scheme.


> Premiums go down the more secure your site

That shit will bring out the snake-oil men harder than anything. It means those peeps will do all they can to get the auditors to think you are more secure.

Instead, have a requirement of some compliance, with penalties for breaking compliance.


Honus is on the auditors to know what works and what doesn't. Auditors tell you "these are the things you must do to be compliant". Then the oilmen have to sell to people who lose money if they're wrong.


Ticking boxes helps with security, but it tends to be easy to tick the box and yet forgo much of the actual benefit. When this is cheaper, some companies will choose it, and snake-oil salesmen will help them do that.

You need some kind of incentive that derives directly from the end goal (fewer breaches), rather than some derivative (better standards compliance). Auditors certainly have their place, but we need more than them.

edit: Also, you probably meant 'onus' rather than 'honus'.


In some respects, there have been insurance-like requirements for some segments: PCI DSS. It's been a decade or so since I had to deal with it, but the requirements were for the most part no-nonsense good practices, and instituted a good base level of security. Separated DB and application servers. Specific SQL access credentials. Firewalls with pinhole access. Restricted network access for some server roles.

I'm pretty sure this is required because the card industry can't insure against risk accurately without a base level of assurance that your company isn't some fly-by-night IT hellhole. The same concept would likely apply towards any security insurance that was put forth. You would need to certify that certain steps had been taken, and certain future actions would not be taken, for it to be valid.


The problem I had with PCI DSS is that you could check the boxes and if you are never audited, you don't actually have to fix those problems. I worked for a place that ran that way for ~3 years.


Yeah, it is a sort of honor system, but I'm sure if you were hacked, and they see you aren't compliant, it won't go well for you. The fines get steep fairly quickly[1][2], considering it mentions you might be charged $50-$90 per card even if you are compliant. Although I think those are actually fees for the issuers, I can't imagine they don't have a way to pass them along to merchants.

1: http://www.focusonpci.com/site/index.php/pci-101/pci-noncomp...

2: https://www.pcicomplianceguide.org/pci-faqs-2/#15


This is in theory a good track to start with. However there is one small hole in the theory. Self reporting by security companies on breaches is very tough to impose. We have seen what self regulation/reporting did for the banking industry. If security companies sidestep accurate self reporting on breaches they have no incentive (in fact they might be motivated to let things slide for economic reasons) to create rock solid security solutions because they know there is an insurance company who will absorb the hit.

However if you had an independent entity that rated the security companies' products that might work. Or the insurance company has a division that rates the security products and provided different rates based on which product a company decides to use.


I think a more feasible tactic would be to reverse the responsibility so that vendors that produce easily broken products end up liable for damages unless they can show that they have done due diligence when it comes to securing the devices that they create. One way to get away from liability would then be to be vetted by a reputable security company.


In theory this sounds good, but I'm afraid that in practice a market of a few giant "reputable" security companies would emerge very quickly, and it would include charlatans rather than people who really know what they are doing. In the end, it would look like the rating agencies that were handing out AAA ratings left and right in 2008.

However, maybe a system like that would have improved at least the prevalence of the most glaring security holes.


Strictly from an insurance business perspective, there are two major problems with this:

* Netsec events are black swans: it's very, very hard to model how often a security breach will occur. One could checklist all the ways we currently know sites get hacked, and would still have to pay out, _because hacking exploits things we don't already know_.

* When a hack occurs, it can happen at scale. Unlike eg life insurance, where you have a single payout for hard-to-predict events, the better the hack, the higher the potential for damage, and so the higher the total payout.

These two together mean an IT-security-insurance company might do well for a few years, then file for bankruptcy at the first event that hits it, due to inability to pay.


If the incidents were isolated, then I could see this working. In the case of the recent DNS blackout, that took out everyone. Wouldn't that bankrupt the insurance co?


Good insurance companies do not go bankrupt because they measure risk correctly and do not take too much of it on themselves. Essentially it is their job to distribute the risk such that the company remains profitable. A global reaching event would simply be uninsurable.


> It's a few honest people screaming to be heard above the din of snake-oil salesmen...

I think it more likely that it's a few honest, knowledgeable people screaming to be heard above many honest, well-intentioned, but misguided people screaming to be heard above the din of snake oil salesmen.

It's much easier to believe the wrong thing and sell it yourself than it is to be a confidence man. And on the same note, it's much easier to crucify a malicious hacker than it is to ruin the life of a guy who supports his wife and kids who just happened to make the economic call that running everything as root was okay.

To your point on economics, I think the old joke about two guys being chased by a bear applies to the economic mindset. One company is seen tying its shoes (for the sake of the metaphor let's add to the joke that he's otherwise naked), and the other company says slyly "I don't think you're going to outrun that bear." "We don't have to," replies the naked company, "we just have to outrun you."

It's going to be hard to justify the extra 80% effort on the remaining 20% when management and sales only want to go 21% just so they can outlast their competitors.


> And on the same note, it's much easier to crucify a malicious hacker than it is to ruin the life of a guy who supports his wife and kids who just happened to make the economic call that running everything as root was okay.

Makes me wonder who makes the snake oil for the salesmen. I'm trying to say this is a plausible caricature that distracts us from a more complex (and way more interesting) reality.


Well in many cases, the product itself is not snake oil, it's just not as good as it's being sold. I work at a vendor and the product I work with has some unhappy customers who were over-sold and promised the moon when we're actually just a telescope. Being able to see the moon is great, but when it's priced like a moon landing and sold like 40 acres of prime lunar real estate, your telescope looks a little lackluster. And when your boss just spent $3m on it, they don't want to hear "now we need to buy all these add-ons, plus we still need a couple more tools to shore up the defenses."

The developers (not me) are just writing the best product they can. The sales guys are trying to get sales. The pricing guys are trying to make it profitable. And no one is talking to anyone else, so the developers don't know what features sales is claiming, sales doesn't know the developers' roadmap, and pricing... I don't even know who sets the price.


That's a great point regarding the economics of why IoT manufacturers don't invest all that much in security. Here's another one. It's an externality. Botnet attacks don't harm the IoT manufacturers. They don't even harm the IoT products or their users. They harm completely innocent bystanders, like DNS providers and GitHub.

What possible incentive do IoT manufacturers have to invest money on initiatives that bring no benefits to themselves, or their customers? What possible incentive do users have to follow "proper security protocols", when they can just do something simple/convenient, and if something goes wrong, some random internet website pays the price.

As idiotic as this sounds, the only non-regulatory solution I can think of is for all potential BotNet victims to collectively "bribe" the IoT companies into following proper security protocols.


Suppose ISPs turned off your connection, until the problem is cleaned up, whenever something on your network is found to be part of an active botnet attack. Now the 'victim' (I'd not really say victim, because they chose the cheapest webcam online) has to figure out how to clean up their network (or hire somebody to do it).

If there are actual consequences for people's computers and IoT devices being 'hacked', they will start to look for ways to avoid the inconvenience in the future. They might start using better passwords, not installing every toolbar under the sun, demand better products (and accountability) from the companies they purchase items from.

tl;dr;

The only way to get some of these problems fixed is to introduce some level of pain, somewhere in the system. I say penalize the consumer who decided to buy the cheapest thing online - turn off their Internet connection when an infected device is found.


This seems the most practical, though I imagine the consumer is likely going to change their ISP until they find one that won't. So ISPs have no incentive to do this unless they're regulated.

If customers or ISPs had tools to find out which devices were at fault then it may be more feasible.


It's going to take networks with infected devices being cut away from the Internet like infested tree limbs before things get better.


Externalities are usually solved through pigouvian taxes.

But since we can't really know in advance what product will or won't be secure (or at least it's incredibly difficult in the constantly goalpost-moving security industry), the only way I can see this happening is if, say, Dyn could sue the IoT manufacturers for negligence or something of the like.


Honestly, the general public may benefit from the first few attacks targeting IoT owners.

If your toaster attacks Dyn, that's an indirect failure. But if your lights decide to shut off permanently because some kid thought it would be funny, you're going to throw them away and buy some dumb lightbulbs.

Short of regulation or bribery, I can only hope that we'll see progress after some IoT products are directly attacked instead of repurposed into botnets.


Why would anyone in their right mind attack IoT products directly (ie. disabling/destroying them), when using them as a large botnet is far, far more lucrative?

Well... Perhaps once IoT webcams are used for blackmail - but even that is a much more difficult task than "really big botnet attack really big sites" so the incentive isn't there


The same reason we got ILOVEYOU?

I know profit motivated attacks have eclipsed entertainment motivated ones, but that's partly been about the work required to get a 'strong' exploit going. Mirai used an easier access vector than Morris to get full ownership of target devices, so it wouldn't be hard for one person to copy and abuse.

I don't think this is going to become the norm - messing up a refrigerator isn't a lucrative goal - but I expect that it'll happen occasionally if systems stay this weak.


The best we can get out of direct attacks on IoT are new "security products that protect your IoT devices" in a reactive manner. Windows antivirus all over again.


The safety equipment on vehicles is also an externality but it's mandated because when a car is unsafe it affects other people who aren't the driver. I don't see how IoT is any different. If you don't want to secure your product, then don't build your product and save the rest of us from dealing with the fallout of your laziness.


The principle is no different, but the practice might differ significantly. With vehicle safety there's a reasonably common, agreed-upon set of vehicle conditions which can be checked with an inspection; do your brake lights work, tires aren't bald, etc. There is also a framework of regulation (at the product level vehicles have to pass a set of design-level safety tests), as well as policing on actual roads to execute vehicle maintenance checks as needed.

I'm not sure we know what a similar framework would look like with consumer devices. I can see the utility, but I would also worry about regulatory overreach, and giving big brother another point of control to latch onto.


I feel that in the age of technological achievements we currently live in, figuring out a way where we can require a certain level of security on these devices without running afoul of "big brother" syndrome seems well within the realm of possibilities.

It's also worth noting had the IoT companies simply done the baseline level of security of making the user change the password that such regulation wouldn't even be needed, so forgive me if I'm unable to care about their stake in things.


My point is that it's not a technical problem. Some IoT devices have failed their "vehicle safety check". Now who has the authority to take them off the road? Who has the authority to ensure that their poor design isn't even allowed on the road? Do we actually want anyone to hold that authority?

Is there some other way to achieve the same results without formal regulations - to review bad designs and keep them from being sold, and taking bad security designs/implementations off the internet?


Couldn't all those affected by the IoT botnet sue the company fabricating the IoT products?


>You could have no security and just get lucky and never get hacked.

It's been a decade and more since that was even remotely true (if it ever was). Back in 2008 the average survival time of an unpatched internet facing XP box was around 4 minutes [0]. Pwning an unpatched XP box was (very marginally) harder than pwning a system with no security at all. The Morris worm (Edit: which is heavily mentioned in TFA, my bad) [1] got loose in 1998. There's some interesting stuff regarding whether such risks are comparable with things we use insurance to hedge at [2].

[0] https://isc.sans.edu/diary/Survival+Time+on+the+Internet/472...

[1] https://en.m.wikipedia.org/wiki/Morris_worm

[2] http://limn.it/the-morris-worm/


>>Back in 2008 the average survival time of an unpatched internet facing XP box was around 4 minutes [0].

Yep. And that's just the average. It was much, much lower for high-value targets, such as universities. The first thing we did at my Network Security class back in 2006 was to hook up an unpatched XP machine to the Internet. It got pwned in about 30 seconds.


What I don't get about this is how the new system is discovered in the first place, assuming the attacker is not already on the network. Sure, attackers gain access fast, but why would you let the traffic onto the local net to discover the machine (except for examples)? I can see a box stuck on a home connection getting pwned quickly, but surely a uni network would be blocking RDP traffic, or external pings, or whatever it was that was being used to find and pwn XP computers so quickly?

Wouldn't multiple attackers have to be effectively flooding the network with pings or service/port access attempts to find a new computer so fast?


At that time? At a university? No. You typically got a public IP address via DHCP and there was no firewall at all. Even today that's still pretty much the case, though a new device is probably assigned a non-routable address until a terms of use agreement is clicked.


They were talking about an XP box facing the internet, so without network protections like blocking pings.


Probably just a typing error in your otherwise fine post, but the Morris worm was 1988.


Indeed, good catch.


The solution isn't to have random security consultants come in and kludge up your process and generate useless reports of irrelevant statistics. The solution is to have a red team on staff permanently, to offer bug bounties based on actual access, to install on-server monitoring for outdated packages (like Appcanary, the authors of this piece), to monitor outbound packets for suspicious behaviour (this is currently the hardest part, imo, since, other than detecting major viruses, it's largely domain and network specific), and to have an automated "take the servers off the internet" button for serious 0-days and leaked credentials. Also, always use HTTPS / HSTS lists and two factor authentication.

You'll still get hacked, but you'll be far better off.
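As a sketch of the "monitor for outdated packages" piece: even a cron-able script like the one below is a start. It only covers pip-managed Python packages and is no substitute for a real service, but it shows how little is needed to get a signal.

    # Minimal sketch: report outdated Python packages on a host.
    # Only covers pip-managed packages; a real monitor would also track
    # OS packages and cross-reference security advisories.
    import json
    import subprocess

    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    outdated = json.loads(result.stdout)

    for pkg in outdated:
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")

    if outdated:
        raise SystemExit(1)  # non-zero exit so a cron wrapper can alert on it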


"Offer bug bounties based on actual access" why?


Maybe misworded, but actual access is a clear line on the spectrum that runs from reporting a potential DDoS attack on an endpoint to dumping all your users' credit cards and passwords. Access to a server isn't necessarily access to a DB, but it's usually serious enough to warrant cash, no matter who you are.


I'm asking: why would you make a bounty conditional on "access" at all? What's the win? A bug is a bug. If it has the potential for access, it's worth the bounty. All a demonstrated access requirement does is encourage strangers to violate the privacy of your customers. It seems like an incredibly reckless idea.


Bad actors are already trying to get that data, no?


Yes, but what does that have to do with the structure of a bounty?


Bounty programs are very noisy. I don't even have a bug bounty program, and have several messages from confused people in my inbox asking about one. The "bugs" they propose are not bugs in my programs---for example, one reports that data can be uploaded to a collaboration system, downloaded, and then executed in a user-provided interpreter---and that this interpreter may surprise the user with its behavior.

Any better ideas of how to structure a bounty to get bugs and not confused users?


Saying something is theoretically possible with automated vulnerability scanners (which have incredibly high type 1 error rates, thanks to out-of-date headers from lazy programmers and misconfigured webservers) and showing that it's actually possible are completely different things. A whitehat proving he can get user access, or MITMing data they themselves created as a proof of concept, is completely benign. I've yet to hear of this being the source of a leak of customer data.
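For anyone curious what those scanner findings amount to, a lot of them are banner checks along the lines of this sketch (the target URL and the "known bad" version strings are invented). That's exactly why they're noisy: distros backport fixes without bumping the advertised version, and admins spoof or strip the header entirely.

    # Sketch of the banner-grabbing style of check automated scanners lean on.
    # The target URL and the "known bad" banners below are made up.
    import requests

    KNOWN_BAD_SERVER_BANNERS = ["Apache/2.2.15", "nginx/1.4.6"]

    response = requests.get("https://example.com", timeout=10)
    server_banner = response.headers.get("Server", "")

    if any(bad in server_banner for bad in KNOWN_BAD_SERVER_BANNERS):
        # Evidence of an *advertised* old version only, not of a vulnerability:
        # backported patches and spoofed banners both defeat this check.
        print(f"Possibly vulnerable server banner: {server_banner}")
    else:
        print(f"No match on banner: {server_banner!r}")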


These sorts of information asymmetry problems (in this case, you can't know the quality of the service you're buying until it's far too late) are a textbook case of market failure [1] [2]. You can sometimes improve the outcome with some smart regulation.

But in the security industry there are multiple levels of this problem:

- The end user doesn't generally know how hardened the product he buys is

- The manufacturer is rarely certain of the quality of the security auditing/services he is buying

- How much to invest in securing a product is not an easy decision.

I'm of course simplifying a lot here, but you asked for the economics of those problems, and hopefully this was interesting to some.

[1] http://www.sfu.ca/~allen/leffler2.pdf [2] https://www.iei.liu.se/nek/730g83/artiklar/1.328833/AkerlofM...


First off, we need to decentralize centralized services. Second, those decentralized services need to have different types of business models applied to them, long term, for achieving sustainability. Business models are what makes a centralized service centralized, secret and brittle. Change the game theory around infrastructure and you change the speed at which that infrastructure can be hacked. All infrastructure should do is try to be self-sustaining. Making money off infrastructure is inherently greedy, given it doesn't scale linearly. This is the primary problem today, but I've found most people reject it given their intent is to get "rich" off whatever it is they do.


And of course, even if you get hacked, no consumer buys a camera asking "how good is the anti-hack feature on this?", and how would you even quantify it? Hell, people still install viruses because the popup told them to.

That's the problem. Security is either the default, or ignored.


Fire the current crop of security cert vendors and start designing them practically. I may be in the minority, but every {insert security title here} I've ever encountered seems to have an alphabet soup after their name and no hard skills.

Makes me feel bad for the competent security researchers having to share their field with glorified PowerPoint designers.


Sounds like a solution would be to rule out the luck factor and institute dedicated hackers, who aren't ultimately getting paid by the manufacturers.

The first thing that comes to mind is a governmental agency, but then that opens the government/industry watchdog collusion ball of worms.


It's even more slimy in the enterprise space.

I'd like to think that the majority of people involved with the industry in some way have good intentions, but once you start involving, sales, marketing, and certain executives...


Basically some developers need to go to jail for gross negligence.


It wouldn't help. We need to rewrite our entire infrastructure with security as a primary goal. Then we need to find a way to get people to buy it (assuming of course we can even get anyone to pay for it in the first place). Buy one router for 50 bucks or the secure router for 250 ... I don't think we'll get far.

At the end of the day a few people who can't afford super lawyers go to jail, and meanwhile ubercorp producing cheap routers with bad security continue with business as usual.


I think it's deeper than infrastructure.

It doesn't cost more to not write a SQL injection vulnerability. It just takes a programmer who has a basic understanding of internet 101, and who while writing any line of code involving user input will ask himself "how will these assholes use this to fuck with my system". As long as one line of code can take down your whole infrastructure, and unless all devs of anything serious have a minimum competency level, we are doomed to continue the current path, with a major data leak pretty much every week.

So it will probably take a combination of new, safer programming languages, and minimum proficiency levels, enforced with regulations. I don't like it, but I don't know a better solution, and the status quo is unacceptable.


> It doesn't cost more to not write a SQL injection vulnerability.

It does.

> It just takes a programmer who has a basic understanding of internet 101

Most likely this programmer will cost more than a $10/hr contractor from a third-world country that worked on the software before.


On SQL injection: it doesn't cost more, it just requires using the correct syntax. Pretty much every database framework offers a syntax to avoid SQL injection. It takes a negligible amount of time to type compared to concatenating a string.
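To make the point concrete, here is roughly what the difference looks like, sketched with Python's built-in sqlite3 module (the table, columns, and data are made up):

    # Vulnerable pattern vs. the correct one, using Python's built-in sqlite3.
    # Table, columns, and data are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Vulnerable: string concatenation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("concatenated:  ", rows)  # returns every row despite the bogus name

    # Safe: a parameterized query treats the input purely as data.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterized: ", rows)  # returns nothing

The safe version is no more work to type, which is the point.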

On third-world-country programmers: first, I wish only third-world-country programmers wrote SQL injection vulnerabilities and other easy-to-avoid mistakes. Then, if you need to audit your accounts or get some engineers to review the structure of your bridge, you will insist on a good reputation / demonstrable competency. If vulnerabilities could land you in jail, you start looking at quotes from software vendors in a different way.


The point is that it still costs more, on average, to hire the person who understands how to avoid SQL injection by using the right tools/syntax. The person who only understands string concatenation and basic SQL will always be available to hire.


And again you are assuming that these vulnerabilities are introduced by cheap, untrained or foreign programmers. Massive Silicon Valley firms who pay top market rates make the same mistakes. I am sure there is a correlation between pay level and understanding of security, but we are very far from a position where, if you hire a team of developers, a business can have any confidence that they won't do something dumb like MD5 a password, concatenate a string into a SQL query, rely on a user-supplied array length in an unmanaged language, not protect themselves against CSRF (I suspect 50% of professional web devs still don't even know what it is!), etc.


> I am sure there is a correlation between pay level and understanding of security

Good, then maybe you can see a path forward to stop arguing the opposite?

Yes, it is possible to pay a lot for a little. Developed country, less developed country, wherever. It remains, nevertheless, relatively less expensive to hire an inexperienced coder than it is to hire an experienced one, who has a greater likelihood of being security-conscious. But no formal mechanism prevents the inexperienced coder from finding work cranking out unreviewed programs.


It's not the failure of the security industry, it's the success of market forces over the security industry.

Normal folk want to consume new gadgets because that's the culture we have. So it's a race to put new gadgets with new features in front of people. Sure, as a customer I could insist on my manufacturer having taken security seriously and having their products thoroughly tested and reviewed and hardened and patchable and all that good stuff, but then I'm going to have to pay more money for my gadget than my buddy here who just wants to be able to flush his toilet from his smartphone.

There is literally no consequence for manufacturers of poor quality products where the impact isn't directly impacting their own consumers, and so there's no market force that is going to address this.

When viewed this way, it's a classic case of where we need government/legislature involvement.


The major flaw with that proposal is that the government has shown itself to be exceptionally incompetent (just like everyone else) when it comes to security.

For example, the NSA's security -- not some underfunded, minor agency, but the NSA itself, the world's leading cybersecurity agency -- has had its security breached on a large scale basis, multiple times.

And that is just the beginning of the very long list. It's not unique to the US, either; other governments are the same or even worse in terms of security.

Given that government cannot even create working policies to secure systems that it directly controls, even in agencies with practically unlimited budgets and the strongest possible security mandate, how on earth can it be expected to create policy for anyone else, to supervise systems it does not even control, for commercial users with tiny or no budgets?

Issuing nice-sounding legal regulations that say 'go forth and be secure' will accomplish nothing.


Fair, but we are not expecting manufacturers to make bullet-proof devices. We are expecting them to make devices that do not let you achieve root access over the internet using an unchanged username and password combination. That's a very easy and specific thing to regulate.


But they pretty much do have to be bulletproof. Every single device connected to the Internet now effectively has a fully automated machine gun firing at it all the time. One gap in the armor is all it takes.


Well, the NSA let a low level contractor (Snowden) walk off with a thumb drive containing half their archive. That's not far removed from not changing the default root password.


> Sure, as a customer I could insist on my manufacturer having taken security seriously and having their products thoroughly tested and reviewed and hardened and patchable and all that good stuff

How exactly would you insist on that? Ask them? Aren't they going to tell you, "Yes, it's very secure, no worries"?


> How exactly would you insist on that?

How about "show me three different independent security audits by researchers or firms I trust who didn't find major issues in your product"? Sure, there needs to be a sizable group of people demanding that (and be willing to have it be the difference between a $500 and a $5K smart TV), but it is possible. For corporate IoT in certain settings, it might even be plausible.


You should be on top. Just as we have FCC approvals before you connect a device to 3G, landlines or to the power grid, we'll have to have approvals for all devices connected to the internet. And the top test of the list is a penetration test by a preapproved firm.

Note that open-sourcing the firmwares would go a long way toward building a better world: less spying, more upgradeability, more confidence in the tools, easier pentests and a legacy of new code for future generations.


You're going to need either hard regulation, or liability for such breaches to change behaviour.

Mostly because, as Mirai shows, the costs are external to the consumer of the broken device.


This is not a failure of the security industry - the security industry is targeted at the enterprise, largely not at the hosts of the vulnerable IoT devices involved.

Don't get me wrong, there are tons of ways in which the security industry fails (the biggest IMHO is buying/selling things that only get implemented in a half-@$$ed manner or not at all), but this is like blaming the Airline industry for a train wreck.

Perhaps the real problem is that for home users there really is no security industry to speak of? A handful of features on WiFi APs that get turned off if they break your XBOX games, and maybe some desktop AV. That's pretty much it - and I'm not sure we can ever expect much more..


I think you are making a great point here: different sectors have different security needs.

At the consumer level, I think users really need help from their ISPs. My provider (Cox.net) already emails me if they detect outbound activity matching virus activity. Though I can't say it doesn't make me at least a little nervous about their inspection of my traffic habits, this ISP level intervention is at least something concrete that could happen in the near term to blunt these types of attacks.

For enterprises, I see the big failing as mostly the focus on external threats (APTs + scary hackers) with no consideration for the much greater danger of internal threats [1]. This is the "dumb" stuff, like someone quitting to move to a competitor and, the day before they give notice, copying every file off the file server to a thumb drive "just in case they need it". Or even the new scary forms of user-assisted ransomware.

1 - The guy that runs HaveIBeenPwned.com has a free course on this I've been sending around https://info.varonis.com/the-enemy-within that explains it in terms an executive might understand.


Blaming the security industry is wrong, but so is blaming the users. If a faulty lamp catches fire, then it is the one who made the lamp that is at fault, not the user. Make the vendors responsible for the damage that their products cause!


Completely incorrect claim: the IoT industry doesn't spend a penny on security, and is therefore vulnerable to these types of attacks.

If anything, this is proof that the security industry does work: these attacks are happening on devices where there is no security budget, not on servers with large investments in security.


How many pennies would've been needed to insert a simple page forcing you to change user/password combo and to choose a reasonably strong password after first boot ?

In the case of Mirai it's not even a cost issue, just lacking good practices.


"How many pennies would've been needed to insert a simple page forcing you to change user/password combo and to choose a reasonably strong password after first boot ?"

These are written by outsourced developers who don't know anything about security. They wouldn't even think to develop something as simple as that.

You are obviously unaware of how this works: companies would have to hire consultants/penetration testers to assess the product first. Then they would spend even more money making the changes suggested by the consultants. So it would cost a lot of pennies, actually.


Ok, I am aware of how it works, but I'm not talking pentests or hardening. I'm talking simple, cheap design choices in this case, that could've eliminated the whole Mirai debauchery.

In your app you already have a setup wizard, right? Add one more page to the end: "Hey, we're almost done! We just need to make sure your device is secure. Please choose a username and (strong) password." Edit: Because if you have a login, you already have the components in place; you are not developing a new feature.
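The check behind that wizard page is on the order of the sketch below (the factory default pairs and the strength rules are placeholders, not anyone's actual list):

    # Sketch of the first-boot credential check behind the extra wizard page.
    # The factory defaults and strength rules are placeholders.
    FACTORY_DEFAULTS = {("admin", "admin"), ("root", "12345"), ("admin", "password")}

    def accept_new_credentials(username: str, password: str) -> bool:
        """Return True only if these credentials may replace the factory defaults."""
        if (username, password) in FACTORY_DEFAULTS:
            return False                  # still a known default pair
        if len(password) < 10:
            return False                  # too short to bother with
        if password.lower() in {username.lower(), "password", "letmein"}:
            return False                  # trivially guessable
        return True

    # The wizard refuses to finish setup (and the device keeps its services
    # disabled) until this returns True.
    assert not accept_new_credentials("admin", "admin")
    assert accept_new_credentials("camera-owner", "correct horse battery")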

This one simple design choice would have cost very little, both in terms of development time and increased support costs, because support is a cost center that scales with your user base and your knowledge base. Obviously not pennies, but still small costs.

There is the classic point of diminishing returns on security investment; the problem is that for most IoT products we are significantly to the left of it, towards zero investment, and at that point small investments and a few smart design choices would yield significant returns in security.

And with developers, that's exactly what I don't get. How has it not become internalized that allowing users to run the default user/pass combo is a very poor idea? I'm not asking for much, I don't expect them to know a lot about security, but not even adhering to some basic security good practices is killing me.


" Ok, I am aware of how it works, but I'm not talking pentests or hardening. I'm talking simple, cheap design choices in this case, that could've eliminated the whole Mirai debauchery."

Then you are not talking about the security industry or its failure to work, are you? It's a failure in the development industry to have basic security awareness.

If you don't engage the security industry for pentests or consulting, you can't then go and blame them when you get hacked.


>It's a failure in the development industry to have basic security awareness. //

Is that really it? Surely even a high-school-level developer will realise that a device connected to the wild web with a default user:pass will be hacked easily.

I'd have thought the problem is not wanting to support customer calls saying "we changed the password and now can't access our device". So default user:pass and no prompt to change it (and a backdoor just in case) means lower support costs.


No, that dev will say "It will be behind a NAT, so it's fine to have a default user:pass"


> I'm talking simple, cheap design choices in this case, that could've eliminated the whole Mirai debauchery.

This is utterly ignorant of the facts: Mirai took advantage of weak passwords to spread, but was not dependent upon them.


Anything that adds any interaction with the user will cost support time, thus dollars. It's easier for these companies to hard-code a password in and have it "just work" with their mobile app or web interface than to actually do security correctly.

Until there are regulations in place to make them do this, they will not care.


    > the IoT industry doesn't spend a penny on security,
Citation sorely needed [and not to be found].


So, "companies think that security is unnecessary" is a sign that the security industry is working?


Nope, it's a sign of a strong market for lemons in IT products.

There's no adequate way for consumers to differentiate between well secured products and badly secured products (every company will tell you "security is their top priority" if you ask them).


TBF, the comparison is against well-defended servers, of which there are a great many examples.


My toaster has to be certified that it meets certain minimum safety standards. It really seems that IoT and safety critical software/firmware should be required to pass a similar (bare minimum) certification.


A toaster is required to pass safety standards because there is a strong economic incentive (UL requirements) to do so. Without UL, it can't get on the shelf at any store in the US.

There is no such thing as UL security requirements for IoT devices.

Time for such regulation?

But "internet + regulation" normally raise a lot of objections internally from the IT industry.

If someone (MSFT) proposes that secure boot be required for all IoT devices, the first to oppose it will likely be the EFF. :-)


UL isn't a regulatory body. UL testing is voluntary. You may know this, but perhaps many others don't.

I think a UL for internet connected devices is a fantastic idea. Just need to figure out how to get companies to volunteer for such testing. The way it works for UL is that they provide some insulation from litigation. Perhaps if users could sue IoT manufacturers for inadequate security testing, something similar would materialize for that industry as well?


Retailers (Walmart) require UL for insurance purposes.

UL was created by insurance companies to gauge the safety of products.

In the end, the real cause is that "the force of lawyers" is strong for product safety in the US. :-)

"The force of the lawyers for IOT" is still weak. :-)

The force of the Jedi (IT, hackers, SW devs, EFF, OSF) is still strong, for now....

The Empire will win when and if enough Jedi (SW devs) turn to the dark side and team up with the lawyers to start suing IoT startups, devices, and creators.

No more IoT, Raspberry Pi, OpenWRT.... only Intel/Qualcomm/MSFT-licensed, UEFI-controlled SecureBoot (Windows CE) devices and locked-down Chromebooks from Google will be allowed.

Feel the power of Dark Side!!!..... :-)


> No more IoT, Raspberry Pi, OpenWRT.... only Intel/Qualcomm/MSFT-licensed, UEFI-controlled SecureBoot (Windows CE) devices and locked-down Chromebooks from Google will be allowed.

Silly FUD like this does nothing to help.


idk, seeing a comparison with UL gives me hope that consumer device security doesn't actually have to mean totalitarian dystopia.

Nothing is stopping anyone from building their own non-UL electronic devices, and even distributing them to tinkerers and early adopters. This is much preferable to some steep liability/mandatory insurance regime like automobiles where you've got to Soviet-style register your car and even yourself!

Sensationalists push a panicked narrative about insecure devices, but any disruption of third parties is entirely due to scale. Simply making it so the enormous group of low-effort consumers won't end up with negligently insecure devices would basically erase the problem.


I am not sure, but it might be that it is difficult to gain 'standing' in a civil suit.

It is really hard to show you were directly hurt, and even harder to show whom by.


> There is no such thing as UL security requirements for IoT devices.

UL 2900-1.


404 for every link to the standards. Awesome!



Yeah, I picked that up from reading the press release[1] that OP had originally included in the comment. What I was surprised to discover was the 404 error page when I clicked the individual links for the different standards. My expectation is that I would have been directed to a site to purchase them.

[1]: http://ulstandards.ul.com/downloads/news-announcing-ul-2900-...


Holy cow. That's awesome! Not sure if it's a good spec, but at least UL is trying to take this on. This is probably the best approach.


Restrictions are easy to push too far, but a basic safety checklist that includes things like factory randomised logins shouldn't be too heavy handed.


> If someone (MSFT) proposes secure boot are required for all IOT devices, the first one to oppose it likely be EFF. :-)

Rightfully so. Anyone who argues that signing without a user override is necessary to secure devices against external attackers is outright lying.


But most of those safety standards are meant to protect the device and its user during normal use, not against malicious attempts to destroy it or pervert its usage.


If my toaster exposed its wires and buttons to the outside world, there would certainly be added measures to prevent malicious usage.


I work in the infosec field and I think it is unfair to blame the whole industry. I think the whole technology field is to blame here (although I really don't like to play the blame game). By the way, I have been around the security industry for around 10 years, and the exact same conversation has been going on the whole time. 10 years ago it was the Web, then around 7 years ago it was Mobile, now it is IoT; several years from now we will be having the same conversation about a different technology unless we do something about the root causes.

The root causes are the following:

1- Security is more often than not an afterthought. When you are trying to go to market under tight deadlines, burning the midnight oil, nobody has the time, energy, or money to think about security.

2- The lack of security education among most of the stakeholders (upper management, product managers, engineers, etc.) does not help and keeps security a taboo. In most organizations, nobody has the job title of making the software secure, so it falls into nobody's lap.

3- While I have all the respect for the profession of honest sales, some salesmen ruined it for all of us, feasting on the lack of education mentioned above and trying to sell tools/services as the silver bullet for the security problem - an idea that is very well received by someone who does not understand the problem and is really looking for a silver bullet.

4- At the end of the day, the real issue is that security is a cost center; there is no ROI for the business in doing security other than avoiding problems that "could" happen in the future.

That being said, there are three classes of clients I have seen doing security:

1- Heavy losses: for banks, for example, the risk of losing money is quite real and tangible. Besides, they are (at least in the U.S.) under heavy regulation to do so. But their real motivation is risk mitigation.

2- Regulations (the worst reason to do security): such as the PCI industry; they have to do security checks to avoid fines. This category usually tries to do the minimum to get by.

3- Proactiveness: hats off to this category, as they don't really have to do it other than believing that this is something that must be done.

Solutions:

1- More education.

2- More education.

3- More education.

4- Implement more security controls natively into frameworks (output encoding, entity frameworks, etc.) and browsers (such as CSP, etc.).

5- More fines for companies that don't take even the minimum steps to ensure data confidentiality and integrity.


I'm fond of the idea of fines. Having seen what audits mean, I trust them about as far as I can throw a full-size African elephant bull.


Fines that fund a Red Team, that goes looking for exploits.


Apparently the author of Mirai leaked the source code and even provided comments and build instructions. I found this a bit baffling.

He seems immature and vain, because his motive is apparently to taunt someone with how smart he is, but the code is indeed pretty awesome and educational. It's a little sad that commercial software is so ugly and that black hat software is elegant (though I guess it has to be, because it's under rather severe "environmental pressures").

https://github.com/jgamblin/Mirai-Source-Code/blob/master/Fo...

At first, I was also kinda shocked that it had this simplistic list of hard-coded user names and passwords (mentioned in the article). But I guess I've worked in the software industry long enough that it makes sense. Computers are so ubiquitous and on reflection it's not a surprise that you can pull down hundreds of thousands of machines with this technique!!!

Can anyone shed light on the economics of releasing source code? I would think this would make your botnet much less valuable. Apparently someone found a vulnerability in his HTTP parser, which I don't think would have happened without the source code.

So did the author shoot himself in the foot for reasons of pride, or is there something else going on?

https://github.com/jgamblin/Mirai-Source-Code/blob/master/mi...

    // Set up passwords
    add_auth_entry("\x50\x4D\x4D\x56", "\x5A\x41\x11\x17\x13\x13", 10);                     // root     xc3511
    add_auth_entry("\x50\x4D\x4D\x56", "\x54\x4B\x58\x5A\x54", 9);                          // root     vizxv
  ...
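
(For what it's worth, the obfuscation in that table is nothing fancier than XORing every byte with 0x22 - you can check it against the comments above. A minimal, self-contained sketch of decoding those two entries:)

    #include <stdio.h>

    /* Decode Mirai-style obfuscated credentials: each byte is XORed with 0x22. */
    static void deobf(const char *src, char *dst)
    {
        while (*src)
            *dst++ = *src++ ^ 0x22;
        *dst = '\0';
    }

    int main(void)
    {
        char user[32], pass[32];

        deobf("\x50\x4D\x4D\x56", user);          /* -> "root"   */
        deobf("\x5A\x41\x11\x17\x13\x13", pass);  /* -> "xc3511" */
        printf("%s / %s\n", user, pass);
        return 0;
    }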


The best theory I've heard about the author's motivations is that after knocking Krebs offline with a world-record DDoS, they wanted to muddy the waters a little.

By releasing the source code and letting everyone else fight for control of the botnet, it would be much harder for anyone to trace the original attack back to them.


The security industry has been "broken" for as long as there has been a security industry. When I left Network Associates in 1999 to start a chat company, leaving the security industry to do something non-security was already a cliche.

It's true, the 1U rackmount netsec industry does virtually nothing to prevent consumer electronics vendors from shipping terribly insecure code. I don't like the netsec industry either. But: so what?

The reality is, very few companies are buying 1U rackmount snake oil (or Nth generation antivirus products like endpoint protection tools) to stop things like Mirai. We're not even talking about the same budget. The "security industry" is not in fact chartered with stopping things like Mirai. So Mirai is a weird complaint to level at it.


> When I was your age I was leaving the security industry before it was cool

I'm not interested in leaving the security industry. I'd rather work to change it. I'm a millennial, I guess :)

> The "security industry" is not in fact chartered with stopping things like Mirai. So Mirai is a weird complaint to level at it.

No one "chartered" the industry to do anything.

You're right, a metal box or an AV isn't going to prevent your IoT product from shipping with default creds on telnet, but don't you think the industry is complicit in drowning out the good advice to not do that in a sea of noise and then blaming the victims?


No? I am having a hard time seeing any intersection between either of the major two branches of the security industry (PC security and network security) and IoT botnets.

There's a major failure happening, but it's not attributable to the security industry. It's a failure of the computing industry as a whole.


If there will always be crap out there, everyone's going to need an immune system. Long term, we can wish for a fancy adaptive one. In the short term, we probably just get an analogue to inflammation---something like the DDoS network scrubbing industry.

That doesn't try to stop IoT botnets; it just extracts rent for keeping some people safe from them.


The "security industry" was never significantly involved in improving product security and software quality. They have roots in profiting from the deplorable state of PC security. Centralised firewalls, "intranets", and anti-virus products are not sustainable solutions to any of these problems - they're just so ingrained in the mindset of IT professionals that they self-perpetuate.


Endpoint security traces back to antivirus and PC security. Firewalls do not --- firewalls trace back to the Unix culture.


[citation needed]


Not really, no. But: the first commercial firewall was Ranum's DEC SEAL. Ranum is an old-school Unix programmer. The first book on firewalls is "Firewalls and Internet Security", by Bellovin and Cheswick, who previously created one of the first firewalls (predating DEC SEAL) and possibly coined the name. Both of them are, as you'd expect, Unix nerds.

People were building firewalls in 1988, several years before there was even Trumpet Winsock, let alone a reason for PCs to need to filter Internet traffic.


I get what you're going for, but you're actually wrong. Both firewalls, centralised or not (configured to prevent all outbound access apart from approved nodes), and intranets (putting IoT devices in isolation) would actually prevent Mirai from both spreading and attacking anyone. But most people are not implementing either, because a home router is all you need...


To prevent Mirai you have to convince the least competent security camera owners running their corner supermarket or whatever to spend the money and effort. The people who are self-selected technologically illiterate cheapskates because they bought the cheapest Chinese IP security camera.

Yeah, you can always make this argument. "If only there had been sufficiently advanced and non-buggy firewall rules in place in firewalls optimally placed in the topology, looked after by competent networking staff"

It just doesn't work in most environments because you don't have firewall experts on staff and you have no incentives to spend money and effort on firewalls.

(And even for professional firewall wranglers it's hard in normal, semi-chaotic IT organisations: you don't intrinsically get any assurance that things are set up tightly, or that they remain so after network configuration changes; you keep breaking working systems; you get feedback from angry users you've inconvenienced but never from successfully defending against attacks; etc. And at the end of the day, for the same money you could have gotten a better security improvement by spending the man-hours (and the hours lost by impeding your users) elsewhere, in line with end-to-end thinking.)


You're talking about people, I was only talking about technologies.

> If only there had been sufficiently advanced and non-buggy firewall rules in place

We don't even have to look for ideal or advanced solutions. First forbid spoofed IPs, then kill incoming low port traffic to residential users (but let them opt out of that, just like we already often do with outgoing port 25). This applied globally would kill a large chunk of DDoS attacks these days. But today's ISPs are not doing even that.

I don't know how we can convince people to care enough.


Good firewalls can make good security easier.


I like how the author complains about cyberpocalypse conference talks then goes on to say the security industry is broken...

Hard-coded creds and allowing default creds aren't the security industry's doing; they're the manufacturer's.


"You didn't fix this problem a third-party created! You are at fault!"


This could be fixed by legalizing purely destructive hacking of IoT devices. To gain immunity from prosecution the hacker would need to demonstrate that the device is completely bricked and no remote access is possible. IoT manufacturers would then be able to post bounties for destruction of competitor's products and the free market would solve the problem very quickly.

This will result in harm to third parties who did not act maliciously, but that's already happening now. With this change in law the total harm will probably be less because the problem will be solved for real, which will dramatically reduce or eliminate the possibility of "black swan" events causing very serious harm (eg. shutdown of critical infrastructure).


"As long as the thief drives my car straight into the ocean, it's ok for him to take it"

I'm glad you're thinking outside the box, but if that kind of "immunity" were ever to be authorized in an emergency (attacks on power grids lasting hours or days), it should only be exercised by the government, with a warrant, and with the understanding that people may die or lose property due to the sudden, public destruction of millions of devices.


If you wait for that emergency then it's already too late, because any attacker competent enough to carry out that attack is likely competent enough to close the vulnerability they used to get access.


I don't understand why the lack of security in embedded devices is an indictment of a 'security industry.' That's like 'drunk driving proves failure of the seatbelt industry.'


The Mirai botnet doesn't have anything to do with the "security industry", largely because the vendors involved ignored each and every recommendation made by said industry since at least the 90's if not earlier.

The blame for this debacle falls squarely on the heads of the vendors who produced these trusting (if not downright gullible) devices in the first place.


> The major botnet of 2016 is simpler than the botnet of 1988.

That, right there, is a damning indictment not only of our industry but also of our culture. We know how to secure systems. It's not magic. But — unlike for example physical hygiene — we haven't made the decision to make computer hygiene part of our culture. We look down on people who don't wash their hands, but we don't look down on people who use poor passwords. We teach children to cover their mouths when they cough, but we don't teach children not to plug a Windows machine into a network.


"That, right there, is a damning indictment not only of our industry but also of our culture."

Who is "our culture"?

Are you part of the same "our culture" as the people who made and shipped these things?

Since you're pretty much rhetorically constrained to answer "yes"... would those people agree with the answer you give?

The culture I am actually part of is not perfect, certainly. The people who are in my culture still write the simplest cross-site-scripting vulnerabilities like they get paid bonuses for them. But the vast bulk of the people involved in the production of the hardware that the Mirai botnet took over would probably not agree that I am in "their culture", regardless of what I claim about it, and they'd probably feel the same way about you, is my guess.

Even if "we" do everything you would suggest, perfectly, Mirai still happens. Therefore, those suggestions can't be the solution. We must still address these problems via other mechanisms.


I don't necessarily agree. Yes, we teach children to wash their hands, but first of all there's always a threshold over which people stop giving a damn. How many people actually follow their doctors' advice about diet and lifestyle? Heck, how many people actually take their medicine as prescribed? And there's also a danger in the other extreme. It's pretty bad that many people these days are self-medicating, often with antibiotics, or avoiding vaccines - and it's even worse when they do it to their children.

Something to think about. Don't get me wrong, I'm all for education. And just like sexual education, I'd introduce classes about online privacy in elementary school.


In this case, it's more like we look down on people who don't wash their hands, but don't care if the companies making soap also handle raw sewage. The tragedy of the modern botnet is that it wasn't even the fault of dumb consumers, but companies so careless they didn't give people the ability to set passwords in the first place!


Worst of all, they think people who haven't learned about security deserve what happens to them...


But when you tell someone "That's not secure, you can easily get hacked. You need to [insert good security practices here]", what response do you get ?

In my experience, most answer along the lines of "So what? What could they get? I have nothing important." or "Why would anyone ever hack me?" or "But I have an antivirus, doesn't that make me safe?".

And then I spend the next 15 minutes explaining to them how things actually work and why they need to take it seriously, and offer to help. 9 times out of 10, they never reach out. And it's not their fault, but the way security in general is perceived.


I think it's just a general misunderstanding of what privacy means. I've explained several times and even convinced a few people that just because they think they have nothing to hide, they generally do have something they don't want someone to know about or see. At best they will just revert back to the "I have nothing to hide" mentality after a week. I think people outside of tech just don't see how damaging it can be when you lose privacy.


You implement security in order to have privacy, and I agree it's poorly understood in the digital realm, mostly because it's "out of sight, out of mind". I like to use an analogy (I can't remember where I picked it up) and reductio ad absurdum to get them past this automatic response, because that's what it is, and it's rooted in the horrid and dangerous "nothing to hide, nothing to fear" saying.

- The usual conversation - I ask them: "Do you have curtains?" and they say: "Yes, of course", and I ask: "Why? I mean, you have nothing to hide, right? What does it matter if someone can see what you are doing inside your house?". Usually they freeze for a second: "Because it's creepy". I continue: "Well, if it's creepy that someone would watch you in your house, isn't it just as creepy if they watched you online - what you read, what porn you watch, what you talk to your friends about? Which do you think tells more about who you are?". At this point silence and an increasingly worried look is the norm. I keep going: "It's not about hiding anything, it's about what is private. Otherwise why not tell everyone your darkest secret, your greatest fears, the thing you are most ashamed of doing in your life? And that's why you should do [this or that]."

But even so, it's true most default back quickly. Still a few call, ask, improve their practices. People only seem to take it seriously after they have been directly impacted in a powerfully damaging way.

Edit: I have obviously had this conversation enough times to make this script in dealing with it. If you have to do it more than twice, automate it. :)


For the sake of argument, why don't they deserve what happens to them?

Most people don't understand how their car works. But if you own a car and you neglect to change the oil for 50k miles, or you put diesel into your tank and ruin your car, we don't blame the automotive industry for not informing you on proper maintenance. Just like with computers, the information is out there, and it's not the job of the automaker to make sure you know it.

Why are computers different?


However, if it is a very widespread problem then they will eventually install a light on your dash to notify you to change your oil. My wife's car currently does this. Since this is the first car she's ever owned, it's good, because I don't think she would have known. We also have check-engine lights and indicators for when a turn-signal bulb stops working. New cars even keep track of things like tire pressure.

I expect the mechanic not to take advantage of me and instead to offer me fair service at a fair price. Luckily, enough people have a natural interest in cars that the competition keeps prices down.

My argument is in a similar vein to those who aren't physically fit to win a fist fight. It doesn't mean they deserve to be pushed around, robbed, or beaten just because I'm stronger and a better fighter.

No matter the analogy, if someone doesn't know better it doesn't automatically mean they deserve it. People's lives are busy and complicated, and not everyone starts on the same ground or with the same capabilities.


> However, if it is a very widespread problem then they will eventually install a light on your dash to notify you to change your oil. My wife's car currently does this. Since this is the first car she's ever owned, it's good, because I don't think she would have known. We also have check-engine lights and indicators for when a turn-signal bulb stops working. New cars even keep track of things like tire pressure.

This is getting off subject, but I'm of the opinion that this trend is primarily motivated by locking people into a dealer for maintenance, not helping people maintain their vehicles. For example, I think BMW dealers are the only ones who have the ability to calibrate tire pressure sensors on bimmers, and some new cars are abandoning OBD-II ports.

http://www.roadandtrack.com/car-culture/a30505/new-car-servi...

> My argument is in a similar vein to those who aren't physically fit to win a fist fight. It doesn't mean they deserve to be pushed around, robbed, or beaten just because I'm stronger and a better fighter.

I don't think it's fair to compare these things. Of course no one deserves to be assaulted. And likewise, if someone sabotages your car or has a remote exploit for your computer, I find it hard to dish out blame. But beyond this, I think the only person who could possibly be responsible for the condition of their possessions is the owner, and I don't see why computers should be any different. Not knowing better or being too busy is not an excuse to be a party to a DDoS attack.

You bought this computer, you plugged it in, and it was setup in a way where it was able to receive signals that made it send out signals that violate the contract you signed with your ISP and violate the laws that your representatives passed. "I didn't know" isn't an excuse in any other comparable situation. Just because computers are hard doesn't mean we should rework our entire legal framework. We shouldn't codify into law the idea that some subjects are obvious and should be enforced consistently, while some are beyond understanding (for most, for now), and ignorance is a viable excuse. It will inevitably become outdated.

I'm held responsible if I have an old car rusting away in my backyard and it pollutes my neighbors well-water. And sure, it's harder to claim ignorance about a rusty car than it is to claim ignorance about a misconfigured computer, but I think the law has to be impartial about that.


> And likewise, if someone sabotages your car or has a remote exploit for your computer, I find it hard to dish out blame.

> Not knowing better or being too busy is not an excuse to be a party to a DDoS attack.

I feel you contradicted yourself here. In one way you excuse it, but you also claim users should know better. When it comes to having a strong password, I feel this is where it's acceptable to place blame. But when, for example, they go to their favorite website and it serves a malicious ad that takes advantage of the latest exploit, can they really be blamed? Average end users expect their smart thermostat to give them the capabilities advertised. Some can't even imagine that it's basically a small computer.

> that violate the contract you signed with your ISP

How many people really read these agreements? It's assumed that if your computer works it's in spec. Whatever arbitrary clause they came up with to allow them to legally track your every move is a different conversation.

I won't delve into throwing analogies back and forth. I'll just say this: I know plenty of doctors, people who are much smarter than me, who don't know a thing about networking. They're running their own practice and are concerned with staying up to date and not getting sued. That's their job. Building software that is easy enough to use while keeping people secure is ours. There's no excuse; it's hard, and it's yet another aspect programmers need to learn. But it rests on our shoulders.


Sorry for the late reply. I think I did contradict myself there. And I don't see any way I could fix that contradiction.

There might be a meaningful difference to me between a remote kernel hole versus using a default password, but for most people there is no difference there.

So you've changed my mind, to an extent. I don't think that we should "blame" them, but at the same time, if you entrust a large part of your life into computers and are not aware of the risks you're putting yourself in, I do think you deserve some blame for believing the advertising pitch without researching on your own-- and that kind of blame is relevant for everything, from cars to tablets to vacuum cleaners. I think doing your due diligence is relevant to any topic, and people who don't put it in will reap what they sow. But that blame is more superficial-- you shouldn't have to become a mechanic to buy your car and you don't have to be a programmer to buy an IP camera.


I guess it's important for some people to keep flogging Microsoft hatred but what does Windows have to do with anything here?


Yeah, no.

Mirai doesn't have shit to do with the security industry. The security industry are the people who you hire to secure your things, victims of Mirai did not take advantage of the services provided by the security industry.

More like, The Mirai Botnet Is Proof the Security Industry Is Going To Be Doing Fucking Great


Amen, the Security Industry is doing great, security is doing poorly.


On the very sub topic of "we don't know how to write secure code"; yes, we actually do.

Of course we know how to write secure code, code that meets a rigorous and well engineered design that eliminates invalid outcomes as a result. The problem is such code is slow and expensive to produce.

Good, Fast, Cheap; pick (at most) two. Security cameras optimize for Cheap first and Fast second, so of course we see issues like this.


"Seatbelts don't work!" Says widow of man killed in car crash while not wearing a seatbelt.


Maybe we need liability for software vendors? With exemption for those who provide full source code.


Maybe we need liability for software vendors?

That's a common suggestion, but since no-one knows how to make completely secure systems yet, I don't think it's that simple.

If you're talking about a general presumption that anyone selling software that has a security vulnerability becomes liable for any consequential losses, then it seems likely to result in only large businesses with the war chest to fight a liability action being able to make any sort of remotely risky software and/or in a new insurance industry popping up so that the problem reduces to money and the cost is ultimately passed on to software users in higher prices. While there might be some pressure to improve security as a result, the negative side effects could be far worse for the software industry as a whole.

The next logical step is some sort of penalty for gross negligence or a repeated pattern of failures, where a supplier making reasonable efforts and following generally good security practices isn't at risk of being sent under instantly because of some new type of 0-day that no-one had seen before. But then you have to figure out what constitutes good practice and paying due care and attention, and that in itself is not an easy issue.

With exemption for those who provide full source code.

I don't see why that should make any difference. Having access to a huge amount of source code is only a benefit for security if you have the skill and resources to perform a detailed audit of your own, and if it's practical to spend that kind of time and money, and if you also have the authority to do something useful about any vulnerabilities you do find.

If someone is giving software away for free as a kind gesture, that's one thing, but I don't see why anyone supplying software on a commercial basis should get out of jail free on security just because they provided source code access. The FOSS world provides ample evidence that many eyes do not, in fact, make all bugs shallow.


> and/or in a new insurance industry popping up so that the problem reduces to money and the cost is ultimately passed on to software users in higher prices

The price of providing a basic level of security should be priced into the product! The fact that someone can go out there and buy an IOT camera that will be used to DDOS my server is a negative externality that constitutes a market failure. I want sketchy IOT manufacturer 32XB123 to be forced to buy liability insurance for that.


The price of providing a basic level of security should be priced into the product!

The question is whether that is all that would be priced in, or whether the insurance industry, given a rich new feeding ground, would charge huge rates for many types of device in case of catastrophic failure.

My car insurance probably costs me several times the value of my car each year, because I am required to have cover for third party losses as well. That remains true even though I've been driving for a long time and never made a claim so far, because in the nature of insurance, they are guarding against the relatively rare possibility of a relatively high payout.

What happens when your $100 office software package now has mandatory insurance in case each installed instance costs the business $200 in lost revenues from downtime after a breach, or $2,000 in average compensation when a vulnerability leads to personal data being illegally disclosed?


> That's a common suggestion, but since no-one knows how to make completely secure systems yet

We also do not know how to make completely secure cars, but still car makers are liable for faulty construction.


But not for someone who robbed a bank and used the car as a getaway vehicle.


A proposal I saw and liked was liability for software vendors based on what they charge for the software, so open source software doesn't have the problem, but people who bundle a load of open source software together, slap a management interface on it and charge loads of cash for that, do.


What if I give away the software and then sell support contracts?


X writes secure code, Y writes secure code, Z integrates both parts in a secure way. X then releases an update that is secure on its own but creates a race condition with Y, making elevated privileges possible in Z's product. Who's liable for the issue now?


A hard problem - but what if that race condition kills someone? This used to be a theoretical problem - there were not so many systems that could fail in such catastrophic ways - but we are now putting software into everything.


So if I give away some software for free-as-in-beer, but without source, I'm liable.


How about a law that requires computerized devices to be shipped with unique passwords?

That would be a start.

Second, any computerized device must pass FTC/FCC/UL (pick one) tests for computer security before going on sale.

There's more that can be done, but let's go after the simple stuff first.


If you want to go after the simple stuff then blocking significant outbound traffic at the ISP level from a home user account until they agree it's something they want to do is the most straightforward solution. No need to change much infrastructure, no need to test devices, and no need to have costly manufacturing processes. You could even let specific traffic through (Facebook live streaming, online gaming services, etc).


An attack can be quite effective even if the traffic from a single home is insignificant.

I don't need ISPs to mess with my connection any more than they do already.


An attack can be quite effective even if the traffic from a single home is insignificant.

In a recent attack there were 145607 devices sending >1.5Tbps[1]. That's a lot from individual connections.

(From: https://twitter.com/olesovhcom/status/779297257199964160 )


About 1.3 MB/s per device on average (1.5 Tbps across 145,607 devices is roughly 10 Mbps each) is not that much.

Even so, this one wasn't a particularly sophisticated attack. Even if we could weed out this particular type, many DoS attacks rely on amplification or an asymmetrical computational cost with respect to the size of the request. So if we concluded that 1.3 MB/s of HTTP or DNS or plain TCP were a lot and managed to block these senders, it would be far from a universal solution against DDoS originating from domestic hosts. And there's reason to believe we'll see much larger botnets unless manufacturers of IoT devices start taking security seriously. So tomorrow 0.13 MB/s per home could be plenty.

What cost would this blocking incur? How often would I have to call my ISP because they blocked perfectly legitimate traffic? Unfortunately maybe not that often, because after the first few calls I'd probably figure that they are too big to give a shit about my problems.


That sounds like a terrible idea. You think every end user, everywhere, needs to manually verify with their ISP for every single outbound connection? How could that possibly be an acceptable solution?


I think you misread or misunderstood what I meant by "blocking significant outbound traffic". 99.9% of what home users do doesn't generate significant amounts of outbound traffic. If someone typically browses the web and sends a few emails, and then their traffic starts running at 20 Mbps on an undocumented port, blocking that traffic until they verify what it is wouldn't be a bad idea. It wouldn't affect most users.


Almost, it would be a better idea to block incoming telnet traffic. I am having trouble coming up with a good reason for having telnet open to the internet.
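
(To make the exposure concrete: checking whether a given address even has telnet reachable is a single connect() probe - a crude stand-in for what large-scale scanners automate far more efficiently with raw SYN scanning. A rough sketch, using a documentation-range address rather than any real target:)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Is TCP port 23 (telnet) reachable on a given IPv4 address? */
    static int telnet_reachable(const char *ipv4)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(23);
        if (inet_pton(AF_INET, ipv4, &addr.sin_addr) != 1) {
            close(fd);
            return -1;
        }

        int reachable = (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0);
        close(fd);
        return reachable;  /* 1 = open, 0 = closed/filtered (after the OS timeout) */
    }

    int main(void)
    {
        /* 192.0.2.1 is a TEST-NET documentation address; substitute a host you own. */
        printf("telnet %s\n", telnet_reachable("192.0.2.1") == 1 ? "open" : "not reachable");
        return 0;
    }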


Not sure that solves the problem either, perhaps UK has more stringent laws, or perhaps the US does. But if it's not universal a sufficiently large market can still be exploited to attack another. The internet is global. We need global regulations.


Agreed. Given consensus will take forever, I'd like to see the US or EU, somebody at least, take the lead. Over time this will probably get worked into trade agreements anyway.

But if we wait 10 years, it'll be too late, and we'll have killed the internet or given into draconian measures to stop the problem - which is what none of us want.

Frankly, I don't even care if a given government has great security review - as long as they put a process in place it's a start that can be improved upon.


The law just needs to apply in a large enough region. Here in the US we got high efficiency switching wall warts instead of the slightly cheaper ones that idled away $4/yr because the EU demanded them, so manufacturers had to upgrade.

Similarly with lead-free solder.


If the US or EU did that, it would still be a very good start simply due to the size of those markets.


That'll work until the database of device credentials is breached from company X which provides call center support services for manufacturers A through Z and needs said credentials at hand.


Pretty much every home router sold in the UK is shipped with a unique default wifi network name and password printed on a sticker and stuck to the device.

Manufacturers don't need to keep a credential database, as legitimate users can simply look at the label.
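
(The factory side of this is genuinely cheap. A minimal sketch of a provisioning step - illustrative only, not any particular vendor's actual process - that draws a per-device password from the OS entropy pool, to be flashed into the unit and printed on its label:)

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* No look-alike characters, so the sticker is easy to type in. */
        static const char alphabet[] = "abcdefghjkmnpqrstuvwxyz23456789";
        unsigned char rnd[12];
        char password[sizeof rnd + 1];
        FILE *f = fopen("/dev/urandom", "rb");

        if (!f || fread(rnd, 1, sizeof rnd, f) != sizeof rnd) {
            fprintf(stderr, "no entropy source available\n");
            return EXIT_FAILURE;
        }
        fclose(f);

        /* Modulo bias is acceptable for a sketch; use rejection sampling in production. */
        for (size_t i = 0; i < sizeof rnd; i++)
            password[i] = alphabet[rnd[i] % (sizeof alphabet - 1)];
        password[sizeof rnd] = '\0';

        printf("default password: %s\n", password);  /* goes on the sticker and into flash */
        return 0;
    }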


Which gives the appearance of security, but do any of those routers have backdoors, or superuser accounts with default credentials, or access via remote management (TR-069?) that's not properly set up? Should we shoot so low as just to get unique passwords?


That is a failure condition, but it is strictly better than what we commonly have now: universal default passwords.


The success of Mirai is hardly the fault of the security industry. The security industry has been howling about lax default device security for decades, and how dumb it is to put your TV directly on the Internet, much less your refrigerator or your lightswitch. The electronics industry is the correct target.

The only way out of this mess is regulation of what types of devices can be sold and how they must be secured. The electronics industry and online retailers need to get together and figure this out and come up with a UL for IoT, or the government will step in and make them all a lot more unhappy.


Great example of how to promote your company. Provide genuinely insightful and useful information that will help people even if they don't use your product. It's almost like good karma.


I prefer when people do that without trying to discredit a whole industry which wasn't even involved in the problem. If anyone remotely interested in security had been included in the IoT production process, we wouldn't be talking about Mirai.


It's actually proof that internet architecture in general is broken. Well, not broken; it was broken, and then healed in a weird way, so there's extra cartilage sticking out causing annoyances, and it won't move as easily anymore.

The security industry has absolutely nothing to do with the existence of a botnet that can take down massive internet infrastructure. The security industry just puts bandaids on shitty products. It's the internet architects/designers that are responsible for botnets.

In order to make the internet very simple, very compatible, and decentralized and distributed, the design allows a baby monitor to send arbitrary traffic to any device on the global network. There is no good reason for this. The reason is, anything else would be complicated, and complicated things become expensive and troublesome. But that's not a good reason to allow baby monitors to take down internet services.

The solution would be to segregate critical equipment address and protocol by function, and to put in strict controls in all routers to prevent illegitimate traffic from reaching the wrong equipment. This would not only improve security, it would make allocation of address space and application ports make some kind of practical sense, and allow for improvements in the way applications communicate over the internet, to say nothing of improved management of traffic.

But nobody's going to change the design, so whatever.


The thing is, that's the opposite of the net neutrality world; it's the telco world, where the monopolist gets to segregate your traffic and charge arbitrary prices for it.

I'm old enough to remember when UK modems had to be "BABT approved", adding considerably to the price. See (1993) https://groups.google.com/forum/#!topic/uk.telecom/6j1bVHcq1...


We already know what customers are sending what traffic to what providers; net neutrality is purely a political thing, there is nothing technical stopping it. Baking the differences in traffic into the address space and protocols would just more clearly define what we already define very loosely with things like port numbers and loose firewall rules.


What "differences in traffic" do you mean exactly? Who gets to decide them - that's a political thing, no?


Traffic in general can be (roughly) summarized as application, infrastructure, and signaling. On top of this, it's clear that different address space is used by different organizations for different purposes. Classify the traffic based on these differences and carve up address space to suit the differences, and perhaps differences in the transport protocols that match the practical differences in how the traffic is used.

For example, bgp traffic shouldn't work on non-routers. Certain signaling (icmp and udp traffic flags for non-peer traffic) isn't needed by most customer equipment. And it's stupid that IP spoofing works at all, much less on robust servers on internet backbones. It's clear there is traffic allowed on parts of the internet it shouldn't be allowed on. Changes could be made to correct this, and no, they are not political.


None of those examples are relevant to the Mirai case, though? I don't think it even relied on IP spoofing. It was just an enormous HTTP flood.


.....source? They hit a DNS provider, and the bot has a dozen different capabilities. In any case it doesn't matter; DoS often relies on spoofing, and the point still stands.


How do you tell illegitimate and legitimate traffic apart?

In many cases the only difference between a DDoS and normal operation is the volume of traffic at the victim host.


I'm not sure, but like I said, separate first by address and function. This could work a hundred different ways. I could give examples but they'd be off the top of my head and not properly designed.


I'm sure you could come up with a hundred different improperly designed ways off the top of your head. And it wouldn't work.

And trying to design it properly, you'd probably come to the conclusion that it won't work (without causing massive disruption and breaking everything we've built so far).


Causing massive disruption is what I'm proposing.


I don't see this as a failure of the "security industry" (I put that in quotes as it's very hard to say who is and is not part of that group)

The simple fact is that there are very limited economic incentives for a company in the IoT space to spend money on security, and as a result they don't.

It's not easy for an ordinary consumer to differentiate between a company who just says "security is our top priority" and one who puts meaningful effort behind that (e.g. there is a strong market for lemons here).

Also there's no effective regulation which could substitute for that information. In other markets (property, consumer goods, food and drink) we have safety regulations as it was recognised that consumers can't effectively differentiate. In IoT and other areas of IT this doesn't exist, so there's nothing to stop insecure devices being sold.

As to the "security industry" well there have been enough practitioners warning about this, to limited effect. Realistically there's a limited amount that can be done without some form of top-down intervention.


Speaking of a voice in the wilderness, any way we can stop saying stuff "is broken?" It's glib, imprecise, far too easy to say, and is becoming cliché. It's provocative kind of like clickbait. And it's self-evident - Everything exists on a continuum of "brokenness" a.k.a. entropy and is therefore at least partly "broken" at all times.


I suspect we've already lost at "Security Industry".

Obviously defence in depth and dedicated security tools have their place in a networked environment, but you can't just outsource the problem or fix it with some bolted on extra.

Some concerns simply have to be addressed as an integral part of whatever software or device is being made. If we don't do that, well, we've just seen the result.


Maybe someone with a public level of accountability--say the government--should start an adversarial inspection and certification program. Think about how we don't let cars on public roads unless they pass inspection, to verify that they aren't a ticking time-bomb in the middle of the highway--or no more so than usual.

Unlike vehicle registration, it wouldn't require you to have to do anything other than keep your system maintained. If you want to put your computer on the internet, be prepared to get port-scanned by the US Digital Service once a year/month/week/whatever, attempting to take your computer off the 'net. If it succeeds, then that's one machine that could have been--but now won't be--part of a botnet.

Chaos Monkey as a public works project.


I'm not a security guru, but I've thought of a couple of solutions to the problem of botnets.

1) A consortium of manufacturers of IoT devices banding together and signing an "autopatch" or "autohack" agreement. This would be an open source, public hack-and-patch society that freezes out any manufacturers that don't agree to it. All customers would simply sign in their EULA that their devices are authorized to be "patched" by any means necessary if found to be insecure by the auto-hackers.

2) As botnets at the Mirai scale are now a matter of national security, make the NSA do its job and do roughly what is outlined in 1. Controversial, sure, but you can be damn sure that they already know about these unpatched devices and how to exploit them.


But it IS dumb programmers (or more likely, dumb programmer management) causing this problem. Every IoT company has the same workflow: you take the thing out of the box, hook up the thing, use your smartphone to connect to the thing with some app, and then it works. Everyone expects this experience, and it's stupid because somewhere there is a hardcoded password.

This is made more asinine by the fact that we've had extremely easy to use methods of establishing trust between devices on a permanent basis, but because that would add three steps to the setup process the marketing people refuse to let it happen.

Nobody wants to spend the money to do it right, and nobody wants to spend the money on devices that do it right so here we are and I see no way out of this situation.


How about both.

Programmers for really stupid shit like this: https://news.ycombinator.com/item?id=12756006

And managers for... well to this day I have found absolutely ZERO use for management.


What "security industry"? In how many companies nowadays is sitting and thinking things through an encouraged approach? It goes against current economic values. The problem is not IoT vendors; the problem is money-driven economics.


Just remember how Feynman described opening military safes during the Los Alamos project (one of the most highly sensitive projects of WWII): 25% of the safes had default combinations.

I guess we can draw a conclusion here: security assumptions about who the users are are not in sync with human nature.

Security is failing the same way architects would fail if they assumed stairs with one-meter-high steps are okay.

IT security is failing because its model of human beings is plain and flat wrong; hence, computer security as designed by our brightest minds is wrong.

Don't force-feed humans requirements that assume a level of care they simply don't have.


The security industry? Try the software industry. We produce software and systems that are insecure. Until someone assigns a cost to failing to provide secure software and systems it will continue to happen.


I have observed that "proof" should be translated as "evidence", and I generally think such article titles lead to pointless look-at-me hyperbole. Authors who fail to understand the important difference between those words will likely have nothing critically interesting to add to most discussions.


If someone reputable was testing all these devices for such basic flaws and published the results, then IT managers could use that to back their buying decisions. That would actually cause vendors to listen. I don't think it is impossible to do, or even monetize such service.


I think it's erroneous to blame the security industry wholesale, tempting as it may be.

Let's set blame aside for now. What caused this botnet?

  - The tendency of IoT/smart-device vendors to eschew engineering discipline
  - The tendency of _all_ companies to treat security as an optional extra
    rather than as the cost of admittance to the marketplace
    - The historical tendency of big companies /not/ being burned to the
      ground after a massive hack makes security a lower priority to
      many businesses
  - The lack of a secure automatic update infrastructure (which also led to a
    recall), with which the vendor could have mitigated the vulnerabilities used
  - General ignorance about the risks associated with default/weak/hard-coded
    security credentials (e.g. passwords)
Now let's look at each line item and discuss possible solutions:

  + Regulation could help here. Require third party security assessments on
    IoT/smart devices to be sold? It's not the most elegant solution, but it
    would be a vast improvement over the current state of affairs.
  + This is a cultural problem that makes application security painful in
    every business vertical. It takes a lot of one-on-one communication to
    resolve. Seeing large companies lose their shirts over security negligence
    might change the conversation.
  + This is a huge problem for all software. (See link below.)
  + Education.
Regarding secure automatic updates: https://paragonie.com/blog/2016/10/guide-automatic-security-...
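
(One common shape for that, sketched here with libsodium's Ed25519 detached signatures - the function and key names are illustrative, not from any particular product: the device ships with the vendor's public key baked in and refuses to flash any image whose signature doesn't verify.)

    #include <sodium.h>
    #include <stdio.h>

    /* Verify a detached signature over a downloaded firmware image before
     * applying it. Returns 0 only when the image is safe to flash. */
    int verify_firmware(const unsigned char *image, unsigned long long image_len,
                        const unsigned char sig[crypto_sign_BYTES],
                        const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES])
    {
        if (sodium_init() < 0)
            return -1;  /* library failed to initialise */

        if (crypto_sign_verify_detached(sig, image, image_len, vendor_pk) != 0) {
            fprintf(stderr, "rejecting unsigned or tampered update\n");
            return -1;
        }
        return 0;
    }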

Now let's circle back to blame. What is the security industry responsible for? In my view:

  - Failure to communicate with other industries and professions,
    such as electrical engineering.
  - Failure to communicate with developers in general.
  - Failure to educate people outside the industry of our own
    conventional wisdom.
  - Failure to learn the challenges that others are trying to overcome
    so security can be on the same team rather than yet another obstacle.
Through the blog posts on my company's website and a concerted effort to clean up Stack Overflow, I've been trying to educate PHP developers about better security practices for the past couple of years. It pays forward in spades. The rest of the security industry could do a lot of good if they did the same for their own respective communities.

The only problem with doing that is: There's no effective and ethical way to monetize it. I make more money from helping e-commerce sites recover from being hacked by easily preventable mistakes than I ever have from making the software that powers 30% of the Internet more secure. https://paragonie.com/blog/2015/12/year-2015-in-review

Solving the core problems is good for society, but society doesn't reward this behavior.

The security industry is broken because society is broken.


> Solving the core problems is good for society, but society doesn't reward this behavior.

I'd like to think that we can solve collective action problems like this as a society and be rewarded for it.

We have to figure out how to fight this problem the way we fought smallpox and polio.


Large international state-funded scientific collaboration?


Sure, I'd like to see CERT get more funding...


Though I agree with you and really admire what you're trying to do, I find most security researchers admire their "rock star" status too much to care about the rest of the industry.

Maybe a good starting point would be an attitude change?


I probably know too many folks who don't have that "rock star" attitude to see it as an immediate problem, and therefore am not qualified to provide an informed suggestion here.

But I would agree that, where the attitude does exist, it needs to be changed.

Fewer rock stars, more janitors/mechanics.


The way I see it is that we are using general-purpose computers to do the job of single-purpose electronics.

But a GPC will always remain a GPC, and thus they are susceptible to being re-purposed no matter the number of "safeguards" we put in place to prevent it.


The basic idea of the state distributing policing to vendors, who are supposed to apply it as they see fit after the customer-trader relationship has already ended, is broken.


Everyone seems to be making money to me?


The free market has decided security of IoT doesn't matter.


Markets consistently under-assess complexity, in both value and cost components.

Simple systems are easy to assess and communicate. Complex systems are hard (expensive) to assess and communicate. This results in several asymmetries:

1. Complex systems are communicated in an oversimplified mode.

2. Asymmetries exist between buyers and sellers of products (Akerlof's "Market for Lemons").

3. Asymmetries exist for all parties over time in realising the long-term costs (or benefits) of systems. In the most pathological instance, a party (or parties) actively frustrate the process of widespread awareness of these costs -- lead, asbestos, tobacco, sugar, CO2, etc., etc.

Corollary: security is a complex product.


LANGSEC


There is no cure for weak passwords.


Of course there is, in cases like this anyway. The problem here wasn't weak passwords, it was that all the passwords were the same, so it was trivial to automate the attack (well that and the devices had a command injection vuln.)

That can be fixed by requiring users to set a unique password on setup, or shipping each device with a different password. This exact problem occurred in the UK where ADSL router manufacturers used to ship the same wireless WEP/WPA key on all devices, many got compromised, and now they ship with a unique key per device.


This may be the cure in this case, but unique/automated passwords can go horribly wrong too. I was once the owner of a Vodafone EasyBox, a cheap and crappy router with pre-configured WLAN WPA keys. They looked random enough, but they were a crude mixture of the MAC address and the router's serial number[0]. It turned out that the bits taken from the MAC address (which always landed in the same positions) reduced the unknown part of the key to 16 bits. Those ~65,536 remaining candidates could easily be brute-forced.

I just had the good fortune to configure my wifi network myself, so I had to put in my own keys. Many other people didn't, and who could blame them; they were probably happy the thing worked in the first place...

[0]: https://www.wardriving-forum.de/wiki/Standardpassw%C3%B6rter...
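
(To make the arithmetic vivid - this is a toy sketch, not the actual EasyBox derivation - 16 unknown bits means at most 65,536 guesses, which any machine exhausts instantly:)

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Toy illustration only, NOT the real EasyBox algorithm: if all but 16 bits
     * of a key are derivable from public data (here, the MAC address), recovering
     * the rest is a trivial loop over 2^16 candidates. */
    static void derive(const uint8_t mac[6], uint16_t unknown, char out[9])
    {
        /* made-up derivation: mix the last MAC bytes with the unknown 16 bits */
        snprintf(out, 9, "%02X%02X%04X", mac[4], mac[5], unknown);
    }

    int main(void)
    {
        const uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
        char target[9], guess[9];

        derive(mac, 0xBEEF, target);  /* stands in for the "victim's" real key */

        for (uint32_t u = 0; u <= 0xFFFF; u++) {
            derive(mac, (uint16_t)u, guess);
            if (strcmp(guess, target) == 0) {
                printf("recovered key %s after %u guesses\n", guess, u + 1);
                break;
            }
        }
        return 0;
    }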


Sure, there are bad implementations, but the principle that you can fix this kind of problem is there; you just need to give manufacturers enough incentive to spend the effort to a) implement this process and b) get a competent security review to ensure it's not easily broken.


Really? Assuming the user will choose a very secure password! :) That totally works in real life.

