There's a problem at the China end with crap low-end devices driving out the good ones. Here's a good example: solid state relays, useful little devices for safely switching AC power with a logic level signal. Look at this Fotek solid state relay on Amazon. That's a counterfeit. Fake manufacturer name. Fake UL and CE marks. Here's UL's warning notice on counterfeit Fotek solid state relays, and how to recognize fakes. There are lots of unhappy customers; the fake ones have been reported to overheat, melt, or stick in the ON condition. Every Fotek relay on Amazon that I can find is fake.
The fakes are real solid state relays with grossly exaggerated power ratings. For real ones, cost goes up with power. The fakes all cost about the same regardless of nameplate power rating. Here's an especially bad one: a "100 amp" version. The real Fotek, in Taiwan, doesn't even make a 100 amp version in that form factor - the terminals aren't big enough for 100 amps.
The result is that nobody is selling legit solid state relays on Amazon. They exist; you can buy them through Digi-Key or Mouser. They cost about 2.5x the fake price. But Amazon has been totally conned. (The ones on eBay are fake, too.) Worse, if you're a legit solid state relay maker in China, you have a hard time selling. The counterfeits have pushed the price down too far.
Back to hoverboards. There are now UL-approved hoverboards. They don't catch fire. Heavy pressure on China suppliers worked. That needs to happen with insecure IoT devices.
IANAL, so I wonder if that would actually work. Especially since everyone always attaches the standard no-warranty disclaimers to software (and we're talking about vulnerabilities in the software on these devices).
Maybe we need to give the FCC power similar to the CPSC's: they can issue recalls of unsafe products (or, in the case of the FCC, products that interfere with our communications infrastructure), and they can even have them stopped at the border by customs.
Losing the ability to sell your product in the US is a pretty powerful incentive to get it right.
And then you look on their marketing pages and call their sales people, and they tell you all the particular purposes for which their software is a perfect fit.
If you want cheap SSRs, it may be better to order them directly from, say, LIRRD in China. They make solid-state and mechanical relays under their own name, and have UL certification in their own name. There's a minimum order (40 units), but they will send samples. The prices are about as good as the fakes.
I'm a hobbyist, I have Amazon Prime, and I would like to use components in a project. What's ludicrous about that?
As a hobbyist I buy stuff from eBay, Amazon, and AliExpress all the time, but not for anything mains-powered or safety-critical.
Let me play the devil's advocate:
How can you be sure it is fake? To me the picture looks like the real one. It is also 2.5x the price of the fake one. You also say there are lots of unhappy customers, but it has only one five-star review. If there were unhappy customers, wouldn't there be more reviews?
> They exist; you can buy them through Digi-Key or Mouser. They cost about 2.5x the fake price
Do you have a link by any chance to compare the prices? I didn't find any on either Digi-Key or Mouser.
Also, how are solid state relays related to IoT devices bought in large quantities? Most people buying IoT devices aren't buying raw components, but a finished product instead (Hue, Nest, cameras, baby monitors, etc.)
Here your main point is against counterfeits, but the issue mentioned in the article is not about counterfeits, it's about bad security. Those IoT devices with low (or non-existent) security unfortunately aren't even fake ones.
Read the UL warning notice. Note that the real ones have a bevel in the plastic frame outlining the product label (this prevents putting the label in upside down), while the fake ones do not, even though some have a corner cut on the label. The fakes also say "Taiwan Made", not "Made in Taiwan".
Here's a 5 amp Omron solid state relay in the same form factor on Digi-Key. $24.75. Digi-Key doesn't carry Fotek.
Here's a teardown of a counterfeit relay on Instructables. The author had designed these relays into a larger system and was getting warranty returns of the entire product. It has components sized for maybe half the nameplate current rating. There are complaints on forums from people using these for 3D printer hot bed control, brewing control, freezer control, and "grow rooms". They're a popular way to get an Arduino to switch large AC loads. Under higher loads, some units have been observed to melt. Others failed, sometimes into the ON state.
That is the connection between the two topics. You go on Amazon and no one is selling quality. A lawsuit would kill the crap products and you are left with the certified / branded / or otherwise 'proven' good products.
> Every Fotek relay on Amazon that I can find is fake.
The control points are rapidly becoming retail and payment systems. Suing Amazon, Walmart, Visa, MasterCard, Paypal, etc., for facilitating the commerce of counterfeit and manifestly harmful products seems to be the logical evolution.
Payment processors are another matter IMO, and should not be made into product police or underwriters.
The author of this article compares the complexity of the Mirai code to that of the Morris worm. It's an apt comparison, but not because the state of IoT has gotten worse -- rather, it's because we haven't gotten that much better at securing our code, on the Internet at large, since 1988.
What I'm trying to say here is negligence suits for IoT will probably just bring a lot of negligence suits and not a more secure Internet. I believe they would only address the symptoms and not the root of the problem and at great risk to innovation.
You could have no security and just get lucky and never get hacked. Or you could have great security and just get really unlucky and have a determined hacker. Or you could be spending uselessly and still getting lucky, although you (and your vendor!) attribute your good fortune to the product. This kind of information failure makes it really hard to have a functional and efficient market, even when everyone involved is honest.
I don't have a good solution for this, which is why I hope someone smarter than me brings it up.
You've got events that occur rarely enough to an individual (company) that it's hard to work out what the best course of action is using the limited data that an individual can gather.
If there were, say, a half dozen major security companies that everyone subscribed to one of, and they each published their statistics showing how many of their customers suffered various kinds of security breaches, this market might look much less broken.
You could potentially get to that state by having some major existing insurance companies offer "digital threat insurance", and require anyone who claims on it to be using the security services provided by one of their vetted providers.
Yes, you'd be building the sort of major corporate bureaucracy that HN hates, but you'd also be properly aligning everyone's incentives, and shifting the responsibility for the Hard Problem of determining who was selling real security software to a smaller number of better-resourced people. Perhaps there's a reason why major corporates are the norm in big, serious markets.
Edit: disclaimer, I'm an armchair theorist and have no skin in the game - perhaps this already exists or there's a good reason why it doesn't work
Actually being trustworthy is certainly the easiest way to do this, at least up to a point. But it tends to push people toward public, visible security measures over private, invisible ones, regardless of their relative effectiveness.
It cuts both ways, too. Even if you do everything right, if you do get hacked that trust is gone and no insurance payout can buy it back. And I'm not sure any customer is going to react well to "Yeah we lost your data, but Goldman Sachs claims it's not our fault".
If some company gets hacked, a consumer's data gets leaked, and 3 years later that info is used to steal that person's identity, how is that consumer supposed to determine the root cause?
The real problem is that the effects of bad security are very far downstream from the initial problems in space, time, and individuals affected, so proper feedback to those responsible happens very slowly or not at all.
Like you, I have no idea what I'm talking about, but as OP demonstrated you can do everything right and get unlucky, or do nothing right and get lucky. Sounds perfect for some kind of insurance scheme.
That shit will bring out the snake-oil men harder than anything. It means those peeps will do all they can to get the auditors to think you are more secure.
Instead, have a requirement of some compliance, with penalties for breaking compliance.
You need some kind of incentive that derives directly from the end goal (fewer breaches), rather than some derivative (better standards compliance). Auditors certainly have their place, but we need more than them.
Also, you probably meant 'onus' rather than 'honus'.
I'm pretty sure this is required because the card industry can't insure against risk accurately without a base level of assurance that your company isn't some fly-by-night IT hellhole. The same concept would likely apply towards any security insurance that was put forth. You would need to certify that certain steps had been taken, and certain future actions would not be taken, for it to be valid.
However, if you had an independent entity that rated the security companies' products, that might work. Or the insurance company has a division that rates the security products and provides different rates based on which product a company decides to use.
However, maybe a system like that would have improved at least the prevalence of the most glaring security holes.
* Netsec events are black swans: it's very, very hard to model how often a security breach will occur. One could checklist all the ways we currently know sites get hacked, and would still have to pay out, _because hacking exploits things we don't already know_.
* When a hack occurs, it can happen at scale. Unlike e.g. life insurance, where you have a single payout for hard-to-predict events, the better the hack, the higher the potential for damage, and so the higher the total payout.
These two together mean an IT-security-insurance company might do well for a few years, then file for bankruptcy at the first event that hits it, due to inability to pay.
I think it more likely that it's a few honest, knowledgeable people screaming to be heard above many honest, well-intentioned, but misguided people screaming to be heard above the din of snake oil salesmen.
It's much easier to believe the wrong thing and sell it yourself than it is to be a confidence man. And on the same note, it's much easier to crucify a malicious hacker than it is to ruin the life of a guy who supports his wife and kids who just happened to make the economic call that running everything as root was okay.
To your point on economics, I think the old joke about two guys being chased by a bear applies to the economic mindset. One company is seen tying its shoes (for the sake of the metaphor let's add to the joke that he's otherwise naked), and the other company says slyly "I don't think you're going to outrun that bear." "We don't have to," replies the naked company, "we just have to outrun you."
It's going to be hard to justify the extra 80% effort on the remaining 20% when management and sales only want to go 21% just so they can outlast their competitors.
Makes me wonder who makes the snake oil for the salesmen. I'm trying to say this is a plausible caricature that distracts us from a more complex (and way more interesting) reality.
The developers (not me) are just writing the best product they can. The sales guys are trying to get sales. The pricing guys are trying to make it profitable. And no one is talking to anyone else, so the developers don't know what features sales is claiming, sales doesn't know the developers roadmap, and pricing... I don't even know who sets the price.
What possible incentive do IoT manufacturers have to invest money in initiatives that bring no benefits to themselves or their customers? What possible incentive do users have to follow "proper security protocols", when they can just do something simple/convenient, and if something goes wrong, some random internet website pays the price?
As idiotic as this sounds, the only non-regulatory solution I can think of is for all potential BotNet victims to collectively "bribe" the IoT companies into following proper security protocols.
If there are actual consequences for people's computers and IoT devices being 'hacked', they will start to look for ways to avoid the inconvenience in the future. They might start using better passwords, not installing every toolbar under the sun, demand better products (and accountability) from the companies they purchase items from.
The only way to get some of these problems fixed is to introduce some level of pain, somewhere in the system. I say penalize the consumer who decided to buy the cheapest thing online - turn off their Internet connection when an infected device is found.
If customers or ISPs had tools to find out which devices were at fault then it may be more feasible.
But since we can't really know in advance what product will/won't be secure (or at least it's incredibly difficult in the constantly goalpost-moving security industry), the only way I can see this happening is if, say, Dyn could sue the IoT manufacturers for negligence or something of the like.
If your toaster attacks Dyn, that's an indirect failure. But if your lights decide to shut off permanently because some kid thought it would be funny, you're going to throw them away and buy some dumb lightbulbs.
Short of regulation or bribery, I can only hope that we'll see progress after some IoT products are directly attacked instead of repurposed into botnets.
Well... Perhaps once IoT webcams are used for blackmail - but even that is a much more difficult task than "really big botnet attack really big sites" so the incentive isn't there
I know profit motivated attacks have eclipsed entertainment motivated ones, but that's partly been about the work required to get a 'strong' exploit going. Mirai used an easier access vector than Morris to get full ownership of target devices, so it wouldn't be hard for one person to copy and abuse.
I don't think this is going to become the norm - messing up a refrigerator isn't a lucrative goal - but I expect that it'll happen occasionally if systems stay this weak.
I'm not sure we know what a similar framework would look like with consumer devices. I can see the utility, but I would also worry about regulatory overreach, and giving big brother another point of control to latch onto.
It's also worth noting had the IoT companies simply done the baseline level of security of making the user change the password that such regulation wouldn't even be needed, so forgive me if I'm unable to care about their stake in things.
Is there some other way to achieve the same results without formal regulations - to review bad designs and keep them from being sold, and taking bad security designs/implementations off the internet?
It's been a decade and more since that was even remotely true (if it ever was). Back in 2008 the average survival time of an unpatched internet-facing XP box was around 4 minutes. Pwning an unpatched XP box was (very marginally) harder than pwning a system with no security at all. The Morris worm (Edit: which is heavily mentioned in TFA, my bad) got loose in 1988. There's some interesting stuff regarding whether such risks are comparable with things we use insurance to hedge.
Yep. And that's just the average. It was much, much lower for high-value targets, such as universities. The first thing we did at my Network Security class back in 2006 was to hook up an unpatched XP machine to the Internet. It got pwned in about 30 seconds.
Wouldn't multiple attackers have to be effectively flooding the network with pings or service/port access attempts to find a new computer so fast?
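Not necessarily flooding, just constant random scanning of the whole IPv4 space. A back-of-the-envelope sketch, where the bot count and probe rate are my own guesses rather than measured figures:

    # Expected time for a random-scanning worm to probe any given IPv4 address.
    ipv4_space = 2 ** 32            # ~4.3 billion addresses
    bots = 100_000                  # assumed number of scanning hosts
    probes_per_second = 1_000       # assumed probe rate per host
    seconds = ipv4_space / (bots * probes_per_second)
    print(f"~{seconds:.0f} seconds to reach any given address")  # ~43

Under those assumptions, a fresh box gets found in well under a minute without anyone targeting it specifically.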
You'll still get hacked, but you'll be far better off.
Any better ideas of how to structure a bounty to get bugs and not confused users?
But in the security industry there are multiple levels of this problem:
- The end user doesn't generally know how hardened the product he buys is
- The manufacturer is rarely certain how good the security auditing/services he is buying are
- How much to invest in securing a product is not an easy decision.
I'm of course simplifying a lot here, but you asked for the economics of those problems, and hopefully this was interesting to some.
That's the problem. Security is either the default, or ignored.
Makes me feel bad for the competent security researchers having to share their field with glorified PowerPoint designers.
The first thing that comes to mind is a governmental agency, but then that opens the government/industry-watchdog-collusion can of worms.
I'd like to think that the majority of people involved with the industry in some way have good intentions, but once you start involving sales, marketing, and certain executives...
At the end of the day a few people who can't afford super lawyers go to jail, and meanwhile ubercorp producing cheap routers with bad security continue with business as usual.
It doesn't cost more to not write a SQL injection vulnerability. It just takes a programmer who has a basic understanding of internet 101, and who, while writing any line of code involving user input, will ask himself "how will these assholes use this to fuck with my system?". As long as one line of code can take down your whole infrastructure, and unless all devs of anything serious have a minimum competency level, we are doomed to continue on the current path, with a major data leak pretty much every week.
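For what it's worth, the safe version is no longer than the vulnerable one. A minimal sketch using Python's bundled sqlite3 (the table and column names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical schema
    user_input = "'; DROP TABLE users; --"

    # Vulnerable: splices user input straight into the SQL string.
    #   conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # Safe: parameterized query; the driver treats input as data, not SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] - the injection attempt is just a weird name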
So it will probably take a combination of new, safer programming languages and minimum proficiency levels, enforced with regulations. I don't like it, but I don't know a better solution, and the status quo is unacceptable.
> It just takes a programmer who has a basic understanding of internet 101
Most likely this programmer will cost more than a $10/hr contractor from a third-world country that worked on the software before.
On third world country programmers: first, I wish only third world country programmers wrote SQL injection vulnerabilities and other easy-to-avoid mistakes. Then, if you need to audit your accounts or get some engineers to review the structure of your bridge, you will insist on a good reputation / demonstrable competency. If vulnerabilities can land you in jail, you start looking at quotes from software vendors in a different way.
Good, then maybe you can see a path forward to stop arguing the opposite?
Yes, it is possible to pay a lot for a little. Developed country, less developed country, wherever. It remains, nevertheless, relatively less expensive to hire an inexperienced coder than it is to hire an experienced one, who has a greater likelihood of being security-conscious. But no formal mechanism prevents the inexperienced coder from finding work cranking out unreviewed programs.
Normal folk want to consume new gadgets because that's the culture we have. So it's a race to put new gadgets with new features in front of people. Sure, as a customer I could insist on my manufacturer having taken security seriously and having their products thoroughly tested and reviewed and hardened and patchable and all that good stuff, but then I'm going to have to pay more money for my gadget than my buddy here who just wants to be able to flush his toilet from his smartphone.
There is literally no consequence for manufacturers of poor quality products where the impact doesn't directly hit their own consumers, and so there's no market force that is going to address this.
When viewed this way, it's a classic case of where we need government/legislature involvement.
For example, the NSA's security -- not some underfunded, minor agency, but the NSA itself, the world's leading cybersecurity agency -- has had its security breached on a large scale basis, multiple times.
And that is just the beginning of the very long list. It's not unique to the US, either; other governments are the same or even worse in terms of security.
Given that government cannot even create working policies to secure systems that it directly controls, even in agencies with practically unlimited budgets and the strongest possible security mandate, how on earth can it be expected to create policy for anyone else, to supervise systems it does not even control, for commercial users with tiny or no budgets?
Issuing nice-sounding legal regulations that say 'go forth and be secure' will accomplish nothing.
How exactly would you insist on that? Ask them? Aren't they going to tell you, "Yes, it's very secure, no worries"?
How about "show me three different independent security audits by researchers or firms I trust who didn't find major issues in your product"? Sure, there needs to be a sizable group of people demanding that (and be willing to have it be the difference between a $500 and a $5K smart TV), but it is possible. For corporate IoT in certain settings, it might even be plausible.
Note that open-sourcing the firmware would go a long way toward building a better world: less spying, more upgradeability, more confidence in the tools, easier pentests, and a legacy of new code for future generations.
Mostly because, as Mirai shows, the costs are external to the consumer of the broken device.
Don't get me wrong, there are tons of ways in which the security industry fails (the biggest IMHO is buying/selling things that only get implemented in a half-@$$ed manner or not at all), but this is like blaming the Airline industry for a train wreck.
Perhaps the real problem is that for home users there really is no security industry to speak of? A handful of features on WiFi APs that get turned off if they break your Xbox games, and maybe some desktop AV. That's pretty much it - and I'm not sure we can ever expect much more.
At the consumer level, I think users really need help from their ISPs. My provider (Cox.net) already emails me if they detect outbound activity matching virus activity. Though I can't say it doesn't make me at least a little nervous about their inspection of my traffic habits, this ISP level intervention is at least something concrete that could happen in the near term to blunt these types of attacks.
For enterprises, I see the big failing as mostly the focus on external threats (APT + scary hackers) with no consideration for the much greater danger of internal threats [1]. This is the "dumb" stuff like someone quitting to move to a competitor, and the day before they give notice they copy every file off the file server to a thumb drive "just in case they need it". Or even the new scary forms of user-assisted ransomware.
[1] The guy that runs HaveIBeenPwned.com has a free course on this I've been sending around, https://info.varonis.com/the-enemy-within , that explains it in terms an executive might understand.
If anything this is proof that the security industry does work, these attacks are happening on devices where there is no security budget - not on servers with large investments in security.
In the case of Mirai it's not even a cost issue, just lacking good practices.
These are written by outsourced developers who don't know anything about security. They wouldn't even think to develop something as simple as that.
You are obviously unaware of how this works: companies would have to hire consultants/penetration testers to assess the product first. Then they would spend even more money making the changes suggested by the consultants. So it would cost a lot of pennies, actually.
In your app you already have a setup wizard, right? Add one more page at the end: "Hey, we're almost done! We just need to make sure your device is secure. Please choose a username and (strong) password."
Edit: Because if you have a login, you already have the components in place; you are not developing a new feature.
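To make that extra wizard page concrete, here's a minimal sketch of a first-boot flow (the function names and the strength rule are my own invention, not any vendor's code) that refuses to bring up network services until the factory default is gone:

    DEFAULT_PASSWORD = "admin"  # the factory default being retired

    def is_acceptable(pw: str) -> bool:
        # Illustrative policy: reasonably long and not the shipped default.
        return len(pw) >= 10 and pw != DEFAULT_PASSWORD

    def first_boot_setup() -> str:
        # Loop until the user picks something acceptable; the device only
        # starts telnet/HTTP after this returns.
        while True:
            pw = input("Choose a (strong) password: ")
            if is_acceptable(pw):
                return pw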
This one simple design choice would have cost very little, both in development time and in increased support costs, because support is a cost center that scales with your user base and your knowledge base. Obviously not pennies, but still small costs.
There is the classical point of diminishing returns on security investment; the problem is that most IoT products sit far to the left of it, near zero investment, and at that point small investments and a few smart design choices would yield significant returns in security.
And with developers that's exactly what I don't get. How has it not become internalized that allowing users to run the default user/pass combo is a very poor idea? I'm not asking for much; I don't expect them to know a lot about security, but not even adhering to some basic good security practices is killing me.
Then you are not talking about the security industry or its failure to work, are you? It's a failure of the development industry to have basic security awareness.
If you don't engage the security industry for pentests or consulting, you can't then blame it when you get hacked.
Is that really it? Surely even a high-school-level developer will realise that a device connected to the wild web with a default user:pass will be hacked easily.
I'd have thought the problem is not wanting to support customer calls saying "we changed the password and now can't access our device". So default user:pass and no prompt to change it (and a backdoor just in case) means lower support costs.
This is utterly ignorant of the facts, Mirai took advantage of weak passwords to spread, but was not dependent upon them.
Until there are regulations in place to make them do this, they will not care.
> the IoT industry doesn't spend a penny on security,
There's no adequate way for consumers to differentiate between well secured products and badly secured products (every company will tell you "security is their top priority" if you ask them).
There is no such thing as UL security requirements for IoT devices.
Time for such regulation?
But "internet + regulation" normally raise a lot of objections internally from the IT industry.
If someone (MSFT) proposes secure boot are required for all IOT devices, the first one to oppose it likely be EFF. :-)
I think a UL for internet-connected devices is a fantastic idea. Just need to figure out how to get companies to volunteer for such testing. The way it works for UL is that they provide some insulation from litigation. Perhaps if users could sue IoT manufacturers for inadequate security testing, something similar would materialize for that industry as well?
UL was created by insurance companies to gauge the safety of products.
In the end, the real cause is that "the force of lawyers" is strong for product safety in the US. :-)
"The force of the lawyers for IoT" is still weak. :-)
The force of the Jedi (IT, hackers, SW dev, EFF, OSF) is still strong, for now....
The Empire will win when and if enough Jedi (SW dev) turn to the dark side: team up with the lawyers and start suing IoT startups, devices, and creators.
No more IoT, Raspberry Pi, OpenWRT.... only Intel/Qualcomm/MSFT-licensed, UEFI-controlled SecureBoot (Windows CE) devices and locked-down Chromebooks from Google will be allowed.
Feel the power of the Dark Side!!!..... :-)
Silly FUD like this does nothing to help.
Nothing is stopping anyone from building their own non-UL electronic devices, and even distributing them to tinkerers and early adopters. This is much preferable to some steep liability/mandatory insurance regime like automobiles where you've got to Soviet-style register your car and even yourself!
Sensationalists push a panicked narrative about insecure devices, but any disruption of third parties is entirely due to scale. Simply making it so the enormous group of low-effort consumers won't end up with negligently insecure devices would basically erase the problem.
It is really hard to show you were directly hurt, and even harder to show whom by.
Rightfully so. Anyone who argues signing without user override is necessary to achieve devices secure against external attackers is outright lying.
The root causes are the following:
1- Security more often than not is an afterthought. When you are trying to go to market, under tight deadlines, burning the midnight oil, nobody has time, energy, or money to think about security.
2- The lack of security education among most of the stakeholders (upper management, product managers, engineers, etc.) does not help and keeps security a taboo. In most organizations, nobody has the title of making the software secure, so it falls into nobody's lap.
3- While I have all the respect for the profession of honest sales, some salesmen ruined it for all of us, feasting on the lack of education mentioned above: selling tools/services as the silver bullet for the security problem, an idea that is very well received by someone who does not understand the problem and is really looking for a silver bullet.
4- At the end of the day, the real issue is that security is a cost center; there is no ROI for the business in doing security other than avoiding problems that "could" happen in the future.
That being said, there are three classes of clients I have seen doing security:
1- Heavy losses: for banks, for example, the risk of losing money is quite real and tangible. Besides, they are (at least in the U.S.) under heavy regulation to do so. But their real motivation is risk mitigation.
2- Regulations (the worst reason to do security): such as the PCI industry; they have to do security checks to avoid fines. This category usually tries to do the minimum to get by.
3- Proactiveness: hats off to this category, as they don't really have to do it, other than thinking it is something that must be done.
As for what would help:
1- More education
2- More education
3- More education
4- Implement more security controls natively in frameworks (output encoding, entity frameworks, etc.) and browsers (CSP, etc.) - see the sketch after this list
5- More fines for companies that don't take even the minimum steps to ensure data confidentiality and integrity
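For point 4, a minimal sketch of what "natively" can mean, using only the Python standard library (the policy string is just an illustrative starting point, not a recommendation):

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Every response carries a restrictive Content-Security-Policy
        # header by default; developers opt out rather than opt in.
        headers = [
            ("Content-Type", "text/html"),
            ("Content-Security-Policy", "default-src 'self'"),
        ]
        start_response("200 OK", headers)
        return [b"<h1>hello</h1>"]

    if __name__ == "__main__":
        make_server("127.0.0.1", 8000, app).serve_forever()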
He seems immature and vain, because his motive is apparently to taunt someone with how smart he is, but the code is indeed pretty awesome and educational. It's a little sad that commercial software is so ugly and that black hat software is elegant (though I guess it has to be, because it's under rather severe "environmental pressures").
At first, I was also kinda shocked that it had this simplistic list of hard-coded user names and passwords (mentioned in the article). But I guess I've worked in the software industry long enough that it makes sense. Computers are so ubiquitous and on reflection it's not a surprise that you can pull down hundreds of thousands of machines with this technique!!!
Can anyone shed light on the economics of releasing source code? I would think this would make your botnet much less valuable. Apparently someone found a vulnerability in his HTTP parser, which I don't think would have happened without the source code.
So did the author shoot himself in the foot for reasons of pride, or is there something else going on?
// Set up passwords
add_auth_entry("\x50\x4D\x4D\x56", "\x5A\x41\x11\x17\x13\x13", 10); // root xc3511
add_auth_entry("\x50\x4D\x4D\x56", "\x54\x4B\x58\x5A\x54", 9); // root vizxv
By releasing the source code and letting everyone else fight for control of the botnet, it would be much harder for anyone to trace the original attack back to them.
It's true, the 1U rackmount netsec industry does virtually nothing to prevent consumer electronics vendors from shipping terribly insecure code. I don't like the netsec industry either. But: so what?
The reality is, very few companies are buying 1U rackmount snake oil (or Nth generation antivirus products like endpoint protection tools) to stop things like Mirai. We're not even talking about the same budget. The "security industry" is not in fact chartered with stopping things like Mirai. So Mirai is a weird complaint to level at it.
I'm not interested in leaving the security industry. I'd rather work to change it. I'm a millennial, I guess :)
> The "security industry" is not in fact chartered with stopping things like Mirai. So Mirai is a weird complaint to level at it.
No one "chartered" the industry to do anything.
You're right, a metal box or an AV isn't going to prevent your IoT product from shipping with default creds on telnet, but don't you think the industry is complicit in drowning out the good advice to not do that in a sea of noise and then blaming the victims?
There's a major failure happening, but it's not attributable to the security industry. It's a failure of the computing industry as a whole.
That doesn't try to stop IoT botnets; it just extracts rent for keeping some people safe from them.
People were building firewalls in 1988, several years before there was even Trumpet Winsock, let alone a reason for PCs to need to filter Internet traffic.
Yeah, you can always make this argument. "If only there had been sufficiently advanced and non-buggy firewall rules in place in firewalls optimally placed in the topology, looked after by competent networking staff"
It just doesn't work in most environments because you don't have firewall experts on staff and you have no incentives to spend money and effort on firewalls.
(And even for professional firewall wranglers it's hard in normal, semi-chaotic IT organisations: you don't intrinsically get any assurance things are set up tightly, or that they remain so after network configuration changes; you keep breaking working systems; you get feedback from angry users that you've inconvenienced, and you never get feedback from successfully defending against attacks, etc. etc. And at the end of the day, for the same money you could have gotten a better security improvement by spending the man-hours (and the hours lost by impeding your users) elsewhere, in line with end-to-end thinking.)
> If only there had been sufficiently advanced and non-buggy firewall rules in place
We don't even have to look for ideal or advanced solutions. First forbid spoofed IPs, then kill incoming low port traffic to residential users (but let them opt out of that, just like we already often do with outgoing port 25). This applied globally would kill a large chunk of DDoS attacks these days. But today's ISPs are not doing even that.
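The check itself is trivial. Here's the BCP 38-style ingress test in sketch form (the prefix is a documentation range standing in for a real customer allocation):

    import ipaddress

    CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # assumed allocation

    def permit(src_ip: str) -> bool:
        # Drop any packet arriving on a customer port whose source address
        # falls outside that customer's allocated prefix.
        return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

    assert permit("203.0.113.7")       # legitimate source
    assert not permit("198.51.100.9")  # spoofed source, dropped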
I don't know how we can convince people to care enough.
Hard coded creds and the allowance of default creds isn't the security industry, it's the manufacturer.
This will result in harm to third parties who did not act maliciously, but that's already happening now. With this change in law the total harm will probably be less because the problem will be solved for real, which will dramatically reduce or eliminate the possibility of "black swan" events causing very serious harm (eg. shutdown of critical infrastructure).
I'm glad you're thinking outside the box, but if that kind of "immunity" were ever to be authorized in an emergency (attacks on power grids lasting hours or days), it should only be carried out by the government with a warrant, and with the understanding that people may die or lose property due to the sudden, public destruction of millions of devices.
The blame for this debacle falls squarely on the heads of the vendors who produced these trusting (if not downright gullible) devices in the first place.
That, right there, is a damning indictment not only of our industry but also of our culture. We know how to secure systems. It's not magic. But — unlike for example physical hygiene — we haven't made the decision to make computer hygiene part of our culture. We look down on people who don't wash their hands, but we don't look down on people who use poor passwords. We teach children to cover their mouths when they cough, but we don't teach children not to plug a Windows machine into a network.
Who is "our culture"?
Are you part of the same "our culture" as the people who made and shipped these things?
Since you're pretty much rhetorically constrained to answer "yes"... would those people agree with the answer you give?
The culture I am actually part of is not perfect, certainly. The people who are in my culture still write the simplest cross-site-scripting attacks like they get paid bonuses for them. But the vast bulk of the people involved in the production of the hardware that the Mirai botnet took over would probably not agree that I am in "their culture", regardless of what I claim about it, and they'd probably feel the same way about you is my guess.
Even if "we" do everything you would suggest, perfectly, Mirai still happens. Therefore, those suggestions can't be the solution. We must still address these problems via other mechanisms.
Something to think about. Don't get me wrong, I'm all for education. And just like sexual education, I'd introduce classes about online privacy in elementary school.
In my experience, most answer along the lines of "So what? What could they get? I have nothing important." or "Why would anyone ever hack me?" or "But I have an antivirus, doesn't that make me safe?".
And then I spend the next 15 minutes explaining to them how things actually work and why they need to take it seriously, and offer to help. 9 times out of 10, they never reach out. And it's not their fault, but the way security in general is perceived.
- The usual conversation -
I ask them: "Do you have curtains ?" and they say: "Yes, of course" and I ask "Why ? I mean you have nothing to hide right ? What does it matter if someone can see what you are doing inside your house ?", usually they freeze for a second, "Because it's creepy". I continue "Well if it's creepy that someone would watch you in your house, isn't it just as creepy if they watched you online, what you read, what porn you watch, what you talk to your friends about ? Which do you think tells more about who you are ?". At this point silence and an increasingly worried look is the norm.
I keep going: "It's not about hiding anything, it's about what is private. Otherwise why not tell everyone your darkest secret, your greatest fears, the thing you are most ashamed of doing in your life ? And that's why you should do [this or that]"
But even so, it's true most default back quickly. Still a few call, ask, improve their practices. People only seem to take it seriously after they have been directly impacted in a powerfully damaging way.
Edit: I have obviously had this conversation enough times to make this script in dealing with it. If you have to do it more than twice, automate it. :)
Most people don't understand how their car works. But if you own a car and you neglect to change the oil for 50k miles, or you put diesel into your tank and ruin your car, we don't blame the automotive industry for not informing you on proper maintenance. Just like with computers, the information is out there, and it's not the job of the automaker to make sure you know it.
Why are computers different?
I expect the mechanic not to take advantage of me, and instead to offer me fair service at a fair price. Luckily enough people have a natural interest in it that competition keeps prices down.
My argument is in a similar vein to those who aren't physically fit enough to win a fist fight. It doesn't mean they deserve to be pushed around, robbed, or beaten just because I'm stronger and a better fighter.
No matter the analogy, if someone doesn't know better it doesn't automatically mean they deserve it. People's lives are busy and complicated, and not everyone starts on the same ground or with the same capabilities.
This is getting off subject, but I'm of the opinion that this trend is primarily motivated by locking people into a dealer for maintenance, not helping people maintain their vehicles. For example, I think BMW dealers are the only ones who have the ability to calibrate tire pressure sensors on bimmers, and some new cars are abandoning OBD-II ports.
> My argument is in a similar vein to those who aren't physically fit enough to win a fist fight. It doesn't mean they deserve to be pushed around, robbed, or beaten just because I'm stronger and a better fighter.
I don't think it's fair to compare these things. Of course no one deserves to be assaulted. And likewise, if someone sabotages your car or has a remote exploit for your computer, I find it hard to dish out blame. But beyond this, I think the only person who could possibly be responsible for the condition of their possessions is the owner, and I don't see why computers should be any different. Not knowing better or being too busy is not an excuse to be a party to a DDoS attack.
You bought this computer, you plugged it in, and it was set up in a way where it was able to receive signals that made it send out signals that violate the contract you signed with your ISP and the laws your representatives passed. "I didn't know" isn't an excuse in any other comparable situation. Just because computers are hard doesn't mean we should rework our entire legal framework. We shouldn't codify into law the idea that some subjects are obvious and should be enforced consistently, while others are beyond understanding (for most, for now) and ignorance is a viable excuse. It will inevitably become outdated.
I'm held responsible if I have an old car rusting away in my backyard and it pollutes my neighbors well-water. And sure, it's harder to claim ignorance about a rusty car than it is to claim ignorance about a misconfigured computer, but I think the law has to be impartial about that.
> Not knowing better or being too busy is not an excuse to be a party to a DDoS attack.
I feel you contradicted yourself here. In one way you excuse it, but you also claim users should know better. When it comes to having a strong password, I feel this is where it's acceptable to place blame. But when, for example, they go to their favorite website, which has a malicious ad that takes advantage of the latest exploit, can they really be blamed? Average end users expect their smart thermostat to give them the capabilities advertised. Some can't even imagine that it's basically a small computer.
> that violate the contract you signed with your ISP
How many people really read these agreements? It's assumed that if your computer works it's in spec. Whatever arbitrary clause they came up with to allow them to legally track your every move is a different conversation.
I won't indulge in throwing analogies back and forth. I'll just say this: I know plenty of doctors, people who are much smarter than me, who don't know a thing about networking. They're running their own practice and are concerned with staying up to date and not getting sued. That's their job. Building software that is easy enough to use while keeping people secure is ours. There's no excuse; it's hard, and it's yet another aspect programmers need to learn. But it rests on our shoulders.
There might be a meaningful difference to me between a remote kernel hole versus using a default password, but for most people there is no difference there.
So you've changed my mind, to an extent. I don't think that we should "blame" them, but at the same time, if you entrust a large part of your life into computers and are not aware of the risks you're putting yourself in, I do think you deserve some blame for believing the advertising pitch without researching on your own-- and that kind of blame is relevant for everything, from cars to tablets to vacuum cleaners. I think doing your due diligence is relevant to any topic, and people who don't put it in will reap what they sow. But that blame is more superficial-- you shouldn't have to become a mechanic to buy your car and you don't have to be a programmer to buy an IP camera.
Mirai doesn't have shit to do with the security industry. The security industry are the people who you hire to secure your things, victims of Mirai did not take advantage of the services provided by the security industry.
More like, The Mirai Botnet Is Proof the Security Industry Is Going To Be Doing Fucking Great
Of course we know how to write secure code - code that meets a rigorous, well-engineered design that eliminates invalid outcomes. The problem is that such code is slow and expensive to produce.
Good, fast, cheap; pick (at most) two. Security cameras optimize for cheap first and fast second, so of course we see issues like this.
That's a common suggestion, but since no-one knows how to make completely secure systems yet, I don't think it's that simple.
If you're talking about a general presumption that anyone selling software that has a security vulnerability becomes liable for any consequential losses, then it seems likely to result in only large businesses with the war chest to fight a liability action being able to make any sort of remotely risky software and/or in a new insurance industry popping up so that the problem reduces to money and the cost is ultimately passed on to software users in higher prices. While there might be some pressure to improve security as a result, the negative side effects could be far worse for the software industry as a whole.
The next logical step is some sort of penalty for gross negligence or a repeated pattern of failures, where a supplier making reasonable efforts and following generally good security practices isn't at risk of being sent under instantly because of some new type of 0-day that no-one had seen before. But then you have to figure out what constitutes good practice and paying due care and attention, and that in itself is not an easy issue.
With exemption for those who provide full source code.
I don't see why that should make any difference. Having access to a huge amount of source code is only a benefit for security if you have the skill and resources to perform a detailed audit of your own, and if it's practical to spend that kind of time and money, and if you also have the authority to do something useful about any vulnerabilities you do find.
If someone is giving software away for free as a kind gesture, that's one thing, but I don't see why anyone supplying software on a commercial basis should get out of jail free on security just because they provided source code access. The FOSS world provides ample evidence that many eyes do not, in fact, make all bugs shallow.
The price of providing a basic level of security should be priced into the product! The fact that someone can go out there and buy an IoT camera that will be used to DDoS my server is a negative externality that constitutes a market failure. I want sketchy IoT manufacturer 32XB123 to be forced to buy liability insurance for that.
The question is whether that is all that would be priced in, or whether the insurance industry, given a rich new feeding ground, would charge huge rates for many types of device in case of catastrophic failure.
My car insurance probably costs me several times the value of my car each year, because I am required to have cover for third party losses as well. That remains true even though I've been driving for a long time and never made a claim so far, because in the nature of insurance, they are guarding against the relatively rare possibility of a relatively high payout.
What happens when your $100 office software package now has mandatory insurance in case each installed instance costs the business $200 in lost revenues from downtime after a breach, or $2,000 in average compensation when a vulnerability leads to personal data being illegally disclosed?
We also do not know how to make completely secure cars, but car makers are still liable for faulty construction.
That would be a start.
Second, any computerized device must pass FTC/FCC/UL (pick one) tests for computer security before going on sale.
There's more that can be done, but let's go after the simple stuff first.
I don't need ISPs to mess with my connection any more than they do already.
In a recent attack there were 145,607 devices sending >1.5 Tbps. That's a lot from individual connections.
(From: https://twitter.com/olesovhcom/status/779297257199964160 )
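Rough per-device arithmetic (assuming the traffic was spread evenly, which it surely wasn't exactly):

    # 1.5 Tbps across 145,607 devices, expressed in MB/s per device.
    per_device_MBps = 1.5e12 / 145_607 / 8 / 1e6
    print(f"~{per_device_MBps:.2f} MB/s each")  # ~1.29 MB/s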
Even so, this one wasn't a particularly sophisticated attack. Even if we could weed out this particular type, many DoS attacks rely on amplification or asymmetrical computational cost with respect to the size of the request. So if we concluded that 1.4 MB/s of HTTP or DNS or plain TCP were a lot and managed to block these senders, it would be far from a universal solution against DDoS originating from domestic hosts. And there's reason to believe we'll see much larger botnets unless manufacturers of IoT devices start taking security seriously. So tomorrow 0.14 MB/s per home could be plenty.
What cost would this blocking incur? How often would I have to call my ISP because they blocked perfectly legitimate traffic? Unfortunately maybe not that often, because after the first few calls I'd probably figure that they are too big to give a shit about my problems.
But if we wait 10 years, it'll be too late, and we'll have killed the internet or given into draconian measures to stop the problem - which is what none of us want.
Frankly, I don't even care if a given government has great security review - as long as they put a process in place it's a start that can be improved upon.
Similar with lead-free solder.
Manufacturers don't need to keep a credential database, as legitimate users can simply look at the label.
The only way out of this mess is regulation of what types of devices can be sold and how they must be secured. The electronics industry and online retailers need to get together and figure this out and come up with a UL for IoT, or the government will step in and make them all a lot more unhappy.
The security industry has absolutely nothing to do with the existence of a botnet that can take down massive internet infrastructure. The security industry just puts bandaids on shitty products. It's the internet architects/designers that are responsible for botnets.
In order to make the internet very simple, very compatible, and decentralized and distributed, the design allows a baby monitor to send arbitrary traffic to any device on the global network. There is no good reason for this. The reason is, anything else would be complicated, and complicated things become expensive and troublesome. But that's not a good reason to allow baby monitors to take down internet services.
The solution would be to segregate critical equipment address and protocol by function, and to put in strict controls in all routers to prevent illegitimate traffic from reaching the wrong equipment. This would not only improve security, it would make allocation of address space and application ports make some kind of practical sense, and allow for improvements in the way applications communicate over the internet, to say nothing of improved management of traffic.
But nobody's going to change the design, so whatever.
I'm old enough to remember when UK modems had to be "BABT approved", adding considerably to the price. See (1993) https://groups.google.com/forum/#!topic/uk.telecom/6j1bVHcq1...
For example, BGP traffic shouldn't work on non-routers. Certain signaling (ICMP, and UDP traffic flags for non-peer traffic) isn't needed by most customer equipment. And it's stupid that IP spoofing works at all, much less on robust servers on internet backbones. It's clear there is traffic allowed on parts of the internet where it shouldn't be allowed. Changes could be made to correct this, and no, they are not political.
In many cases the only difference between a DDoS and normal operation is the volume of traffic at the victim host.
And trying to design it properly, you'd probably come to the conclusion that it won't work (without causing massive disruption and breaking everything we've built so far).
The simple fact is that there are very limited economic incentives for a company in the IoT space to spend money on security, and as a result they don't.
It's not easy for an ordinary consumer to differentiate between a company who just says "security is our top priority" and one who puts meaningful effort behind that (e.g. there is a strong market for lemons here).
Also there's no effective regulation which could substitute for that information. In other markets (property, consumer goods, food and drink) we have safety regulations as it was recognised that consumers can't effectively differentiate. In IoT and other areas of IT this doesn't exist, so there's nothing to stop insecure devices being sold.
As to the "security industry" well there have been enough practitioners warning about this, to limited effect. Realistically there's a limited amount that can be done without some form of top-down intervention.
Obviously defence in depth and dedicated security tools have their place in a networked environment, but you can't just outsource the problem or fix it with some bolted on extra.
Some concerns simply have to be addressed as an integral part of whatever software or device is being made. If we don't do that, well, we've just seen the result.
Unlike vehicle registration, it wouldn't require you to have to do anything other than keep your system maintained. If you want to put your computer on the internet, be prepared to get port-scanned by the US Digital Service once a year/month/week/whatever, attempting to take your computer off the 'net. If it succeeds, then that's one machine that could have been--but now won't be--part of a botnet.
ChaosMonkey as a public works project.
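The scanning half is already trivial; a minimal connect-probe sketch (the host is a documentation-range placeholder, and obviously you'd only point this at machines you're authorized to test):

    import socket

    def telnet_open(host: str, port: int = 23, timeout: float = 1.0) -> bool:
        # A bare TCP connect: an answering telnet port is exactly the
        # red flag Mirai's scanner looked for.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(telnet_open("192.0.2.10"))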
1) A consortium of manufacturers of IoT devices banding together and signing an "autopatch" or "autohack" agreement. This would be an open source, public hack-and-patch society that freezes out any manufacturers that don't agree to it. All customers would simply sign in their EULA that their devices are authorized to be "patched" by any means necessary if found to be insecure by the auto-hackers.
2) As botnets at the Mirai scale are now a matter of national security, make the NSA do its job and do roughly what is outlined in 1. Controversial, sure, but you can be damn sure that they already know about these unpatched devices and how to exploit them.
This is made more asinine by the fact that we've had extremely easy-to-use methods of establishing trust between devices on a permanent basis, but because that would add three steps to the setup process, the marketing people refuse to let it happen.
Nobody wants to spend the money to do it right, and nobody wants to spend the money on devices that do it right so here we are and I see no way out of this situation.
Programmers for really stupid shit like this: https://news.ycombinator.com/item?id=12756006
And managers for... well to this day I have found absolutely ZERO use for management.
I guess we can draw a conclusion here: security's assumptions about who the users are are not in sync with human nature.
Security is failing the same way architects would fail if they assumed stairs with one-meter-high steps were okay.
IT security is failing because its model of human beings is plain flat wrong; hence, computer security as designed by our brightest minds is wrong.
Don't force-feed humans requirements they don't give a fuck about.
Let's set blame aside for now. What caused this botnet?
- The tendency of IoT/smart-device vendors to eschew engineering discipline
- The tendency of _all_ companies to treat security as an optional extra
rather than the cost of admission to the marketplace
- The historical tendency of big companies /not/ being burned to the
ground after a massive hack, which makes security a lower priority
- The lack of a secure automatic update infrastructure (which also led to a
recall), with which the vendor could have mitigated the vulnerabilities used
- General ignorance about the risks associated with default/weak/hard-coded
security credentials (e.g. passwords)
+ Regulation could help here. Require third party security assessments on
IoT/smart devices to be sold? It's not the most elegant solution, but it
would be a vast improvement over the current state of affairs.
+ This is a cultural problem that makes application security painful in
every business vertical. It takes a lot of one-on-one communication to
resolve. Seeing large companies lose their shirts over security negligence
might change the conversation.
+ This is a huge problem for all software. (See link below.)
Now let's circle back to blame. What is the security industry responsible for? In my view:
- Failure to communicate with other industries and professions,
such as electrical engineering.
- Failure to communicate with developers in general.
- Failure to educate people outside the industry about our own challenges
- Failure to learn the challenges that others are trying to overcome
so security can be on the same team rather than yet another obstacle.
The only problem with doing that is: There's no effective and ethical way to monetize it. I make more money from helping e-commerce sites recover from being hacked by easily preventable mistakes than I ever have from making the software that powers 30% of the Internet more secure. https://paragonie.com/blog/2015/12/year-2015-in-review
Solving the core problems is good for society, but society doesn't reward this behavior.
The security industry is broken because society is broken.
I'd like to think that we can solve collective action problems like this as a society and be rewarded for it.
We have to figure out how to fight this problem the way we fought smallpox and polio.
Maybe a good starting point would be an attitude change?
But I would agree that, where the attitude does exist, it needs to be changed.
Fewer rock stars, more janitors/mechanics.
But a GPC (general-purpose computer) will always remain a GPC, and thus they are susceptible to being repurposed no matter the number of "safeguards" we put in place to prevent it.
Simple systems are easy to assess and communicate. Complex systems are hard (expensive) to assess and communicate. This results in several asymmetries:
1. Complex systems are communicated in an oversimplified mode.
2. Asymmetries exist between buyers and sellers of products (Akerlof's "Market for Lemons").
3. Asymmetries exist for all parties over time in realising the long-term costs (or benefits) of systems. In the most pathological instance, a party (or parties) actively frustrate the process of widespread awareness of these costs -- lead, asbestos, tobacco, sugar, CO2, etc., etc.
Corollary: security is a complex product.
That can be fixed by requiring users to set a unique password on setup, or by shipping each device with a different password. This exact problem occurred in the UK, where ADSL router manufacturers used to ship the same wireless WEP/WPA key on all devices; many got compromised, and now they ship with a unique key per device.
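The per-device option is cheap at the factory: generate a random credential, flash it, print it on the label. A sketch (the length is my choice, not any standard):

    import secrets

    def factory_password(n_bytes: int = 9) -> str:
        # Unique random password per unit, printed on the device label at
        # manufacture; the vendor never needs a central credential database.
        return secrets.token_urlsafe(n_bytes)

    print(factory_password())  # e.g. 'rY2mP_x8kQ3w'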
I just had the good fortune to configure my wifi network myself, so I had to put in my own keys. Many other people didn't, and who could blame them; they were probably happy the thing worked in the first place...