Mark this mindset as the beginning of the end of the open, inclusive programming world as we know it.
Schneier visited RIT (my alma mater) last spring, and his presentation revolved around the threat presented by IoT and the growing need for national legislation to encumber it. I asked him a pointed question about how this scaled to the _international_ level, which he mostly declined to answer (focus on domestic policy first, and such). Because the answer is simple: _it doesn't_. Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy. I value this freedom a lot.
I respect Mr. Schneier for his poignant responses to popular security issues and his ability to be a public face for computer security, but I strongly disagree with the future he's lobbying to take us toward. Maybe I just can't accept the hard reality that "security isn't easy" and that government regulation is the only way to force security on people.
Another possible model would be something like having Underwriters Laboratories and other independent organizations check the devices.
This is never going to be perfect, but it doesn't need to be. The goal is to make sure that devices people buy at the store are reasonably secure. In previous eras, the goal of new regulation was to make sure that you can still listen to the radio and watch TV, and that people don't often get electrocuted by their appliances. By and large it seems to have worked.
For more: https://en.wikipedia.org/wiki/Nationally_Recognized_Testing_...
If iPhones cost $7000 instead of $700, there would be far fewer iPhones. Same for IoT lightbulbs (though they're pretty expensive already).
This is exactly what I was talking about: insurance companies will use the fact that you need access to something to erect a giant money gate in front of it, justifying it with advanced tech that helps in some ways but is almost always used out of context, based on the control policies the insurance companies force practitioners to follow.
There are around 9,000 CVEs this year so far. Should devices be checked against all of them? How about next week - does the vendor have to go back and check your fridge? Do they have to patch it? For how long? Who pays for all that? A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.
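To make concrete what "checking a device against the CVE feed" would even mean, here's a minimal sketch: matching a device's software bill of materials against published advisories. All component names and CVE IDs below are invented for illustration; a real feed would carry version ranges, not exact pins.

```python
# Hypothetical sketch: match a device's software bill of materials
# (SBOM) against a CVE feed. Names and IDs are made up.

def affected_cves(sbom, cve_feed):
    """Return the CVE IDs whose affected (component, version)
    pairs appear in the device's SBOM."""
    installed = set(sbom.items())
    hits = []
    for cve_id, affected in cve_feed:
        if any((comp, ver) in installed for comp, ver in affected):
            hits.append(cve_id)
    return hits

# A $20 webcam's firmware might bundle a handful of components...
sbom = {"busybox": "1.24.1", "dropbear": "2016.72", "libjpeg": "9a"}

# ...while the feed grows every week; each entry lists affected versions.
cve_feed = [
    ("CVE-2016-0001", [("busybox", "1.24.1")]),   # hypothetical hit
    ("CVE-2016-0002", [("openssl", "1.0.1e")]),   # not in this SBOM
    ("CVE-2016-0003", [("dropbear", "2016.72")]), # hypothetical hit
]

print(affected_cves(sbom, cve_feed))
# → ['CVE-2016-0001', 'CVE-2016-0003']
```

The matching itself is trivial; the open question in the comment stands - someone has to keep running it, triage the hits, and ship the patches, forever.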
I don't have any answers, only questions.
Yes, but what is the negative externality the market is not currently capturing when these devices are assumed into botnets and used against others? Simple economics states that the $20 device is $20 because somebody else is paying for the security mistakes of the shitty device's developers.
On the subject of answers, rather than questions... I have a funny story. The Xbox One has this neat feature where you can control your console via an app on a PC or phone on your local area network. The default setting was that any device on the network the Xbox was connected to could control it. Imagine this in a college dorm. I saw a lot of Xboxes available to control. So, after testing with a friend (yep, I could easily interfere with whatever), I developed a key combo that I could rapidly input from any console state which would open the settings menu and disable the remote control feature, locking out my own access (and I'd know it had worked because I'd be disconnected). That's right: I effectively developed a virus which patched the vulnerability. If attackers have the advantage in this field, then maybe we should put more effort into thinking about friendly counter-attackers. If the silly IoT device can be pwned, then it can be pwned for good, as it were. Does anyone know of any groups working in this area, or any research done towards it? Pen-testing and other white hat hacking activities I know about, but does anyone officially do this kind of guerrilla-patching?
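The "I saw a lot of Xboxes available to control" part is trivially easy to reproduce for any default-on LAN service. A minimal sketch (the port number is hypothetical - I don't know what the companion app actually uses) of sweeping a subnet for hosts with the control port open:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def find_controllable(hosts, port):
    """Filter a list of LAN addresses down to those accepting
    connections on the (hypothetical) remote-control port."""
    return [h for h in hosts if port_open(h, port)]

# e.g. on a /24 dorm network, with a made-up port 5050:
# find_controllable(["192.168.1.%d" % i for i in range(1, 255)], 5050)
```

Anyone in the dorm could run the equivalent of this in a minute, which is exactly why "any device on the network" is a terrible default.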
Ultimately the idea was judged a bad one because of difficulties getting it to work as expected, fear that the fix would introduce more issues, liability concerns, and probably some ethics debates about computer intrusion even when it's for the purpose of securing the device.
I'm not really sure what stance to take on such an issue. The idea behind it comes from good intentions, but I feel like it can lead to unintended consequences for which, ultimately, no one would be liable. My personal machines have fairly vanilla setups, but many of my friends and colleagues have rather intentionally complex setups and would most definitely object to someone accessing their setup and making changes without their permission.
1 - I'm sure it is in one of my undergrad textbooks. In other words, no way I'll find it again.
Wow. If a virus propagated like this on today's networks, would the traffic make a noticeable dent in the available bandwidth?
Anyways, I hadn't heard of this virus - it's super neat. Patching its own infection vector and even explicitly removing an existing virus from the target machine... The article loathes it for how overtly it affects machines (forced restart to apply an update) and networks (congestion), but the work it attempted to do was decidedly good. Sounds to me like it worked well but had poor execution in accounting for the network effects it would have. (I doubt it was rigorously tested in a prod environment ;) ) If anything, I'd see this as a case study that this kind of offense-as-defense strategy has the potential to work... It's just that nobody wants to take responsibility for doing so.
My father worked for a company in the healthcare industry - I'd hear stories of multi-month-long FDA audits of their hardware after a 'statistically significant' number of failures in the field.
I damn well hope such incidents are taken very seriously by the FDA.
The counterpoint is that, if the webcam is used in failure-critical situations, then it absolutely should be under that level of scrutiny. The problem is finding how you can define that operational scenario in law.
Meanwhile, people wanting to use higher security as a differentiator can get evaluated ahead of time as some do now.
Matter of fact, NSA's new scheme only requires 90 day evaluation at EAL1 (certified insecure).
As for the influence on other countries -- blocking their traffic is an effective way to convince them of the need to take action. For sure this still means temporary disruptions and maintenance cost for every operator, but it's part of the "cost of living" on the Net as long as the others don't catch up. Take it or leave it.
(yes, I am aware of the security risks of hijacking the updates, but it's still a better control than no control at all)
It's not hard to find bugs and problems that were hidden in unexpected places or triggered by weird combinations of inputs. Simple mistakes (or well-planned backdoors) like goto fail and Heartbleed can blow up long after they were created. And then we will sue, get money, send people to jail - but the damage will already have been done.
He did answer that on his blog however:
"It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change."
And I mostly agree with him. If major markets start requiring certification of security (maybe something like the FCC and CE marks), the rest of the world will follow, as they want to trade in these premium markets. Sure, it will not solve the problem overnight, but at least it could be made less severe.
One state moved the needle here just by virtue of being the largest component of what was the largest market. There is no reason the same could not be done for internet-connected devices.
> "simply because it makes no sense to maintain two different versions of the software"
Except this isn't true. European law already requires Microsoft to separate certain features from its core product for European distribution - which is why there's Windows N for them and normal Windows for the US. Other pressures (such as government-mandated backdoors, or regional media licensing) could easily create a situation where a company would find it beneficial to shard their software by market. (Which is, again, a load only larger companies can easily take on or optimize for!)
Case in point - the recent IoT Dyn attacks.
Even if his proposal doesn't fix the rest of the world it would set an example for other countries.
It would also give other countries more that they can do themselves. Like with the DNS DDoS, the traffic originated in the US -- even if the perpetrator was in another country.
Getting the US locked down means that those countries can start taking responsibility (and I mean that in a positive way) for their own piece of the internet. Right now there isn't even a place for them to start.
Would it be so bad if the law required that "any connected device sold in the US must permit firmware updates, at least for security patches"?
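Of course, "must permit firmware updates" only helps if the update channel itself can't be hijacked, so any such rule implicitly requires authenticated updates. A minimal sketch of the device-side check (the shared key and image bytes are invented for illustration; real devices would verify an asymmetric vendor signature rather than an HMAC):

```python
import hashlib
import hmac

# Illustration only: a real device would ship the vendor's public key
# and verify an asymmetric signature, not share a secret.
VENDOR_KEY = b"example-shared-secret"

def sign_image(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Compute the signature the vendor attaches to a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_apply(image: bytes, signature: bytes) -> bool:
    """Apply the update only if the signature checks out."""
    if not hmac.compare_digest(sign_image(image), signature):
        return False  # reject tampered or unsigned images
    # flash(image) would go here on a real device
    return True
```

The point is that the check is cheap - a hash and a constant-time compare - so "updatable, and only by the vendor" is not an unreasonable bar for a $20 device.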
If the devices are on a US network, it seems reasonable to require them to meet certain standards. Other countries can set their own standards, just like we have FCC standards for wifi/spectrum usage in the US, and in the EU the CE sets the standards. This has not balkanized wifi; if properly done, I don't think it would balkanize IoT.
Regulations in one jurisdiction often impact behavior and equipment sold in other jurisdictions. This is especially true when the regulations are not particularly onerous, the regulating jurisdiction represents a large share of the total market, and the number of suppliers forced to comply is limited.
I worked for a long time in the embedded networking hardware business and was involved in the rollout of RoHS.
RoHS is an EU directive, but it had the larger effect of limiting hazardous substances in devices across the world. Manufacturers wanting to sell equipment into the EU had to get rid of stuff like lead solder and replace it with safer alternatives. It was simply too expensive for most hardware manufacturers to maintain two different manufacturing processes, one for the EU and another for everyone else. So now in the USA you'll have a difficult time finding lead solder in new electronic devices, and it's because of the EU and RoHS.
Hardware device manufacturing these days is incredibly concentrated. These companies will abide by whatever regulations the USA or the EU forces on them because they want to sell into those markets. And it's likely not cost-effective for them to run parallel assembly lines for smaller markets.
Personally, I don't believe that it will, but that's just my opinion.
As with most things in this field, it's one thing to talk about it, another to do it. Schneier has been talking about software liability for a long time.
I just don't see what technological change is coming that will change that. Not only that, but because of the big data buzz, even privacy concerned companies feel the urge to collect and store ever more personal data. How is that going to end well?
I mean, the only reason lawmakers and regulators are not all over this issue is that they don't realise how bad things are.
So, they're willingly covering their ears while sitting on their asses. At least those on major committees.
An analogy: if you produce a soda, you can keep the recipe secret. If you produce a potent regulated medicine, you publish detailed formulas and include them in every box of the drug.
You mean like a national network? I believe something like that existed in several countries. And the Internet of old grew over them, as a way to interconnect nodes within disparate networks (thus the name Internet).
Give it enough time, and the national networks will merge into a single global network again.
I'm not sure it's impossible to have reasonably interoperable security encompassing the U.S. and EU, although recent events sure do make me more pessimistic.
This will be a hurdle but it might mean better pay and less having to clean up after cowboy programmers pumping out rubbish.
There was a huge backlash against it.
As a systems architect that spends a lot of time thinking about scaling, redundancy and resiliency, it's also my opinion that we need to do some work on DNS ... it's probably the most vulnerable part of the "Internet stack".
You know what inspires confidence? Built-in 4G connections during a 5-million-vehicle safety recall.
What you will end up with is likely something that is LESS secure but now mandated for anything made/sold legally in the US. The rest of the world will be free to do better things.
You will also end up with mandated backdoors, weakened encryption, and a variety of other NSA/FBI wish-list items that will be included in any "Cyber Security" bill.
I have no interest in the US Congress regulating IoT devices; that will not be good for liberty or security.
All that disappeared in favor of Windows NT and UNIX without security the second they eliminated the regulations. There's just enough of a niche market for a few suppliers to be left, especially in defense contracts. Most are gone, though, because private markets don't produce strong security when the incentive is to turn costs into profits.
If Boeing will be the only company allowed to make IoT products then you might as well kill the IoT industry, as each product will cost 100000x more than it should.
I classify that as BAD... sad that you do not.
Security is one thing, but if it comes at the expense of Open Source and accessible systems then i choose Insecurity.
I will choose Freedom over Government every time
Now, with regulation, you'd still have the same software being developed. The components would be simpler (a JSON subset vs. XML). Costs would be spread out across volume. People would get used to new, huge features taking "two or three quarters" (Lipner) instead of a few weeks. Non-paid or non-critical usage could test out proposals without building the whole thing.
As far as IoT goes, solutions already exist that are either inexpensive at the OEM level or cheap per unit. They're just getting ignored by most of the market since there are no regs or liability. Hell, I'm typing this on a device running one underneath the OS that cost the same as a device without one. ;)
Because I look around and it's been working remarkably well.