
> When it didn’t matter—when it was Facebook, when it was Twitter, when it was email—it was OK to let programmers, to give them the special right to code the world as they saw fit. We were able to do that. But now that it’s the world of dangerous things—and it’s cars and planes and medical devices and everything else—maybe we can’t do that anymore.

Mark this mindset as the beginning of the end of the open, inclusive programming world as we know it.

Schneier visited RIT (my alma mater) last spring, and his presentation revolved around the threat presented by IoT and the growing need for national legislation to encumber it. I asked him a pointed question about how this scaled to the _international_ level, which he decided mostly not to answer (focus on domestic policy first, and such). Because the answer is simple: _it doesn't_. Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy today. I value this freedom a lot.

I respect Mr. Schneier for his poignant responses to popular security issues and his ability to be a public face for computer security, but I strongly disagree with the future he's lobbying to take us to. Maybe I just can't accept the hard reality that "security isn't easy" and that government regulation is the only way to force security on people.

It's not like regulating devices is unprecedented. New devices already have to be approved by the FCC before being sold in the U.S. Suppose that the FCC also checked that Internet-capable devices are safe to be connected to the Internet? This would have global impact because most companies want to sell their devices in the U.S. (And even more so if other countries with big markets cooperated with similar standards.)

Another possible model would be something like having Underwriters Laboratories and other independent organizations check the devices.

This is never going to be perfect, but it doesn't need to be. The goal is to make sure that devices people buy at the store are reasonably secure. In previous eras, the goal of new regulation was to make sure that you can still listen to the radio and watch TV, and that people don't often get electrocuted by their appliances. By and large it seems to have worked.

For more: https://en.wikipedia.org/wiki/Nationally_Recognized_Testing_...

A third possibility, one that I find more likely and that Schneier has advocated before, is that government regulates the device manufacturers rather than the devices. For example, if a company sold more than X Internet-capable devices, it would have to address the threat to society in the form of liability and insurance. Insurance companies would then enforce standards and best practices, and economics would ensure that security teams get funded. All the regulation needs to do is limit the scope of limited-liability disclaimers.

Letting insurance companies enforce best practices seems pretty inefficient, considering how that strategy has played out in the medical markets.

Medical markets are a bit different. No matter who you are, you're likely to participate in the medical market.

If iPhones cost $7000 instead of $700, there would be far fewer iPhones. Same for IoT lightbulbs (though they're pretty expensive already).

Are you suggesting that some people should be priced out of using the internet?

This is exactly what I was talking about: insurance companies will use the fact that you need access to something to erect a giant money gate in front of it, justifying it with advanced tech that helps in some ways but is almost always used out of context, based on the control policies the insurance companies force practitioners to follow.

Not out of using the internet full stop, but out of putting internet in every random device, perhaps. If the increased liability of putting internet in your refrigerator gets reflected in the purchase price, so that an IoT fridge has, say, an extra $200 in liability tacked on vs. a non-IoT fridge, many people might choose to buy the non-IoT fridge, which is probably not really a huge loss to society overall.

The insurance market is also relatively free. It's not like there won't be competition there either. Whoever has the best risk modeling to price things appropriately should win.

The phenomenon you describe, demand elasticity, exists just as well in the medical market. There's a lot of rationing going on, explicit or implicit. It's just that the elasticity coefficient may be lower compared to the smartphone market.

The FCC worked because the devices it regulates are mostly local in range, but this stuff is global. Even if you do get some regulation of First World internet devices, there are still going to be millions of legacy devices, millions more unregulated in the Third World, plus who knows how many illegal ones, and even more with latent flaws. All of them are going to be targets for the next DDoS botnet. So what do you do there, put up national firewalls? How?

There are around 9,000 CVEs this year so far. Should devices be checked against all of them? How about next week - does the vendor have to go back and check your fridge? Do they have to patch it? For how long? Who pays for all that? A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.

I don't have any answers, only questions.

> A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.

Yes, but what is the negative externality the market is not currently capturing when these devices are conscripted into botnets and used against others? Simple economics says the $20 device is $20 because somebody else is paying for the security mistakes of the shitty device's developers.

This point is key and a lot of people seem to be missing it. We're collectively paying for a lot of this already, it's just hard to measure.

And, not to be prudish, but the lack of regulation is part of what has allowed the industry to explode. It's much less likely I could succeed at a Kickstarter for a neat computer dongle if it needs an extra few thousand dollars for regulatory approval before I can ship it. While I have no personal experience with the FDA approval process, my father worked for a company in the healthcare industry - I'd hear stories of multi-month FDA audits of their hardware after a 'statistically significant' number of failures in the field. That kind of pressure is not amenable to one-man operations, which are frequent in our field - nor should a dev be on the hook for lifetime tech support for a silly lightbulb or other trinket. Yeah, the infrastructure as it currently stands has issues, but while regulatory pressure will whip the big players into line, it could also easily choke out smaller players and startups.

On the subject of answers, rather than questions... I have a funny story. The Xbox One has this neat feature where you can control your console via an app on a PC or phone on your local area network. The default setting was that any device on the network the Xbox was connected to could control it. Imagine this in a college dorm. I saw a lot of Xboxes available to control. So, after testing with a friend (yep, I could easily interfere with whatever), I developed a key combo that I could rapidly input from any console state which would open the settings menu and disable the remote control feature, locking out my own access (and I'd know it had worked because I'd be disconnected). That's right: I effectively developed a virus which patched the vulnerability. If attackers have the advantage in this field, then maybe we should put more effort into thinking about friendly counter-attackers. If the silly IoT device can be pwned, then it can be pwned for good, as it were. Does anyone know of any groups working in this area, or any research done towards it? Pen-testing and other white hat hacking activities I know about, but does anyone officially do this kind of guerrilla-patching?

I'm having difficulty finding any authoritative or historical resources on this, but I recall that "good" viruses were at one time proposed that would do just that: run around, see if they could infect via a given method, then patch it and self-destruct.

Ultimately the idea was judged a bad one because of the difficulty of getting it to work as expected, fear that the fix would introduce more issues, liability concerns, and so on - and probably some ethics debates about computer intrusion even for the purpose of securing the device.

I'm not really sure what stance to take on such an issue, since the idea behind it is well-intentioned, but I feel like it can lead to unintended consequences for which ultimately no one would be liable. For my personal machines I have fairly vanilla setups, but many of my friends and colleagues have rather intentionally complex setups and most definitely would object to someone accessing their setup and making changes without their permission.

I remember a story¹ about one that was tried. It just took down the university network where it was released.

1 - I'm sure it is in one of my undergrad textbooks. In other words, no way I'll find it again.

> "Increased ICMP traffic"

Wow. If a virus propagated like this on today's networks, would such traffic even make a noticeable dent in the available bandwidth?

Anyway, I hadn't heard of this virus - it's super neat. Patching its own infection vector and even explicitly removing an existing virus from the target machine... The article loathes it for how overtly it affects machines (forced restart to apply an update) and networks (congestion), but the work it attempted to do was decidedly good. Sounds to me like it worked well, but had poor execution in accounting for the network effects it would have. (I doubt it was rigorously tested in a prod environment ;) ) If anything, I'd see this as a case study that this kind of offense-as-defense strategy has the potential to work... It's just that nobody wants to take responsibility for doing so.

  my father worked for a company in the healthcare industry - I'd hear stories of multi-month long FDA audits of their hardware after a 'statistically significant' number of failures in the field.
Healthcare industry? Statistically significant (which I note you put in scare quotes) failures?

I damn well hope such incidents are taken very seriously by the FDA.

Yeah, I think his point was that it's not sustainable for a company selling $20 webcams to get that sort of scrutiny.

The counterpoint is that, if the webcam is used in failure-critical situations, then it absolutely should be under that level of scrutiny. The problem is finding how you can define that operational scenario in law.

In a world where random webcams can mount DDoS attacks on basically any internet service, is there any non-failure-critical situation for an internet-connected device? (Honest question.)

One alternative is you get the standards and quite a few components (eg FOSS) that help meet them. You aren't evaluated unless you're sued due to harms from your product. The potential fines or damages go up with the amount of negligence they find. This way, it only costs money when harm happens.

Meanwhile, people wanting to use higher security as a differentiator can get evaluated ahead of time as some do now.

That's exactly how it works now. And it doesn't work anymore - the scale is different.

No it's not. I can't sue Microsoft for preventable buffer overflows in Windows. The evaluations they target, which government accepts, don't even look at the source. There's no software liability or source-based evaluation requirement for mass-market software at the moment.

Matter of fact, the NSA's new scheme only requires a 90-day evaluation at EAL1 (certified insecure).

It's not unreasonable to expect that a device connected to the Internet should be patchable over the same connection.

As for the influence on other countries -- blocking their traffic is an effective way to convince them of the need to take action. For sure this still means temporary disruptions and maintenance cost for every operator, but it's part of the "cost of living" on the Net as long as the others don't catch up. Take it or leave it.

(yes, I am aware of the security risks of hijacking the updates, but it's still a better control than no control at all)
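On hijacked updates: the standard mitigation is to authenticate the image before applying it. A minimal, dependency-free sketch in Python, using an HMAC with a provisioned shared secret purely as a stand-in - real firmware-update schemes use asymmetric signatures (e.g. Ed25519) precisely so that no signing key ever ships on the device:

```python
# Sketch of update authentication. DEVICE_KEY and both function names are
# illustrative; in a real scheme the device holds only a public key.
import hashlib
import hmac

DEVICE_KEY = b"example-shared-secret"  # placeholder; provisioned per device

def sign_update(firmware: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Vendor side: produce a MAC over the firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: constant-time check before flashing anything."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

With a check like this in the boot/update path, an attacker who can redirect the download still can't push code the device will accept.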

Don't limit that to the Third World. Everywhere, it's possible to open a company, sell some products, close the company and walk away with the money. And I'm not saying it has to be malicious.

It's not hard to find bugs and problems that were hidden in unexpected places or triggered by weird combinations of inputs. Simple mistakes (or well-planned backdoors) like goto_fail and Heartbleed can blow up long after they were created. And then we will sue, get money, send people to jail - but the damage will already be done.

A lot of horrible things are precedented. That something similar already exists is an awful argument for doing it.

"I asked him a pointed question about how this scaled to the _international_ level, which he decided mostly not to answer (focus on domestic policy first, and such)"

He did answer that on his blog however:

"It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change."[0]

And I mostly agree with him. If major markets start requiring certification of security (maybe something like the FCC and CE marks), the rest of the world will follow, as they want to trade in these premium markets. Sure, it will not solve the problem overnight, but at least it could make it less severe.

0. https://www.schneier.com/blog/archives/2016/11/regulation_of...

If you want to see a simple example of this in action, consider the impact that the California Air Resources Board has had on vehicle emissions globally. If you want to sell a car in CA it has to meet CARB standards. This effectively means that if you want to sell a car in the US it has to meet CARB standards. As a consequence, one of the largest car markets requires a specific minimum level of emissions control, and almost all cars built for an export market can meet these standards or be easily modified to do so.

One state moved the needle here just by virtue of being the largest component of what was the largest market. There is no reason the same could not be done for internet-connected devices.

CA regulation did not stop Volkswagen from doing what they did. And they did it for many years.

It didn't stop them from cheating, but they had to make the effort (a quite complicated effort at that) rather than just ignoring the regulation.

Nice to see he's thought about it since then. But anyways:

> "simply because it makes no sense to maintain two different versions of the software"

Except this isn't true. European law already requires Microsoft to separate certain features from its core product for European distribution - which is why there's Windows N for Europe and normal Windows for the US. Other pressures (such as government-mandated backdoors, or regional media licensing) could easily create a situation where a company would find it beneficial to shard their software by market. (Which is, again, a load only larger companies can easily take on or optimize for!)

Microsoft already had their market segmented, so it was not hard to implement. I really doubt that some no-name Chinese router maker would bother to keep a few versions of firmware versus just disabling the default admin password (yes, even that would be a huge step forward).
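Even the "disable the default admin password" fix doesn't require per-market firmware builds: one common approach is to derive a unique initial password for each unit at the factory and print it on the label. A Python sketch - the secret, alphabet, and parameters here are illustrative, not any vendor's actual scheme:

```python
# Derive a per-device default password from the serial number and a
# factory secret, so no two units ship with the same credentials.
import hashlib

FACTORY_SECRET = b"rotate-me-per-production-batch"  # placeholder secret

def default_password(serial: str, length: int = 10) -> str:
    """Deterministic per-device password for the label printer."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", serial.encode(), FACTORY_SECRET, 100_000
    )
    # Map the digest to a label-friendly alphabet (no 0/O or 1/l).
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"
    return "".join(alphabet[b % len(alphabet)] for b in digest[:length])
```

The same firmware ships everywhere; only the label differs per unit, which is why this kind of requirement is cheap to comply with globally.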

I think we're probably transitioning into the stage where the global computing infrastructure finally resembles biologic systems. Malicious agents will be everywhere. Attacks will be carried out all the time, at various scales. Anything that wants to survive in this environment will need to have some sort of immune system. Likely there will be a hierarchy of defenses at various levels, as a matter of business as usual.

Biological systems are different from our infrastructure in that it seems no part of an organism tries to fuck over another to get more resources for itself. I assume this is a result of a local minimum (an organism that fights within itself would kill itself quickly), but maybe there are some mechanisms ensuring cooperation within the organism too? E.g. something that prevents a developing brain from growing to capture more oxygen from the bloodstream than it should, starving out other organs? Maybe someone with a biology background can chime in on this?

Cancer is a central example of one part of an organism seizing resources in a way that threatens the organism itself.

Well, you're looking at the Internet as a single organism. I don't think that's appropriate. There are many, many selfish entities that populate it, that would happily wreak massive destruction.

Case in point - the recent IoT Dyn attacks.

If we had other civilizations to compete with for resources we could evolve into the cogs of a single superorganism!

DISCLAIMER: I don't necessarily think the types of regulations proposed are the answer, but at the very least there should be some responsibility on people using high-bandwidth connections. It would probably have to be legally mandated.

Even if his proposal doesn't fix the rest of the world it would set an example for other countries.

It would also give other countries more that they can do themselves. Like with the DNS DDoS, the traffic originated in the US -- even if the perpetrator was in another country.

Getting the US locked down means that those countries can start taking responsibility (and I mean that in a positive way) for their own piece of the internet. Right now there isn't even a place for them to start.

I generally agree with this, but with respect to IoT, let's be honest: these are things, and they are connecting to the Internet from some country.

Would it be so bad if the law required "any connected device sold in the US must permit firmware updates, at least for security patches?"

If the devices are on a US network, it seems reasonable to require them to meet certain standards. Other countries can set their own standards, just like we have FCC standards for wifi/spectrum usage in the US, and in the EU the CE sets the standards. This has not balkanized wifi; if properly done, I don't think it would balkanize IoT.

The problem with "permitting firmware updates, at least for security updates" is that there are no vendors that only do security updates. Our industry is guilty of using security updates as the proverbial carrot. :(

Indeed; since I first had a cable connection, I remember the software industry using automated updates as a continuous delivery platform. That's why I always hated updates and often disabled them in the past (back when I could feel the impact on my Internet connection and processing power).

He addresses this in his testimony when he talked about emissions standards in California affecting emissions standards across the US. I believe his words were, "We don't need two versions."

Oftentimes, regulations in one jurisdiction will impact behavior and equipment sold in other jurisdictions. This is especially true when the regulations are not particularly onerous, the jurisdiction with the regulations represents a large share of the total market, and the number of suppliers forced to comply is limited.

I worked for a long time in the embedded networking equipment business and was involved in the rollout of RoHS. https://en.wikipedia.org/wiki/Restriction_of_Hazardous_Subst...

RoHS is an EU directive, but it had the larger effect of limiting hazardous substances in devices across the world. Manufacturers wanting to sell equipment into the EU had to get rid of things like lead solder and replace them with safer alternatives. It was simply too expensive for most hardware manufacturers to run two different manufacturing processes, one for the EU and another for everyone else. So now in the USA you have a difficult time finding lead solder in new electronic devices, and it's because of the EU and RoHS.

Hardware device manufacturing these days is incredibly concentrated. These companies will abide by whatever regulations the USA or the EU forces them to abide by because they want to sell into these markets. And it's likely not cost effective for them to create parallel assembly for smaller markets.

Security isn't easy. It remains to be seen whether regulation will actually be helpful in getting it right.

Personally, I don't believe that it will, but that's just my opinion.

As with most things in this field, it's one thing to talk about it, another to do it. Schneier has been talking about software liability for a long time.

I'm not sure regulation in this country is capable of making it easier to provide better security.

It was done successfully before. It resulted in the most secure products ever to exist. A few vendors still use such methods, with excellent results during pentests. It also preempted major vulnerabilities in popular stuff. Bell, of the Bell-LaPadula security model, describes it here:


Well, the current status quo is just unacceptable. A major data breach every week, and that's just what is being discovered. I suspect most data breaches are not publicized or even spotted. We are discussing how best to store passwords for when they will be stolen, taking it pretty much for granted that they will be. And the big players are being hit as much as small fly-by-night shops.

I just don't see what technological change is coming that will change that. Not only that, but because of the big-data buzz, even privacy-conscious companies feel the urge to collect and store ever more personal data. How is that going to end well?

I mean the only reason lawmakers and regulators are not all over this issue is because they don't realise how bad things are.

They realize it. Impenetrable systems are also impenetrable to the FBI and NSA, which advise against the good stuff being mandated. The bribes they take from COTS vendors also brought in a preference for insecure solutions from those same vendors. Everyone paying them wants to maximize profit, too. Costly rewrites would cut into that.

So, they're willingly covering their ears while sitting on their asses. At least those on major committees.

One positive thing that might result is FCC or another agency's requirement to open the source of the critical parts of the system.

An analogy: if you produce a soda, you can keep the recipe secret. If you produce a potent regulated medicine, you publish detailed formulas and include them in every box of the drug.

I like your analogy between drug formulas and FOSS. I might use that talking to decision-makers in the future.

FCC is openly hostile to Open Source, so I doubt they would require anything to be Open Source

Is it? I never knew that. Could you provide some examples of that (besides the whole OpenWRT / patching your router debacle)? I'm curious.

> Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy today.

You mean like a national network? I believe something like that existed in several countries. And the Internet of old grew over them, as a way to interconnect nodes within disparate networks (thus the name Internet).

Give it enough time, and the national networks will merge into a single global network again.

I would guess that if several major markets have reasonably interoperable - and non-contradictory - security requirements, then the strongest will probably be implemented by default.

I'm not sure it's impossible to have reasonably interoperable security requirements encompassing the U.S. and EU, although recent events sure do make me more pessimistic.

This might mean a change in the programming industry. A move to a more regulated industry, where programmers must have some sort of certification to practice. Much like other professions, such as accountants, lawyers and doctors.

This will be a hurdle, but it might mean better pay and less cleaning up after cowboy programmers pumping out rubbish.

And no more "check out my new Javascript framework".

I dream of a world where JavaScript frameworks mature and last.

Something similar was proposed in Kenya a few months ago. Some kind of certification for "ICT Practitioners".

There was a huge backlash against it.

Yeah it won't be a pretty transition if it does happen.

In the '80s K. Eric Drexler warned of the danger of nanotechnology in his book "Engines of Creation". The terrors of the "gray goo" haven't materialized, perhaps in part because of his warnings. I believe that the IoT does need to be more secure, but I was also part of a start-up in the '90s that had an Internet connection for years without a corporate firewall. I think we'll continue to learn and adopt improved security as we go.

As a systems architect that spends a lot of time thinking about scaling, redundancy and resiliency, it's also my opinion that we need to do some work on DNS ... it's probably the most vulnerable part of the "Internet stack".

I read Drexler's book and I was under the impression he was way too optimistic - he spent a lot of space trying to convince readers that there can be a balance between attacker and defender. I honestly don't see it. Whether on the Internet or with nanotech, attackers always have an advantage - because they choose when to attack, and they can strike when they're sure current defences won't hold them. Patching that would IMO require defenders to have exclusive access to a smarter-than-human AI.

The sad thing is that you're probably in a lawyer's mindset, delivering the mantra "if it doesn't work at the country level, push it up a level." But international law is not law as we know it. It is countries in interaction. Everything there is different: scale, speed, error rate, response rate, everything. And setting aside the "international law" part, it started to change in the early '00s, around 2005-2008. The next 10, 20, or even 30 years will be a mess at the international level, if not worse. So expecting global collaboration on security is, well, naive. Hence Mr. Schneier is right to skip the global level before going national. He is also right that it is far easier to claim leadership with existing, working legislation than to come to the table at the global level and try to pull it towards imaginary rules. So Mr. Schneier was right to skip it. As for his lobbying, without seeing his material it would be premature to jump to conclusions.

What better solution to the problem do you have?

Don't connect cars to the Internet? I can't believe companies like GM do it. They recently had huge problems with ignition keys, but I guess they are confident that putting cars online will be OK without hiccups.

Hahaha, right?

You know what inspires confidence? Built-in 4G connections during a 5-million-vehicle safety recall.


So when did he get bought? Legislation? He's getting old or something. The Bruce Schneier I remember was a lot more rebellious.

We just voted Donald Trump into the White House. Sanity IS rebellious.

The problem can only be made worse by involving government. Sad that Bruce Schneier does not understand that reality

I'm not saying that you're right or wrong, but why do you think that?


What you will end up with is likely something that is LESS secure but now mandated for anything made/sold legally in the US. The rest of world will be free to do better things

You will also end up with mandated Backdoors, weakened encryption, and a variety of other NSA/FBI wish list items that will be included in any "Cyber Security" bill

I have no interest in the US congress regulating IoT devices, that will not be good for liberty, or security

History under the last mandate was actually a pile of products and research projects that were more secure than anything today. That included the Boeing SNS server, BAE XTS-400, Aesec GEMSOS, especially KeyKOS, a secure VMM, an embedded OS in Ada, GUIs immune to keylogging/spoofing, databases immune to external leaks, and so on.

All that disappeared in favor of Windows NT and UNIX without security the second they eliminated the regulations. There's just enough of a niche market for a few suppliers to be left, especially in defense contracts. Most are gone, though, because private markets don't produce strong security when the incentive is to turn avoided costs into profits.

So then I guess you also want a personal computer or home security system to cost $1,000,000, have no open-source technology, and generally be out of reach of the average person, let alone low-income people.

If Boeing will be the only company allowed to make IoT products then you might as well kill the IoT industry, as each product will cost 100000x more than it should.

I classify that as BAD... sad that you do not.

Security is one thing, but if it comes at the expense of open source and accessible systems, then I choose insecurity.

I will choose Freedom over Government every time

First you miss history. Now economics. High-assurance development of the TCB, the critical part of the system needing it, cost around 35-50% extra in development. Volume sales spread that out. Windows would cost $100-200 with key pieces done that way. Basically the same. The only negative effect is that rigorous development slows the release and upgrade cycle. Many firms maximize profit by shipping often and fixing problems later. The market currently rewards that. It's why Lipner, who led a high-assurance VMM, favored quick shipping over security in the SDL he built at Microsoft.

Now, with regulation, you'd still have the same software being developed. The components would be simpler (a JSON subset vs XML). Costs would spread out in volume. People would get used to new, huge features taking "two or three quarters" (Lipner) instead of a few weeks. Non-paid or non-critical usage could be used to test out proposals without building the whole thing.

As far as IoT, solutions already exist that are either inexpensive at the OEM level or cheap per unit. They're just being ignored by most of the market since there are no regs or liability. Hell, I'm typing this on a device running one underneath the OS that cost the same as a device without one. ;)

So you're saying that things like the Clean Air Act, the EPA, and pollution regulations don't work?

Because I look around and it's been working remarkably well.
