Mark this mindset as the beginning of the end of the open, inclusive programming world as we know it.
Schneier visited RIT (my alma mater) last spring, and his presentation revolved around the threat presented by IoT and the growing need for national legislation to encumber it. I asked him a pointed question about how this scales to the _international_ level, which he mostly declined to answer (focus on domestic policy first, and such). Because the answer is simple: _it doesn't_. Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy today. I value this freedom a lot.
I respect Mr. Schneier for his poignant responses to popular security issues and his ability to be a public face for computer security, but I strongly disagree with the future he's lobbying for. Maybe I just can't accept the hard reality that "security isn't easy" and that government regulation is the only way to force security on people.
Another possible model would be something like having Underwriters Laboratories and other independent organizations check the devices.
This is never going to be perfect, but it doesn't need to be. The goal is to make sure that devices people buy at the store are reasonably secure. In previous eras, the goal of new regulation was to make sure that you can still listen to the radio and watch TV, and that people don't often get electrocuted by their appliances. By and large it seems to have worked.
For more: https://en.wikipedia.org/wiki/Nationally_Recognized_Testing_...
If iPhones cost $7000 instead of $700, there would be far fewer iPhones. Same for IoT lightbulbs (though they're pretty expensive already).
This is exactly what I was talking about: insurance companies will use the fact that you need access to something to erect a giant money gate in front of it, justifying it with advanced tech that helps in some ways but is almost always used out of context, based on the control policies the insurance companies force practitioners to follow.
There are around 9,000 CVEs this year so far. Should devices be checked against all of them? How about next week - does the vendor have to go back and check your fridge? Do they have to patch it? For how long? Who pays for all that? A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.
I don't have any answers, only questions.
Yes, but what is the negative externality the market is not currently capturing when these devices are conscripted into botnets and used against others? Simple economics says that the $20 device is $20 because somebody else is paying for the security mistakes of the shitty device's developers.
On the subject of answers, rather than questions... I have a funny story. The Xbox One has this neat feature where you can control your console via an app on a PC or phone on your local area network. The default setting was that any device on the network the Xbox was connected to could control it. Imagine this in a college dorm. I saw a lot of Xboxes available to control. So, after testing with a friend (yep, I could easily interfere with whatever), I developed a key combo that I could rapidly input from any console state which would open the settings menu and disable the remote control feature, locking out my own access (and I'd know it had worked because I'd be disconnected). That's right, I effectively developed a virus which patched the vulnerability. If attackers have the advantage in this field, then maybe we should put more effort into thinking about friendly counter-attackers. If the silly IoT device can be pwned, then it can be pwned for good, as it were. Does anyone know of any groups working in this area, or any research done towards it? Pen-testing and other white-hat hacking activities I know about, but does anyone officially do this kind of guerrilla-patching?
Ultimately the idea was deemed a bad one because of difficulties getting it to work as expected, pressure and fear that the fix would introduce more issues, liability issues, and so on, and probably some ethics debates about computer intrusion even for the purpose of securing the device.
I'm not really sure what stance to take on such an issue: the idea behind it is well-intentioned, but I feel like it can lead to unintended consequences that ultimately leave no one liable. My personal machines have fairly vanilla setups, but many of my friends and colleagues have rather intentionally complex setups and would most definitely object to someone accessing their setup and making changes without their permission.
1 - I'm sure it is in one of my undergrad textbooks. In other words, no way I'll find it again.
Wow. If a virus propagated like this on today's networks, would such traffic even make a noticeable dent in the available bandwidth?
Anyway, I hadn't heard of this virus - it's super neat. Patching its own infection vector and even explicitly removing an existing virus from the target machine... The article loathes it for how overtly it affected machines (forced restart to apply an update) and networks (congestion), but the work it attempted to do was decidedly good. Sounds to me like it worked well but had poor execution in accounting for the network effects it would have. (I doubt it was rigorously tested in a prod environment ;) ) If anything, I'd see this as a case study that this kind of offense-as-defense strategy has the potential to work... It's just that nobody wants to take responsibility for doing so.
My father worked for a company in the healthcare industry - I'd hear stories of multi-month-long FDA audits of their hardware after a "statistically significant" number of failures in the field.
I damn well hope such incidents are taken very seriously by the FDA.
The counterpoint is that, if the webcam is used in failure-critical situations, then it absolutely should be under that level of scrutiny. The problem is finding how you can define that operational scenario in law.
Meanwhile, people wanting to use higher security as a differentiator can get evaluated ahead of time as some do now.
Matter of fact, the NSA's new scheme only requires a 90-day evaluation at EAL1 (certified insecure).
As for the influence on other countries -- blocking their traffic is an effective way to convince them of the need to take action. For sure this still means temporary disruptions and maintenance cost for every operator, but it's part of the "cost of living" on the Net as long as the others don't catch up. Take it or leave it.
(Yes, I am aware of the security risks of hijacking the updates, but it's still a better control than no control at all.)
It's not hard to find bugs and problems that were hidden in unexpected places or triggered by weird combinations of inputs. Simple mistakes (or well-thought-out backdoors) like goto fail and Heartbleed can blow up long after they were created. And then we will sue, get money, send people to jail, but the damage will already be done.
He did answer that on his blog however:
"It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change."
And I mostly agree with him. If major markets start requiring certification of security (maybe something like FCC and CE marks), the rest of the world will follow, as they want to trade in these premium markets. Sure, it will not solve the problem overnight, but at least it could make the problem less severe.
One state moved the needle here just by virtue of being the largest component of what was the largest market. There is no reason the same could not be done for internet-connected devices.
> "simply because it makes no sense to maintain two different versions of the software"
Except this isn't true. European law already requires Microsoft to separate certain features from its core for European distribution - which is why there's Windows N for Europe and normal Windows for the US. Other pressures (such as government-mandated backdoors, or regional media licensing) could easily create a situation where a company would find it beneficial to shard its software by market. (Which is, again, a load only larger companies can easily take on or optimize for!)
Case in point - the recent IoT Dyn attacks.
Even if his proposal doesn't fix the rest of the world it would set an example for other countries.
It would also give other countries more that they can do themselves. Like with the DNS DDoS, the traffic originated in the US -- even if the perpetrator was in another country.
Getting the US locked down means that those countries can start taking responsibility (and I mean that in a positive way) for their own piece of the internet. Right now there isn't even a place for them to start.
Would it be so bad if the law required "any connected device sold in the US must permit firmware updates, at least for security patches"?
If the devices are on a US network, it seems reasonable to require them to meet certain standards. Other countries can set their own standards, just like we have FCC standards for wifi/spectrum usage in the US and CE marking in the EU. This has not balkanized wifi; if properly done, I don't think it would balkanize IoT.
Oftentimes, regulations in one jurisdiction will impact behavior and equipment sold in other jurisdictions. This is especially true when the regulations are not particularly onerous, the jurisdiction with the regulations represents a large market relative to the total market, and the number of suppliers forced to comply with the regulations is limited.
I worked for a long time in the embedded networking equipment hardware business and was involved in the rollout of RoHS.
RoHS is an EU directive, but it had the larger effect of limiting hazardous substances in devices across the world. Manufacturers wanting to sell equipment into the EU had to get rid of stuff like lead solder and replace it with safer alternatives. It was simply too expensive for most hardware manufacturers to run two different manufacturing processes, one for the EU and another for everyone else. So now in the USA you have a difficult time finding lead solder in new electronic devices, and it's because of the EU and RoHS.
Hardware device manufacturing these days is incredibly concentrated. These companies will abide by whatever regulations the USA or the EU forces them to abide by because they want to sell into these markets. And it's likely not cost effective for them to create parallel assembly for smaller markets.
Personally, I don't believe that it will, but that's just my opinion.
As with most things in this field, it's one thing to talk about it, another to do it. Schneier has been talking about software liability for a long time.
I just don't see what technological change is coming that will change that. Not only that, but because of the big data buzz, even privacy concerned companies feel the urge to collect and store ever more personal data. How is that going to end well?
I mean the only reason lawmakers and regulators are not all over this issue is because they don't realise how bad things are.
So, they're willingly covering their ears while sitting on their asses. At least those on major committees.
An analogy: if you produce a soda, you can keep the recipe secret. If you produce a potent regulated medicine, you publish detailed formulas and include them in every box of the drug.
You mean like a national network? I believe something like that existed in several countries. And the Internet of old grew over them, as a way to interconnect nodes within disparate networks (thus the name Internet).
Give it enough time, and the national networks will merge into a single global network again.
I'm not sure it's impossible to have reasonably interoperable security encompassing the U.S. and EU, although recent events sure do make me more pessimistic.
This will be a hurdle, but it might mean better pay and less cleaning up after cowboy programmers pumping out rubbish.
There was a huge backlash against it.
As a systems architect that spends a lot of time thinking about scaling, redundancy and resiliency, it's also my opinion that we need to do some work on DNS ... it's probably the most vulnerable part of the "Internet stack".
You know what inspires confidence? Built-in 4G connections during a 5-million-vehicle safety recall.
What you will end up with is likely something that is LESS secure but now mandated for anything made/sold legally in the US. The rest of the world will be free to do better things.
You will also end up with mandated backdoors, weakened encryption, and a variety of other NSA/FBI wish-list items included in any "Cyber Security" bill.
I have no interest in the US Congress regulating IoT devices; that will not be good for liberty or security.
All that disappeared in favor of Windows NT and UNIX without security the second they eliminated the regulations. There's just enough of a niche market for a few suppliers to be left, especially for defense contracts. Most are gone, though, because private markets don't produce strong security when the incentive is to turn avoided costs into profit.
If Boeing will be the only company allowed to make IoT products then you might as well kill the IoT industry, as each product will cost 100000x more than it should.
I classify that as BAD... sad that you do not.
Security is one thing, but if it comes at the expense of Open Source and accessible systems then I choose insecurity.
I will choose Freedom over Government every time
Now, with regulation, you'd still have the same software being developed. The components would be simpler (JSON subset vs XML). Costs spread out in volume. People would get used to new, huge features taking "two or three quarters" (Lipner) instead of a few weeks. Non-paid or non-critical usage could be done to test out proposals without building whole thing.
As far as IoT goes, solutions already exist that are either inexpensive at the OEM level or cheap per unit. They're just getting ignored by most of the market since there are no regs or liability. Hell, I'm typing this on a device running one underneath the OS that cost the same as a device without one. ;)
Because I look around and it's been working remarkably well.
On a scale of engineering "hardness" (meaning, how well we can predict all side effects of an action), software engineering is closer to medicine than to, say, civil engineering.
We know stresses, materials, and how they interact. We can predict what will happen, and how to avoid edge cases.
Software? Is there any commonly used secure software? Forget about Windows and Linux. What about OpenBSD?
Did it ever have a security hole?
And that's just the OS. What about software?
There are just too many variables.
So what will happen?
There will be "best practices" enshrined by law. Most will be security theater. Most will remove our rights, and most will actually make things less safe.
Right now, the number one problem of IoT security is fragmentation. Samsung puts out an S6, three years later stops updating it, a hole is found, too bad. Game over.
The problem is that "locking firmware" is common security theater; if there's ever a legal security requirement on IoT, it'll require a locked bootloader and firmware.
And you can't make a requirement to "keep code secure", because then the question becomes: for how long? Five years? Ten?
This level of hubris is pretty revolting. Software engineering is easy. Writing secure software is easy. The difference between civil engineering or medicine and software engineering is that practitioners of the former are held responsible for their work, and software engineers are not and never have been.
Nothing will improve until there are consequences for failure. It's that simple.
I agree the lack of consequences is a big part of the problem. But this only hints at a solution strategy; it doesn't describe the problem itself. The problem is that software is so internally complex that it's beyond the comprehension of a human mind. To ultimately solve it and turn programming into a profession, we'd need to rein in the complexity - and that would involve actually developing detailed "industry best practices" and sticking to them. This would require seriously dumbing down the whole discipline.
 - which I'm not sure I want; I like that I can do whatever the fuck I want with my general-purpose computer, and I would hate it if my children couldn't play with a Turing-complete language before they graduate with an engineering degree.
 - which we basically don't have now.
No, sorry, software does not inherently have more degrees of freedom than e.g. building a bridge has. The reason other engineering fields are perceived as "limiting" is exactly because they have standards: they have models of what works and what doesn't, and liability for failing to adhere to those standards.
I would argue that the lack of standards is exactly what makes software engineering look like voodoo -- but it is because of immaturity of the field, it's not an inherent property. Part of the reason software is so complex is exactly because engineers afford themselves too many degrees of freedom.
And I disagree that establishing standards constitutes a dumbing down of the discipline, in fact the opposite: software engineering isn't, exactly because every nitwit can write their own shoddy software and sell it, mostly without repercussions. That lack of accountability is part of what keeps software immature and dumbs down the profession. As an example, compare Microsoft's API documentation with Intel's x86 Reference Manual: one of the two is concise, complete, and has published errata. The other isn't of professional quality.
So, it's definitely not easy. The people that pull it off are usually quite bright, well paid, have at least one specialist, and are given time to complete the task. The introduction of regulations might make this a baseline, with lots of reusable solutions. We'd lose a lot of functionality that's too complex for full verification, with slower development and equipment, though. The market would fight that.
Of course it's not that simple. Clearly you've never written much, if any, real software.
You want to make an SSL connection to another web site in your backend. You use a library. If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie? You used a library.
Do you think people will write free libraries if the "consequences" fall back on them? If not, have you even the slightest understanding of how much less secure, less interoperable and more expensive things will be if every developer needs to implement every line themselves to cover their backs? Say goodbye to anyone except MegaCorps being able to write any software.
Where does this end? Would we need to each write our own OSes to cover ourselves against these "consequences", our own languages?
Anyone can practise carpentry, but if someone is going to do so professionally and build structures that can cause injury or damage if they fail, then they should be accountable for the consequences. This is why indemnity insurance exists.
In software, a lack of rigour is fine for toy applications, but when livelihoods and safety become involved, we need to be mindful of the consequences and prepared to take responsibility, just like everyone else in society is expected to do.
1. Your system might be hacked if connected to a hostile network. Avoid that by default.
2. If connected, use a VPN and/or deterministic protocols for the connections. Include ability to update these. No insecure protocols listening by default. Sane configuration.
3. Certain languages or tools allow easy code injection. Avoid them where possible.
4. Hackers like to rootkit the firmware, OS, or application to maintain persistence. Use an architecture that prevents that, or just boot from ROM with signed firmware if you can't (see the sketch below).
5. DDoS detection, rate-limiting, and/or shutdown at the ISP level. Penalties for customers that let it happen too often, like how insurance does with wrecks.
That's not a big list, even though it covers quite a lot of hacks. I'm with the other commenter in thinking it's not all the unknowns that are causing our current problems.
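To make item 4 concrete, here's a minimal sketch of boot-time firmware verification in Python. Everything specific is an assumption for illustration: the file names, the key location, and the flashing flow are hypothetical, and it uses the third-party `cryptography` package rather than any particular vendor's SDK.

```python
# Hedged sketch of item 4: refuse to apply any firmware image that was not
# signed by the vendor's key. File names and key location are hypothetical.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def firmware_is_authentic(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the image verifies against the vendor public key."""
    pubkey = Ed25519PublicKey.from_public_bytes(Path(pubkey_path).read_bytes())
    try:
        # verify() raises InvalidSignature if image or signature was tampered with
        pubkey.verify(Path(sig_path).read_bytes(), Path(image_path).read_bytes())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    if not firmware_is_authentic("fw.bin", "fw.bin.sig", "vendor_ed25519.pub"):
        raise SystemExit("refusing to flash: signature check failed")
```

On a real device the public key would live in mask ROM or fused OTP storage so an attacker can't swap it along with the image; the Python is only to show how small the check itself is.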
On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier, and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?
> If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie?
With you, unless you have a contract with your supplier stating otherwise.
> On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier, and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?
And even if you implement a rigorous audit of the code, that means you can't update, because you have to go through the same audit rigmarole each time a bug is found. By the time you finish auditing your software, a new vulnerability will probably have been discovered.
Not to mention this essentially makes open source software nonviable.
As far as there being a finite amount: the vulnerabilities coming in fall into patterns similar enough that people are making taxonomies of them.
The seL4 project has produced a formally verified microkernel, open-sourced along with end-to-end proofs of correctness.
On the web front, Project Everest is attempting to produce a full, verified HTTPS stack. The miTLS sub-project has made good headway in providing development and reference implementations of "safe" TLS.
These are only a few projects, but imo they're a huge step in the right direction for producing software solutions that have a higher level of engineering rigor.
 n.b. I'm not crypto-savvy, so I can't comment on what is or isn't 'safe' as any more than an interested layperson.
We need to ruthlessly eradicate undefined behavior at all levels of our software stacks. That means we need new operating systems. We need new programming languages. We need well-thought-out programming models for concurrency that don't allow the programmer to introduce race conditions accidentally. We need carefully designed APIs that are hard or impossible to mis-use.
Rust is promising. It's not the final word when it comes to safety, but it's a good start.
An interesting thought experiment is what would we have left if we threw out all the C and C++ code and tried to build a usable system without those languages? For me, it's hard to imagine. It eliminates most of the tools I use every day. Maybe those aren't all security critical and don't all need to be re-written, but many of them do if we want our systems to be trustworthy and secure. That's a huge undertaking, and there's not a lot of money in that kind of work so I don't know how it's going to get done.
It depends on the CPU.
The problem is that C was designed to be as close as possible to the hardware, and in some places (an RTOS? the kernel?) speed is critical.
There are a huge number of ways security can go haywire even with perfectly defined behavior. Make no mistake, I love watching undefined behavior slowly getting fixed, but I think language nerds are too fixated on UB to see that it's not the big deal and won't get rid of our problems.
Another problem language nerds miss is that we can adapt existing code and tools (in "unsafe") languages to weed out problems with undefined behavior. It's just that people aren't interested enough for it to be mainstream practice. Yet the bar is much lower than asking everybody to rewrite everything in a whole new programming language. So why do they keep proposing that a new programming language is going to be the solution? And if people just don't care about security, well, we would have all the "defined behavior" security flaws in the new code written in the new shiny programming language.
However, in the future where software can do everything, there is no such thing as "limited trust." If you trust someone to operate on your car, you are trusting them with everything the car interacts with. Which... quickly explodes to everything.
The opposite. When the field was in its infancy, one was able to keep whole stacks in his head.
How complicated were CPUs in the 1960s?
How many lines of assembler were in the LM?
How many lines is Linux or FreeBSD kernel? Now add libc.
Now you have a 1970s C compiler.
Now take into account all the optimizations any modern C compiler does. Now make sure there's no bugs _there_.
Now add a Python stack.
Now you can have decent, "safe" code. Most hacks don't target this part; the low-hanging fruit is lower in the stack.
You need a math library. OK, import that. You need some other library. OK, import that.
Oops, there's a bug in one module. Or the admin setup wasn't done right. Or something blew.
Bam. You have the keys to the kingdom.
And this is all deterministic. Someone _could_ verify that there are no bugs here.
But what about neural networks? The whole point of training is that the programmers _can't_ write a deterministic algorithm for self-driving, and have to have a huge NN do the heavy lifting.
And that's not verifiable.
_This_ is what's going to be running your self-driving car.
That's why I compared software engineering to biology, where we "test" a lot, hope for the best, and have it blow up in our face a generation later.
New SQL injection vulnerabilities are being introduced every day. Passwords being MD5. Array boundaries being sourced from client data. I mean there are perhaps 5 to 10 coding errors that are generating most of the vulnerabilities.
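Since the same handful of bug classes keep coming up, here's a minimal stdlib-only Python illustration of two of them; the table, the hostile input, and the password are invented for the example:

```python
# Two of the recurring bug classes above, with the common fix for each.
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

name = "alice'; DROP TABLE users; --"  # hostile client input

# SQL injection: never splice client data into the statement text...
# bad = "SELECT * FROM users WHERE name = '" + name + "'"
# ...use a parameterized query instead, so the input stays data, not SQL:
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# Password storage: MD5 is trivially brute-forced; use a slow, salted KDF.
salt = os.urandom(16)
pw_hash = hashlib.scrypt(b"correct horse battery staple", salt=salt,
                         n=2**14, r=8, p=1)
```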
That's not the only problem. We also need to trust the users, who are either careless or malicious. But I'd like at the very least to be able to trust our systems.
Your thermostat maybe wants to talk with your alarm clock. I can get that. But it does not have to happen over the Internet. Let them talk locally.
You and I know the basics of computer security. We can take a crack at designing a secure system and maybe do okay. But the argument goes like this: yes, there are security-conscious programmers, but there are also many who are not. And those people work on products which make it to market. How do we stop those products from making it to market? Government intervention. He's given up on education and relying on the informed developer, and would rather rely on public policy. I find it a bit sad.
The government has a proven conflict of interest and a disincentive to harden the infrastructure. Vulnerabilities are valuable for espionage. And there are already regulations like HIPAA, PCI, etc., yet there are still breaches. Regulation will add complexity to business and will protect market share for the entrenched players who can afford to follow it. That will lead to further consolidation and reduced competition, while I feel that the opposite is needed.
The fiefdoms described in another post wouldn't be such a bad thing. At the nation state level, competition will also make for better security. Isolationism doesn't work out well in world history. Movement of goods and ideas does a better job at bringing countries together. I feel there's a pendulum swinging back towards isolationism but it goes back and forth over time.
This was addressed in the hearing. Schneier says it's a negative externality, like invisible pollution. The problem is that the consumers don't care because they aren't the ones getting attacked by their devices. Instead their devices are quietly using their residential internet connection to help DDoS websites. Would you pay $20 more to buy a different DVR that is less likely to annoy a random person you've never met over the internet? Most people don't care, and don't have the knowledge and experience to care.
Because consumers won't pay for it, the manufacturers don't bother to invest in security engineering. (These are low margin products after all). As a result we're all worse off.
Case in point: If a seller in one country can lower the costs of a good shipped to another country by scrapping responsible waste management in favor of polluting the commons (i.e., places where individual property rights claims are difficult to press), then "the magic of the market" is likely to increase - not decrease - the amount of pollution generated by the trade.
What libertarians don't like to admit is that they see free markets as more than ideal mechanisms for preserving efficient economies. They also see them as efficient sources of good and just governance. Like all belief systems, faith in the quality of governance supplied by free markets, while rational and well-supported to a point, can be taken to counterproductive extremes, where its maintenance stops operating like empiricism and starts to function in ways indistinguishable from fundamentalist religion.
Not sure how many sensible knowledgeable people want this to be the dominant force in guiding global network security.
This is a pretty good argument against the absolute value of principle, tbh.
For what it's worth, I tend to see principles like maps: useful - even indispensable - in many situations, but nevertheless abstractions and therefore imperfect guides to actual reality. Use maps, yes, but avoid mistaking them - or any system of symbols - for the things they represent (i.e., the map is not the territory).
Indeed, principle is a form of proxy wisdom for the young and inexperienced. It's better than nothing, but probably not enough to save you from at least a few episodes of hard reckoning. Assuming these don't get you killed, the places where principle doesn't serve are the ones where mature judgement develops.
Granted, there's a point where this doesn't serve either, but that's okay too since mortality always wins in the end.
The free market crowd lost all credibility after Enron, Worldcom, and the housing collapse. They had countless reasons why the above scenarios wouldn't happen. Reality proved their theories to be the complete and utter BS any rational human being could see from the start. In a utopian society a lot of ideas are great, unfortunately we've got reality to deal with, not utopia.
So that friction will always exist.
I mostly argue that myself: the problem is demand-side, with customers not putting money into security. There are a few things on the supply side to factor in, though:
1. Companies lie to customers about how necessary these vulnerabilities are. They condition them to expect it. They also charge them for fixes. It takes almost no effort to knock out the common ones, with only a 30-50% premium for high assurance of specific components. Even premium producers often don't do either, and those that do are so rare that most consumers or businesses might never have heard of them.
2. Years of lock-in via legacy code, APIs, formats, patents, etc. mean consumers often don't have a choice, or only have a few if they want the modern experience. Many times specific choices will even be mandated by groups like colleges. The market created the problem that now lets it milk a captive audience out of money. It won't solve that problem no matter what they want.
These two, especially 2 given patents and first-mover advantage, are huge reasons the market alone isn't likely to fix things. Some regulations could deal with them. The market can also fix things where these two don't apply. The market can also be combined with regulations, as in the DO-178B market, which regularly outputs high-quality software as far as I can tell.
No, the free market crowd would recognize the cost of poor security for what it is, an externality. That fits into basic economic theory as something that the market won't naturally correct. No one will demand security if they're not the ones suffering from the lack of it.
Governments need to figure out some way to price in the externality. A good example would be to allow companies that don't take reasonable security measures to be held accountable for all damages caused, not just the portion attributable to their negligence. If Dyn had grounds to sue any device maker who, say, ships a default password that isn't required to be changed during initialization before the device connects to the internet, then it would start to change things.
After enough of the fly-by-night device makers are hit with large judgments, it will start to become common practice to put every new device through security audits before introducing them into the market. Those reviews and the followup development will take time and cost money which will add to the final cost of the device, but make them safer.
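As a sketch of what "default password must be changed during initialization" could look like in device logic - the state file location and the password policy here are invented for illustration:

```python
# Hedged sketch: the device stays in setup-only mode until the factory
# default credential has been replaced. Paths and policy are hypothetical.
import json
from pathlib import Path

STATE_FILE = Path("/var/lib/device/provisioning.json")  # hypothetical location
FACTORY_DEFAULTS = {"admin", "password", "12345"}


def password_was_changed() -> bool:
    if not STATE_FILE.exists():
        return False
    return json.loads(STATE_FILE.read_text()).get("password_changed", False)


def set_initial_password(candidate: str) -> None:
    if len(candidate) < 12 or candidate.lower() in FACTORY_DEFAULTS:
        raise ValueError("refusing default or weak password")
    # a real device would store a salted hash here, never the password itself
    STATE_FILE.write_text(json.dumps({"password_changed": True}))


if not password_was_changed():
    print("setup mode only: refusing to join the network until a password is set")
```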
The market opportunity is not for non-hackable thermostats but for more advanced internet routers which can do deeper packet inspection at low latency.
That's not the same as "enforced regulation", but it certainly walks and quacks like it. No direct government intervention -> no freely interconnected internet.
And what about consumer offerings? There were no such restrictions. The simple reason an ISP did not create its own internet is that by connecting to the bigger net it increased the network size and the value of its offering. That's how the whole world, not just the US, settled on the Internet.
The private networks are still making walled gardens: no innovation in the fiber space, just "innovative" walled gardens in the Internet space. Same old same old, doing nothing of significance out of pure self-interest unless building on what government created and partly subsidizes. The latter groups usually also plateau into stagnation, sucking profit, while the open, less selfish models grow in new ways.
Inviting governments to regulate the Internet is an unnecessary risk.
The Internet is a set of protocols run on top of huge pipes that interconnect across many companies, nationalities, etc. They all speak a common language. You're trying to oversimplify it to prop up your false argument. What I just described was only achieved once... by governments and companies making money off government projects. No private industry has duplicated it.
Closest thing was the cell phone industry, where they limited what types of traffic were allowed, kept the bandwidth minimal for high profit, charged per amount of data, and so on. They eventually started looking more like the Internet by internally using Internet technologies funded by DARPA, NSF, etc. Originally, though, their model couldn't have created something like we see with the Web or Internet-run commerce. Just like Ma Bell before them with their schemes.
Private sector wouldn't have built the Internet on their own since it's too risky and costly with 3rd parties getting most of the benefit. Government did it better.
"What someone builds on top of it and/or how open/closed it is still does not negate the characteristics of the Internet."
It does within what they build. Much of online activity has transitioned from purely Internet technologies to Web technologies. Companies like Facebook and Slack are where content and activity is going instead of HTML web sites and IRC. The result is that people are locked into vendors just to get whatever experiences those vendors allow in their walled gardens. With most Internet tech, I could just move everything I had to a different client or server if what I was using wasn't good enough. Standard protocols existed to help. The private sector prefers the opposite, as lock-in equals more money.
So, they fail twice: preventing something like the Internet from occurring until government did it; trying to turn it back into the walled gardens of the past, albeit with web browsers and more graphics.
I am not refuting this. What I am claiming is that it was not necessary, despite being helpful. After the market made long-distance communication and processing of data cheap and at large scale, it was only a matter of time. Even if we had gone through IP-level walled gardens/subnets, the world would have settled on a non-discriminating Internet.
Think of it this way: what the government created/helped create initially was a local network, and only after thousands of ISPs came together - not because of incentives from the government, but because of demand - do we have the Internet as we know it now.
> Companies like Facebook and Slack are where content and activity is going instead of HTML web sites and IRC.
Facebook, Slack, HTML, IRC != Internet. Question: is someone being restricted from sending/receiving any packet to/from any IP in the world? If no, then it's not a walled garden from the Internet's point of view. The Internet is not being harmed in any way. However, bringing the government into this will most likely make the answer yes.
> So, they fail twice: preventing something like the Internet from occurring until government did it; trying to turn it back into the walled gardens of the past, albeit with web browsers and more graphics.
I am not aware of any IP-level walled garden. Facebook/Slack/Myspace/etc. are/were in the app/website business, not the Internet business.
(Not telephone networks, etc., which transform the data nondeterministically.)
It still hasn't settled that way to this day in private services. They almost all wall off whatever they build. Those that build connections charge out the ass for them, with all kinds of restrictions and schemes. Many get acquired and then crippled.
You need to justify your assumption with evidence from the IT market. The vast majority of it works against your expectation. Further, something like the Internet would require the vast majority working for that expectation.
"Think of it this way: what the government created/helped create initially was a local network, and only after thousands of ISPs came together - not because of incentives from the government, but because of demand - do we have the Internet as we know it now."
It was actually a combo of the military needing survivable, distributed comms and universities needing to collaborate, with groups that were basically selfless and highly cooperative at the time. There were private parties trying to do their own thing with their self-interests even then. It was called OSI and circuit-based lines. One failed entirely; the other isn't what the Internet was built on, and itself diminished over time in favor of faster, packet-switched lines. Even in an ideal environment, the incentives of businesses killed their opportunity, while the incentives of groups not motivated by profit led to the Internet.
"Facebook, Slack, HTML, IRC != Internet."
They make up the vast majority of Internet traffic, along with Netflix and Google. That makes them the Internet experience for most people. A lot of the rest is walled-garden apps on mobile. Sites and services purely building on Internet technology, like IRC networks or FTP servers, are barely used, because private parties rarely invest in them. It's simply too easy to escape lock-in that way. We can't throw out how 99% of people and products use the Internet when discussing Internet regulations or issues.
"I am not aware of any IP-level walled garden."
You should look up ISPs' policies - Comcast's, for example - on web servers or SMTP ports. Stuff exists even at that level to serve the monetary interests of the private market. Most of the walled gardens are built on top of the Internet protocols, with the ecosystem effect meaning you have to work within them to reach the users they captured with first-mover advantage in new markets.
Then companies can balance the cost of adding a secure enclave and a grsecurity kernel (just an example) to their smart coffee maker against having to recall all of their infected products from the market when a botnet takes them over.
Besides that stick, I would throw a couple of carrots in there, too, like the companies being able to brag that their products are A+ security rated, etc, in their promotional materials and on their packages.
In short, I think regulations will prevent more damage than post-hoc legal retribution.
Time has proven that wrong so far. We got a lot of highly secure products after DoD's Computer Security Initiative gave clear guidance plus financial incentive. DO-178B and other safety-critical markets are cranking out lots of them on the safety side. So is the segment of the smartcard industry focused on high security.
Regulation works so long as it has effective standards, they're clear, evaluated against product, and must be followed to sell the product. As in TCSEC era and DO-178B, reusable components for common cases show up to reduce the evaluation cost or risk. Open-source security would likely get a boost, too, as companies sponsoring it would sponsor certifiable versions with the higher QA.
The constraints of DoD CSI, DO-178B, and the smart card industry focus are all embodied in regulations, which precede legal retribution.
If a company busts the regs, it can be sued. But the first line of defense is that companies are required to do it right - by the regulations.
I agree regulation needs clear definition and followup to be effective. I continue to regard regulation as a better mitigator of damage than post-hoc penalties.
Regulation is about preventing a mess. Litigation is about cleaning it up. I'd rather not have the mess to begin with.
That's exactly it. Although I did propose the possibility in this thread of defining regulations that aren't immediately applied but apply in court after harm is alleged. The reason being that evaluation costs and time can be a big problem, especially for startups. This lets them simply follow guidelines, with evidence produced during development, and they only pay the cost if they screw up. The cost goes up with the level of deviation and the harm it caused.
Just because it worked for the DoD doesn't mean it'll work everywhere else.
There could be an interesting market for Apple clocks and Apple thermostats, but to increase the likelihood of something like that becoming popular (vs. just using the easy Wi-Fi route), wouldn't it take an unlikely push like that?
It will be a long time before manufacturers decide whether or not to put everything online, I think.
I say that sarcastically, but it seems to be where the world goes.
It is designed around a local, low-bandwidth (9600 bps) network where you can connect KNX devices (thermostats, actuators, sensors, etc.) or gateways/routers to other networks.
The only problem it has is that the communication is not encrypted, but being local and isolated, that is not a very big concern.
Every other home/building automation system worked like this until recently, when some marketing geniuses came up with the IoT campaign.
Although I agree, what if one of the devices they communicate with locally is connected to the Internet? Then it becomes just another level of indirection...
Too simplistic. If devices talk then there will be a way to listen and scale that talking/listening.
> Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we've been trying to fix for more than a decade.
> A fatal IoT disaster will similarly spur our government into action, and it's unlikely to be well-considered and thoughtful action.
> Our choice isn't between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they're coming. We can't afford to ignore these issues until it's too late.
When we have TLAs subverting security, megacorps sucking up information and leaving it open to attack, manufacturers baking in backdoors, etc... I would run an absolute mile from being a developer in this new world.
A coherently articulated, normative counterfactual security platform would be a better place to argue from.
It's a cousin to the negative liberty arguments: they only list what not to do in order to avoid hurting people, rather than what we can do to help them (positive liberty).
Maybe we could frame the question as "If we let the EFF and Bruce Schneier redesign the United States security apparatus from scratch, what would it look like?"
We already have excellent critiques, and are good at articulating "what's bad," but far too little on "what would a good system look like that strikes the 'right' balance?"
As far as design, they could put a cheap knockoff of an INFOSEC guard in their modems, with CPUs resistant to code injection. Include accelerators for networking functions and/or some DDoS detection (especially low-layer flooding) right at that device.
Who buys these products? Why does a toaster need to be connected to the internet and synced with your "smart"phone? What exactly can you achieve having this feature?
> Consumers may find it totally cool to design images for toast using a smartphone. Meantime, the resulting data would help food companies understand how people approach breakfast, design new products and market to consumers more effectively.
But how did the bread get in the toaster?!
An internet connected fridge could be really useful if it integrates with grocery lists, recipes, expiry dates, automatic deliveries, etc. These fridges are still in their infancy, but we're moving towards a fully automated kitchen. I've seen a lot of fridges that run Android, and people like to make fun of them because "Why would you need Twitter or YouTube on a fridge?". Those are just apps that you can install on anything, and it didn't take any additional effort. It's weird when it becomes part of their marketing, though.
Anyway, this is the future we are heading towards: http://www.moley.com/
I believe the technology is already here, and it's just a matter of lowering the cost and improving the software.
If you think that any significant fraction of IoT devices are located in homes and controlled by phones, you do indeed live in a bubble. (Don't feel bad, so do lots of HN posters).
No thank you very much.
1) DDOS attack is detected
2) Attacking IP addresses are sent automated DDOS abuse notifications
3) The ISP, like your credit card company when its machine-learning algos detect fraud, asks for human back-channel verification, e.g. via SMS.
4) ISP notifies user of suspected bot on one of their devices. The onus is put on the user to run a secure network and remove or fix offending devices.
This system could work well for residential connections at least.
It could be implemented similarly to the way spam is handled by the internet: bad neighborhoods and networks that don't self-police are treated as second-class citizens. A toy sketch of the flow is below.
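Here's that toy version of steps 1-4, just to show the shape of the flow. Everything specific is hypothetical: `suspected_bots`, `notify_subscriber`, the threshold, and the flow-record input stand in for whatever detection and back-channel systems a real ISP would run.

```python
# Hedged sketch of the workflow above: flag sources sending anomalous volumes,
# then push the cleanup burden onto the subscriber. All names are made up.
from collections import Counter
from typing import Callable, Iterable, Tuple

FLOW_THRESHOLD = 10_000  # packets per interval from one source; arbitrary


def suspected_bots(flows: Iterable[Tuple[str, str]]) -> list:
    """flows: (src_ip, dst_ip) pairs from sampled flow records."""
    per_source = Counter(src for src, _dst in flows)
    return [src for src, count in per_source.items() if count > FLOW_THRESHOLD]


def handle_incident(flows, notify_subscriber: Callable[[str, str], None]) -> None:
    for src in suspected_bots(flows):            # step 1: detection
        notify_subscriber(src,                   # steps 2-4: automated notice
            "Suspected bot traffic from your connection; please verify "
            "via the back channel and clean up or remove the device.")
```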
Please no - the way spam is handled on the internet means that anyone who isn't already a massive internet company is usually treated as a second-class citizen.
I think that our routers may be the key, or are at least completely neglected at the moment.
A normal consumer router is essentially a black box, but it should be a watchdog. The router should alert users when suspect outbound traffic is originating from its network.
Of course the router could be compromised, and router patch cycles are atrocious generally, but this method could notify in case of hacked IP cameras and thermostats.
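As a sketch of what that watchdog could look like - `packet_source` here is a hypothetical stand-in for whatever capture hook the router firmware actually exposes, and the threshold is arbitrary:

```python
# Hedged sketch: alert when one LAN device suddenly fans out to an unusual
# number of destinations, a common botnet tell. packet_source is hypothetical.
from collections import defaultdict

FANOUT_LIMIT = 500  # distinct destinations per interval; arbitrary cutoff


def watchdog(packet_source, alert):
    """packet_source yields (device_mac, dst_ip) pairs; alert takes a message."""
    destinations = defaultdict(set)
    for device_mac, dst_ip in packet_source():
        destinations[device_mac].add(dst_ip)
        if len(destinations[device_mac]) == FANOUT_LIMIT:
            alert(f"{device_mac} contacted {FANOUT_LIMIT}+ hosts this interval; "
                  "possibly a compromised camera or thermostat")
```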
This already works well for medicine, aviation, telephone systems, transit signaling, and a huge number of other things. There's no reason to believe this wouldn't work for internet security.
We could start by mandating the network best practices RFC.
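Presumably that means something like BCP 38 (RFC 2827) source-address validation. The core check is tiny; here's a sketch using made-up example prefixes:

```python
# Hedged sketch of BCP 38-style filtering: drop outbound packets whose source
# address is not inside the prefix assigned to the customer network.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # example prefix


def should_forward(src_ip: str) -> bool:
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX


assert should_forward("203.0.113.7")        # legitimate customer source
assert not should_forward("198.51.100.9")   # spoofed source: drop it
```

Real deployments do this in the forwarding plane, not in Python; the point is only that the rule itself is simple, and it would have blunted the spoofed-source component of these attacks.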
Humans make mistakes. Computer systems are fragile. As humans keep developing computer systems, new attack surfaces, new vulnerabilities will be introduced. It's pointless to try and keep playing catch-up either with or without "regulations".
Instead, we should consider _why_ such attacks happen in the first place. Who are the targets? States? Corporations? Maybe they're not open enough. Maybe they're too powerful. Or are we afraid that our governments, corporations, or other entities/institutions of our society will invade our privacy to manipulate us? Or are we scared for our wealth or status?
None of these are technological issues at their core. Our society needs to adapt and grow up to this powerful technology. Until then, the only thing we can do for our safety is refuse to use it.
However, the day the Internet Era of Fun And Games is over is the day that the internet keels over dead. The Internet was (almost literally) built on fun and games.
Maybe some stick (vs carrot) for unwittingly contributing to a DDoS to make people care will work? Not sure. I'm not really liking any of these solutions, but the problem is that it'll get worse.
Maybe a first step is just to have an address range for these demonic things which isn't publicly routable (unless you deliberately NAT it). The "home network" range (steal back some of that loopback space!).
Yes it can. It is fixing the problem as we speak. I, a security-aware person, would never buy a connected thermostat. I'll buy the $5 model that does the job I want perfectly. And my connected DVR sits behind some decent protections on my local network. Should it start participating in a DoS attack or talking to those it shouldn't, I'll notice and replace it with a better system. The same is true of laptops, phones, and cars. Once people are burnt a couple of times they will opt for the safer models. Government could perhaps accelerate this process by increasing manufacturer liability (lol, not for the next 4/8 years) or by mandating that unmaintained products self-brick (again, lol). But the market will react nevertheless.
The market of devices bought by security aware people is not the market that Schneier is concerned about. It's this other much larger market of normal people buying the least expensive devices with the most convenient features.
Those consumers won't notice or care if their devices participate in DoS attacks, but the targets of the attacks care a lot. It's a classic example of a negative externality.
Device insecurity is, in aggregate, very similar to environmental pollution.
It is a latent danger that in some cases can stick around for many decades; it is caused by many people individually adding a tiny bit to the problem; and it exists because right now accepting insecurity makes the product cheaper without directly hurting either the manufacturer or the buyer.
That's roughly how the Department of Homeland Security was born and the TSA was given the critical task of managing airport security where the failures happen. I wonder how Mr. Schneier thought that worked out?
I cannot take seriously any open-ended demand for regulation in the name of safety that doesn't spell out pretty detailed proposals which would work in practice; what such regulations should address and what is off limits. The last time we did this, we sacrificed rights and gained little (if any) additional security; I think Schneier calls it "security theater". I want to know why he thinks his call will result in ANYTHING better than that fiasco before I'm even vaguely convinced that this is the right answer.
Please don't read this as my not believing that there is a real problem and threat. I'm simply dubious of Schneier's answer at this point. "Any action" here is not necessarily the right action.
I'm pretty sure the U.S. government has taken on other security issues since the DHS was founded in 2002.
So let me run the exercise a bit... DHS... Patriot Act?... All that Snowden revealed stuff?... Invasion of Iraq?... Libya?... Syria?... the U.S. border?...
OK... a bit glib, but the point is there: where are the successes where the government protected citizens successfully without a disproportionate reduction of rights of those citizens. Off the top of my head, I can cite about as many as you did. I imagine there are some out there... but the track record isn't so good.
The first can't be solved with politics without locking down the internet and fine-grained control of billions of devices connecting globally - in other words, an unachievable strategy. And the second cannot be solved with technology, unless anyone truly believes they can fight an organization with the law, tens of thousands of programmers working on surveillance 24/7, and near-endless resources on its side. This way none of the problems get solved.
Will all open source software have to be submitted to some government run or approved certification? OpenWRT? Linux? Random App that accesses the net? Raspberry PI python script?
Sure it matters that 1 million IoT cameras are secure and can be updated but what about a million servers running some npm library? Do those need to be regulated too?
I'm very concerned about the security issues but I'm not looking forward to no longer being able to write software because I need it certified every time I add a line.
Are there other solutions that don't require regulation?
Dear god, now I'm pro-gun.
Also, identity fraud will be the weapon of tomorrow...
I miss the old internet... maybe if the speed were so slow that attacks become uninteresting...
Could the servers carrying TCP/IP traffic be adapted in such a way that a mass-traffic-causing incident (control server) leads to a slowing down of the connection at the origin of the causal chain?
No, crypto prevents that.
What a nice little maze.
I don't think law could even keep up with the space it wants to govern. They would be better trying to govern the application space as they have already been doing, such as vehicles, nuclear, etc.
In other articles Schneier has called for solving other problems through regulation - for example, limiting data retention by internet companies. Are laws the answer? Is there no way to solve security-related problems with better technology?
Will a microkernel OS solve embedded device security, or will it be regulation?
I suspect what he's getting at is having some set of "safety standards" to be put in place, especially for IoT tech. The whole quote of
>Our computers are secure for a bunch of reasons. The engineers at Google, Apple, Microsoft spent a lot of time on this. But that doesn’t happen for these cheaper devices. … These devices are a lower price margin, they’re offshore, there’s no teams. And a lot of them cannot be patched.
sums up that point pretty well.
I honestly don't know how you would really enforce something like this, but the proposal seems to be less about the specifics of regulation and more about the need for new regulation. The FCC really isn't equipped to handle this kind of thing.
Probably like with automotive code - they don't have code audits but have mandated test scenarios. The problem is that you need large organizations to work these out.
I guess Schneier has a point: if consumers do not demand secure systems, then it can only be done through regulation.
That would be a very German approach - they have strong consumer advocacy organisations like Stiftung Warentest and the ADAC, which then push for more consumer-protecting regulation.
Once upon a time such requirements were also used as protectionist barriers. These days that would mean: you did not bother to update your toolchain and have no firmware updates for this smart light bulb of yours? Gone is your import license.
"The government should be part of the solution?" Hahah, wtf.
The gov't regularly "hacks" all the things (including the Internet of Things) legally - so I literally couldn't give a shit if some random fucktard script kiddie decides to hack my router.
The government literally does it on the regular, legally.
That's the actual problem. Get a mitt, bro.
Now we have two problems.