I hate to be running to the government for this, but... the FCC has regulations that require some level of testing for devices that are going to use certain parts of the spectrum. Some parts have been declared "free zones" and I believe that's where the wifi systems tend to operate.
Perhaps we need the FCC to step in there and mandate some basic security certification for connected devices. At the very least the certifications should check that:
* The device always uses some form of encrypted communications wherever possible, depending on what it's communicating with
* If the device implements some form of remote control, that remote control includes sufficiently secure authentication and authorisation mechanisms so that breaking in takes at least some effort
* The device does not leak personal data to unauthorised or unauthenticated requesters, and provides a clear list of what data will be willingly communicated to whom (software should do that too...)
And so on...
The certification process could/should be implemented by high-reputation security companies like Kaspersky or Matasano...
Until/unless some barrier to entry is erected, I think it's inevitable that everything that can be connected will be, and that this will mostly be insecure. In the meantime, I guess the solution is to only buy potentially connected devices from premium technology companies (e.g. Apple, Microsoft, Google, Tesla, Nest, etc) - but then, of course, those companies don't sell everything (e.g. no microwaves, car washes, etc) so that would limit the range of things you can safely purchase, for now...
On that note, given how corrupt and broken the US system is, perhaps this needs to start in Europe, where there is already a general mindset of consumer and personal data protection...
Frankly, that sounds like a good way to kill open source IoT projects (which can't afford to hire Matasano) while helping TLAs by centralizing the information on just a few companies.
I'd much rather have strong penalties for companies selling unreasonably insecure devices, with reimbursements to customers and rewards for those who report security flaws.
Speaking as one who stands to benefit from such a rule, I also think that requiring 3rd party validation is a bad idea. First off, it's always a race to the bottom, and secondly, there are not enough qualified people in the world to look at everything.
I would, however, like to see a general rule requiring software and hardware makers to take "reasonable steps" to secure their products, and opening them up to liability if they do not. A few class-action lawsuits would go a long way towards encouraging everyone to put in a secure SDLC.
If IoT becomes a security or safety nightmare, which I think most of us will concede as possible if not likely, there will be a public outcry that will result in either gov't oversight and regulation or the industry being sued out of existence. So, assuming for the moment that the IoT industry does not want to be suffocated by lawsuits, the real question for it is by whose hand regulations emerge and are enforced; the industry itself or the gov't?
Industry-based regs, e.g. UL.com, will be the least burdensome. But almost always an industry cannot self-regulate, because of free riders and the like, or because of a short-term focus on profit maximization. So, in steps gov't regulation. And gov't regulation is very often overkill, like using a bazooka to kill a fly. Said bazooka does result in a dead bug, but also a lot of collateral damage to the industry being regulated. Think of the FAA and how its Part 23 regulations have both guaranteed safe aircraft and stagnated the general aviation industry nearly to death.
With articles such as this one and Gawker's "Why is My Smarthome So Fucking Stupid", it's pretty obvious that the IoT industry as a whole should be embracing and spreading industry-wide security, safety, and UX standards. Yet, for now there seems to be no such industry initiative to do so, leaving the task to Apple and, to a much lesser degree, Google. With HomeKit, Apple is forcing partners to adhere to tight security and usability standards. With its large user-base of ApplePay-enabled, willing consumers, Apple can force its will upon IoT partners going through the HomeKit acceptance process. But as if on cue, some partners, exhibiting short-sightedness, have whined to the press that Apple's process is onerous. While I'm sure Apple is more than happy to let IoT manufacturers not affiliated with HomeKit IED themselves through lax security or UX, for the industry it's a big mistake.
The only reason we have wireless innovation outside of the military is that we have these "free zones". The problems we have today aren't radio problems -- which is what the FCC is there for. It's application layer issues that exist whether you are wired or unwired, private or public networks.
We're in an early adopter phase, so products are immature. You shouldn't be allowing IoT devices in high security environments, or incorporating devices into structures that cannot be retrofitted in 5 years until we move a little further up the lifecycle.
If you want high assurance controllers for light fixtures, motors, and other IoT use cases, you need to talk to Johnson Controls, Honeywell, and similar companies and pay for the privilege.
This will result in only huge companies that can afford a bunch of paperwork and liability insurance being able to sell the same old insecure things while no one else will be able to afford to challenge them with actual secure things.
As usual it will become more about permission than proficiency so we'll predictably end up with corruption instead of competence.
I agree with the sentiment, but do you really trust that the government can actually audit some giant codebase? The internet of things really includes your computer and your PS4 and every piece of software on them. It includes your router and your printer and your IP cam that's basically the same thing as your router with a camera attached.
I don't know what the solution is but I really can't imagine a government body able to audit all that code in any meaningful way.
I think rather (and please punch holes in this idea) ... maybe fines if something isn't secure? I can't see how that would work either though. Not even the big guys have secure software as new issues are found all the time.
Basically it seems like you need to shun companies that get caught which will hopefully send a message. Also possibly take precautions. Put your internet of things devices on their own networks etc., don't let them on the net directly, ...?
I would trust the government to do the things it is good at: regulation, enforcement and penalties. They can contract away the code auditing, security reviews, penetration testing, etc.
The government is needed because the free market provides no real way to hit back at companies that harm customers with defective products. In an ideal world, customers would boycott companies that misbehaved, but in reality this never happens. Victims simply do not stop doing business with companies that victimize them. Last time I checked, my local Target and Home Depot were chock-full of customers, despite their demonstrated inability to handle their customers' data securely.
EDIT:
And it would be the FTC, not the FCC, who would apply such regulation. Indeed, they already list Privacy and Security as within their power to regulate: [1]. In fact, their site even has a section dedicated to the Internet of Things: [2]. The problem doesn't seem to be that they are uninvolved; they simply seem to have no teeth.
> I agree with the sentiment but do you really trust the government can actually audit some giant codebase?
He didn't say that the government would do the code auditing (unless I missed it). It would be outside firms. I believe they do something similar for certifying hardware. The FCC certifies outside hardware testing labs as being qualified to certify that hardware satisfies FCC requirements. The hardware makers wishing to gain certification hire one of those labs.
The government wouldn't be doing the work. We just need the legal framework in place. The industry would do it themselves, but they don't now, since there are no real consequences. There should be. We have plenty of rules around how medical records are treated. There need to be similar rules for all this stuff.
Making companies liable for any hacking would be one way to go, which may be quite effective - provided the politicians actually listen to the People on this, and not the army of lobbyists that will be set upon Washington against such a proposal.
> I don't know what the solution is but I really can't imagine a government body able to audit all that code in any meaningful way.
The NSA doesn't seem to have any trouble hiring top-notch reverse engineers. There's no reason to believe that the same approach couldn't work to benefit the country if the combination of mission & budget for competitive salaries were applied to defense instead.
That said, the first thing I'd start with would be much simpler: mandatory support where device manufacturers are required to issue security & reliability updates for 10 years[1] or release all of the source code, tools and signing keys into the public domain so there's at least the possibility of user support.
1. Most people expect a car or major appliance to last at least that long without becoming unsafe.
Security certification doesn't work and can't guarantee products to be free from security holes.
What we need is an obligation for the manufacturers to provide an automatic update mechanism and updates fixing security critical bugs for several years.
Sometimes "upgrades" break things for users, and they are reluctant to apply them. Users are trained not to apply upgrades. Ideally you'd need a law requiring manufacturers not to do that.
Oh, the updates should definitely be automatic (like Chrome). For one, it would ensure the companies are a lot more careful with what they're sending for update so as to not break millions of devices at once, and second, as you said, it removes the hassle of users having to deal with tens of different devices.
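As a toy illustration of the integrity-check side of such an automatic updater (hypothetical function and variable names; real firmware updaters verify an asymmetric signature, e.g. Ed25519, over a manifest, not just a bare hash):

```python
import hashlib

def verify_update(blob: bytes, expected_sha256: str) -> bool:
    """Check a downloaded update image against a hash taken from a
    trusted, signed manifest before applying it. This sketch shows
    only the final integrity check, not the signature verification
    of the manifest itself."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

# Hypothetical update payload and its manifest entry.
update = b"firmware v1.2 payload"
manifest_hash = hashlib.sha256(update).hexdigest()

print(verify_update(update, manifest_hash))                # True
print(verify_update(update + b"tampered", manifest_hash))  # False
```

The point of the check is that a device never applies an image that doesn't match what the vendor signed, so a compromised mirror or man-in-the-middle can't push arbitrary firmware.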
This is not a good idea. Now every time I want to sell a minor piece of connected hardware, not only will I need a security professional on my team, but I'll need to pay someone (government approved) to audit my work. I want nothing to do with such laws.
To quote Dan Geer, "Yes, please! That was exactly the idea."
There are many industries that require this kind of auditing or similar regulation. I would think that the technology-based industries are clever enough to find a way to accomplish these checks quickly and cheaply.
I'm hardly a massive proponent of this kind of thing, and I've complained about CE/WEEE/RoHS in the past; but I think the general idea of some minimum product standard is a good one (with a clear "hobbyist" exemption).
If you don't have a security professional to hand, how do you or I know your IoT thingy isn't going to turn into a malware vector?
But I agree that governments don't have a great track record on setting the correct standards. Cameron was only recently talking about a ban on non-backdoored communication.
(Note that CE supports self-certification, it just has very unclear and lengthy rules)
The government is something of a blunt instrument. The difficulty in the proposal as described is how you can precisely define the terms such that you can reliably measure 'compliance' -- cue: lots of legalese and paperwork (cf medical device regs, where you really do want stringent controls and considered processes). Sanctions for data breaches and poor security might work, but then they simply become costs of doing business and don't necessarily lead to improvements (just passing the buck).
In my view (see my other comment [1]), we need to reconsider how we build, deploy and manage software for this 'connected-age'. If developers are not willing to try to solve this with better tools/infrastructure, then no amount of legislation is going to fix it. If anything, the poorly-secured incumbents will simply misappropriate existing laws to go after those who uncover faults.
(NB: I'm not suggesting that government doesn't have a role to play, but solely relying on them is doomed to fail -- eg who d'you think would be advising them on such legislation? Not the people you'd likely want.)
I thoroughly agree that this is something we developers have to fix instead of adding yet more hoops to jump through. Like you I am also working on the problem of tools/infrastructure to solve this problem though probably a different aspect of the problem, with my startup resin.io
I continue to fail to see how connecting appliances or small electronics to a network adds actual value. Simply throwing technology at a thing doesn't automatically make it better.
Yet, here we are, rushing headlong into the "IoT". We ought to recognize this for what it is: pursuit of profit from uninformed purchasers.
Ignore the junk that doesn't add value. You're right -- it's noise, and there's a lot of it right now.
Identify the specific long-term opportunities where the added technology can actually give users a superpower: a valuable ability they didn't have before. That's what we've tried to do with Pantelligent. (Disclaimer: co-founder / https://www.pantelligent.com/ ) For us, it's the ability for home chefs of any skill level [democratization] to cook any frying pan-based meals [versatility] perfectly [quality] every time [repeatability], even when you're multitasking in the kitchen [convenience]. If that isn't adding user value by adding a bit of connected intelligence to an everyday home appliance, then I don't know what is!
It adds value, just not for the consumer or user. Note that I didn't say owner, as this internet of things is actually the internet of someone else's things, typically the manufacturer's. Think Apple devices combined with giving away ownership and control of your data to a third-party provider; the current obvious example is Facebook.
All this seems to be the logical next step in the evolution of corporations trying to exploit the internet for their own needs and turn it into a massive surveillance tool. Something that started quite a while ago, on the day advertising invited itself to the party.
I can see some limited benefits, like applying inventory tracking tech to refrigerators. But for the most part you need to be there to refill the device etc.
I work in the foodservice equipment sector at the moment and there's a huge demand from the manufacturers to add upstream monitoring to their devices.
This is mostly motivated by the HACCP (Hazard Analysis and Critical Control Points) guidelines that the US FDA has set in place. Someday that designation will change from "suggested" to "mandatory".
The equipment requirements are pretty large. Restaurants need devices to track refrigeration temperatures, cooking/holding/production temperatures and times, equipment status, check probes, and lots more. Eventually most of the devices in a kitchen will be IoT connected in some form or another. Many of the larger chains will also want this data pushed upstream to their servers for overall safety monitoring and, eventually, other "big data" benefits from seeing store production in near real-time.
The push is that all devices will end up connected as commodity manufacturers continue to search for 'value-add' services (even if that value is dubious). In a few years, I wouldn't be surprised if 'smart TVs' were the only ones available. Security also becomes an afterthought as companies rush to get products in the market. This is mainly because the components used to build software rarely take account of security/privacy themselves so it has to be considered by the developers -- who are rarely trained to handle it.
One approach to this is to build new tools and components that incorporate security & privacy by design. Discarding elements that are not required for a particular use case is also beneficial as there's less a hacker can do if they do manage to get in.
These approaches are captured in the ideas behind unikernels, such as MirageOS [1], which themselves can be part of a larger stack [2]. I work on both of these and we even put together a contest (Bitcoin Piñata) to incentivize a search for weak spots (and a bit of fun) [3]. I honestly think that only new software stacks or government regulations can fix these issues. Given mass-surveillance, I don't hold out much hope for the latter.
I'm evangelising the term "value-subtracted" for things which make money for the vendor at the expense of the user. Lenovo is just the latest, biggest example of this.
- "In a few years, I wouldn't be surprised if 'smart TVs' were the only ones available. "
To your point: when I purchased a new TV last year, the only available "non-Smart" models were of generally inferior quality to the Smart models, from picture quality to physical design. I ended up purchasing one simply because it was the best TV at its price point—I had no interest in the "Smart" features.
My TV isn't even "smart" but it has a USB port used for doing system updates (there's been exactly one in years.) I did some very basic investigation on the firmware update and could see it had busybox. It got me thinking. I'd settle for a TV where the firmware could be replaced.
Looking forward to the day I can start calling televisions "telescreens" as they "anonymously" record their environment (unless, of course, the NSA gives Samsung a blanket warrant).
Not only would I prefer not to have everything connected to the internet, I believe it's imperative.
In my first IT class at high school, my teacher told me something which has stuck with me as a golden rule of computer security:
If you want to make a computer 100% secure, you should unplug all the cables, drop it in a vat of cement and then drop the entire block into the Mariana trench.
He's right, but that's some next-level security. However, a more usable tenet of security is that devices should be smart enough to do their job, and no smarter. Connecting your fire alarms to the internet will, at some point, result in someone setting all your alarms off at 2am just to fuck with you.
The flip side is that if you're at home, you'll hear the alarm before you hear your phone, and if you're not at home, then as long as it calls the fire brigade ASAP, what difference does it make if you get an alert on your phone?
"Connecting your fire alarms to the internet will, at some point, result in someone setting all your alarms off at 2am just to fuck with you."
I disagree. The "at some point" argument can be applied to anything. Having a smartphone connected to the internet will, at some point, result in someone hacking into your phone and calling 911 using your phone number, ten times a day. At some point, it's going to happen to someone.
By your argument, we should ban smartphones because they are not 100% secure. Phones should just be phones, and web browsers should be separate devices. Phones should not be internet accessible because then they can be hacked into.
Let's not make personal attacks here. Focus on the issue, not the person. Whether or not I am illiterate is not what we're debating here, and quite frankly, is none of your concern for the sake of this thread.
Back before smartphones became popular, everyone I knew used to make the same argument for phones. "Do phones gain tangible benefits from being smart?" "Sure, you can check email on your phone, but I can do that at home--I don't want to check email on my phone" "I'd rather have a small flip phone than a huge PDA--too bulky to fit in my pocket."
If Steve Jobs had held the same attitude, smartphones wouldn't be the industry they are today. It would still just be an idea, dismissed as "a toy" and not useful.
Have you used a kettle that is smart? I assume you must have, given the conviction with which you dismiss IoT as not useful. What pros / cons have you experienced from using a smart kettle that led you to dismiss it as not useful? Was the smart kettle you were using designed well? How could it have been improved?
I was calling you out on the world's most obvious straw man. Would you rather I just call you a liar, instead?
>Have you used a kettle that is smart? I assume you must have, given the conviction with which you dismiss IoT as not useful.
It's not a sensible funding allocation decision to attempt to make everything smart on the off chance that it might make it better. Smart pegs? Smart carrots?
Generally, in the real world, we theorycraft before we invest using that miracle of nature, our ability to model and predict the future in our heads.
Provide for me a tangible way that a kettle might be improved by being made smart, because my theorycrafting is coming up blank.
I'm willing to be shown wrong, but I will compare any benefits your vision bestows against any downsides it may introduce.
> I was calling you out on the world's most obvious straw man. Would you rather I just call you a liar, instead?
I'd rather you not call me anything. As I said, it doesn't matter who I am. That's not the issue I'm interested in.
Likewise, I have not called you any names--I care only about the arguments you make.
> in the real world, we theorycraft before we invest using that miracle of nature, our ability to model and predict the future in our heads.
That is my point. If we only invest in things we can predict will succeed, and avoid potential failures, a device like the smartphone would not have existed, because I was there when companies like Apple tried to launch PDAs like the Newton, and the world did not care. Based on prediction, smartphones would not gain traction in the consumer market. And yet, the iPhone made that happen.
Same for iPods--"Hundreds of dollars for a music player? Everyone would just get a $30 Sony". It was only with a lot of conviction, pushing a "theoretically blank" idea, that the smartphone industry became what it is today.
If we only invest in things that seem "not dumb" in "theorycraft", a lot of the successful startups today would not exist. Success requires experimenting on things that may not seem obvious at first.
> Amaze me.
1. I don't need to amaze you. You as an individual are not that important.
This is especially true if you are the type of person who sits and waits to be amazed. The most important people in the world go out and amaze others, as opposed to waiting to be amazed by others
2. Ideas are not dumb until they are proven bad. They are just ideas until proven wrong. You don't get to dismiss an idea with near-certainty as "dumb" until you've actually tried it out yourself and can prove that it doesn't work.
> I'm willing to be shown wrong
That is not true. Someone who is willing to be shown wrong, would encourage others to try new ideas, even ones that are not apparent successes "in theory", or "dumb"
From the conversation so far, it sounds like you are not supportive of investing in ideas that are not proven in theorycraft. Based on the conversation alone, and the trust you put in theorycraft, it does not look like you want to be proven wrong.
That assumes a two-way connection, and/or a persistent connection. Various existing fire alarms etc. will only open a connection to the service provider when needed. This is usually because they have a legacy from the POTS days.
That is perhaps the biggest issue with IoT: assuming an always-present and bidirectional connection.
Your electricity usage patterns are enough to determine your schedule, especially if you have electric water heating. Browse through some systems at PVOutput.org until you find one that publishes power usage data; you can usually see when the occupants wake up, when they do their washing on the weekends, when they turn on the TV in the evening, and when they go to sleep.
With higher precision usage readings, you can determine what they're watching on their TV [0] [1].
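As a toy illustration of how coarse this inference can be (entirely made-up numbers, not data from PVOutput.org), even hourly readings and a crude baseline threshold are enough to separate active hours from idle ones:

```python
# Hypothetical hourly whole-house power readings (watts) for one day,
# index 0 = midnight: overnight baseline, a morning spike around 6-7am,
# and an evening peak for cooking and TV.
readings = [120, 110, 115, 118, 112, 600, 1500, 900,
            400, 350, 300, 320, 310, 330, 340, 360,
            500, 800, 1200, 1100, 900, 700, 300, 130]

# Rough idle-load estimate: the 25th-percentile reading.
baseline = sorted(readings)[len(readings) // 4]

# Hours well above the idle load suggest the occupants are up and active.
active_hours = [hour for hour, watts in enumerate(readings)
                if watts > 2 * baseline]
print(active_hours)  # [6, 7, 17, 18, 19, 20, 21]
```

Even this crude threshold recovers a plausible wake-up time and evening routine, which is the point: no sophisticated analysis is needed to leak a household's schedule.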
I agree that the internet-of-things is another concerning layer of risk for privacy and security, though.
I wonder if this might be the push that functional programming + formal verification needs to hit the mainstream.
Compare Erlang, for example, which must have seemed needlessly complex and theoretical outside of modern super-horizontal-scale computing.
I understand that NASA, the #1 in "if this code breaks we all lose our jobs" driven development, are big into formal methods. I think applying that same rigor to smart microwaves wouldn't be such a bad thing.
The formal verification subject is tricky. For a lot of software (especially in the web startup world) it is often not possible to hire someone trained in formal methods to perform extensive checks/proofs on software which undergoes rapid change as the company pivots every couple of months.
Functional programming alone doesn't give you any guarantees of safer or more correct software than any object-oriented language unless you ruthlessly exploit its type system. Even if you do, the specs have to exist upfront, and they have to be correct and stable.
One can make the case that isolating side effects and encapsulating them in a controlled structure is "the right thing" to do, but that alone does not give you any formal verification of your program.
From personal experience I can only say that whenever I brought up formal verification because of security/correctness concerns, I was immediately shut down by business, because it's simply too expensive for software that doesn't control life-critical systems.
I think with security (and esp. privacy), the problem is more coming up with the specs in the first place. If you can do that in a reliable way, designing appropriate static analyses is probably doable.
I was hoping to read more about the crappy quality of most "things" in the Internet of Things (The Nest thermostat is an exception to the rule). To cut costs, the sensors involved are usually very simple, "dumb", often built poorly with low quality components and not integrated very well. It's up to the software, which often isn't written that well either, to compensate. Just getting the system to work is difficult enough with the budget, time, and resources available. Never mind securing it. So, welcome to the Internet of Things that don't work half the time and could probably hurt or kill by accident.
IoT devices should not be connected directly to the Internet. I don't want my "smart" lightbulbs to be turned on or off through the Internet. I also don't want them to become yet another way for the NSA to spy on us.
All things that are connected to the Internet can be hacked, let alone things that come with poor security from manufacturers that never intend to update them. In fact, the platform makers for IoT (or governments, if you will) should require manufacturers to patch security vulnerabilities for 80 percent of users until end of life. For example, if 80 percent of customers keep the smart lightbulb for 5 years, then that's how long it should be updated.
So far Google and ARM's Thread protocol for mesh networking between IoT devices looks interesting and seems focused on security. The devices connect only through a "gateway" through the Internet (which can be your smartphone). That feels like the right approach to me.
"IoT devices should not be connected directly to the Internet. I don't want my "smart" lightbulbs to be turned on or off through the Internet. I also don't want them to become yet another way for the NSA to spy on us."
Seriously?
By the same argument, phones should "not" be connected. It will just make it easier for the NSA to hack your phone through your data/internet connection, read all your call logs and maybe even take over your phone and start making random calls on your behalf.
Phones should just be phones. Why in the world would we want to make phones "smart"? Phones should not be connected to the Internet. What a dumb idea it is to make a phone that's connected to the Internet--that is going to be a HUGE security disaster. The world is going to explode because everyone's phones will be hacked
Even if something is not directly connected to, or reachable from, the net, it can still be an issue.
Consider something like a network printer.
Convenient as heck, but if your PC gets compromised only for a short while, the attacker may have left a little surprise in the printer firmware. The end result is that even after you've fully scrubbed the PC, the attacker returns, because the printer is acting as a proxy.
More and more it feels like a no win situation, unless you physically unplug the router between each time you need to do something online.
It feels like this because most people are just not willing to spend the time, effort and money to be secure. The only thing you (as an average consumer) can't be secure against even if you tried really, really hard (why bother) is a well-funded government organization (from any country). Security threats from everywhere else are more or less manageable if you really want.
One problem with a mesh is that it might be hard to avoid or shut down.
I will never, ever, connect a TV to the internet. At least not at current, very low quality levels. There's no reason an Apple iTV would have to suck as much as present smart-ish TVs.
I'd be pretty angry if a junky TV got owned and started spamming all viewers because it connected to the internet by talking to my kids' video game console (which is connected), or my Roku, or even worse, my cable TV set-top box.
Presumably the "things" are networked via wifi? In that case I just won't enter my wifi creds, and they'll remain off the network. Possibly some devices might be more valuable when networked with each other locally, and the WAP they use just won't get connected upstream.
Of course the things will still be vulnerable if they just connect automatically to any visible rogue WAP, in which case maybe one could glue some lead sheets around the antenna. The only government reaction to this phenomenon I would welcome would be a requirement for device vendors to clearly label devices that automatically connect to any visible WAP, or will only function when connected to the public internet.
Demanding that devices like this be "secure" is silly. Only devices whose firmware is regularly and securely updated, with the update process regularly observed by human beings, can even hope to be effectively secure for any period. We can probably expect that from POS devices in corporate use. We probably can't expect that from a refrigerator in some random family kitchen.
Some HP printers, like the P1102w, have wifi cards, and when not associated with an access point, they will broadcast their own open network. There is no way to disable this except to open the printer up and remove the wifi card.
I think some Roku models will also broadcast a wifi access point for the remote control to connect to.
Our new Canon all-in-one will optionally run an AP, but it's off by default. I've seen roku APs before but I guess I just assumed they could be turned off or secured.
I know these aren't all web-based hacks, but I'm guessing the majority of connected devices are using http. Simply switching to https everywhere would remove a huge amount of attack surface for almost no cost.
This is partly due to the big red warnings you get with self-signed certificates. Yes, it would certainly help, but to the user seeing a crossed-out https is worse than a simple http. And the user's perception matters much more than security does to these people.
@tootie didn't say anything about the certificates being self-signed. Later this year, the new Let's Encrypt CA will make it free and easy to get certificates.[1]
Moreover, it's my understanding that the default with HTTP/2 is for connections to be secure.
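For the requesting side at least, doing it right is mostly a matter of not opting out. Python's standard `ssl` module, for instance, verifies the certificate chain and hostname by default, and a device client has to go out of its way to disable that (a sketch of the defaults, not tied to any particular device firmware):

```python
import ssl

# The secure default: certificate chain and hostname are both checked.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# What too many embedded HTTP clients effectively ship with:
# all verification switched off, making MITM trivial.
insecure = ssl._create_unverified_context()
print(insecure.verify_mode == ssl.CERT_NONE)  # True
print(insecure.check_hostname)                # False
```

The risk is less that TLS is hard to use and more that a vendor, faced with a certificate error during development, disables verification and ships it that way.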
Only if the appliance is serving requests, not if it's requesting. For a piece of hardware like a carwash that is running servers, the manufacturer should be maintaining that software routinely anyway.
Why not? If they're going to go to all the trouble to "Internet enable" a refrigerator, surely they can include yearly certificate changes as part of their maintenance plan.
They really should, but I can't even get a cert update on my little home router; do you think a fridge maker is going to do that? Probably, once it's out the door, they will pull the one programmer who wrote the app and put him on the next fridge or the oven. It's very difficult to see an appliance manufacturer going back to update.
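For what it's worth, the client side of this is largely a solved problem: a device making outbound HTTPS requests just has to not disable verification. A minimal sketch in Python (not from any vendor's actual firmware) showing that the stdlib's default TLS context already enforces the right checks:

```python
import ssl

# A device acting as an HTTPS *client* should verify the server's
# certificate chain and hostname. Python's default context does both
# out of the box; insecurity usually comes from code that turns it off.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must validate: True
print(ctx.check_hostname)                    # name must match cert: True
```

The expiring-certificate problem the parent describes mostly bites on the *server* side (the appliance's own embedded web UI), which is exactly where an unmaintained device has no way to renew anything.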
We're currently working on a consumer device that could well be classified in the IoT category.
We're pushing hard to put in multiple good layers of security, even though the hackable potential of the device is low. The amount of personal or otherwise exploitable information is also low. But that is no excuse to leak anything, or to allow the device to be taken over by attackers.
The path isn't easy... the library support on many of these embedded platforms is poor. But it must be done.
There are two things that can break IoT: security and fracturing. But security is a necessary condition for IoT to succeed.
I know Apple has surprised many of the companies that want to work with HomeKit with its security requirements. I heard from one company that, for example, was upset that locks cannot be remotely activated. The last thing anyone needs is their house getting hacked and robbed as well.
Remotely activated locks? What's the use case for this? Call your girlfriend when you're locked outside your house and ask her to open the door with her cellphone?
There seems to be a high risk for little benefit, or perhaps I don't have a lot of imagination.
I would imagine it was more of the opposite scenario. Rather than calling your girlfriend when you forget to lock your house, you might want to just lock it remotely.
That goes back to what someone else questioned: Should we not address the problem more directly with devices that take action themselves? All we've done here is move the interface off the physical object. Not much in the way of actual smarts. Requiring an owner to take action on a smartphone is a transitional phase.
Ironically, fracturing has superficially improved security for the moment. Devices with the largest user base get a disproportionate amount of the attacks.
I agree that not every device needs to be connected, and there are security implications for those devices that could benefit from being connected. IoT will continue in a big way, and just like the early days of the Internet, security solutions will develop. It would have been nice if Kaspersky had offered something in the way of potential solutions in this post.