
> how precisely could attackers disable the light in any plausible ways?

The light is software controlled, as this very change clearly demonstrates. The fact that Google can turn the light on and off at a whim suggests an attacker could too. Your argument that this isn't possible rests on a vague theory of infallible security.

You even seem to use a fictional quote to that end:

> What other methods besides “issue Google-signed firmware to the device” are you implying exist

Who are you quoting? What are you quoting? And where can I read up on Google's infallible firmware signing strategy that you're implying has the ability to block exploits and bugs?

Despite claims to the contrary, you're less "requesting an explanation" and more making several unsupported security claims, then asking people to refute those claims as if they were established fact.

If this device were as secure as you imply, that would be quite unique among smart home devices, and among software in general for that matter.




Nest has historically used firmware updates to do this kind of thing, and I see no reason why Google would depart from that practice for something they're doing to gain trust. They could be so absurdly stupid as to leave that lamp switched off via server-side software control as a permanent solution, but I think they have more wits about them than that. As you point out, it's a terrible idea, and it'd wreck them in the press if that were their final answer.


Please reread Someone1234’s comment.

In my words: Nest devices have had security issues in the past [1]. It's very likely that they still have issues today (like most software). Once a hacker gains enough control of the device, she can turn off the LED, because it's software controlled [2] (see the sketch below).

[1] https://gizmodo.com/this-nest-security-flaw-is-remarkably-du...

[2] https://www.grahamcluley.com/webcam-spying-without-turning-l...
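
To make the "software controlled" point concrete, here's a toy sketch (Python, with a hypothetical GPIO pin and sysfs-style paths; nothing to do with Nest's actual code) of how an indicator LED wired to a GPIO line gets switched off by whoever can run privileged code on the device:

    # Hypothetical illustration only: if the status LED hangs off a GPIO line,
    # any code running with enough privilege on the device can turn it off.
    # The pin number and sysfs-style interface are made up for the example,
    # and assume the GPIO has already been exported by the kernel.
    LED_GPIO = 17  # hypothetical pin wired to the status LED

    def set_status_led(on: bool) -> None:
        base = f"/sys/class/gpio/gpio{LED_GPIO}"
        with open(f"{base}/direction", "w") as f:
            f.write("out")
        with open(f"{base}/value", "w") as f:
            f.write("1" if on else "0")

    set_status_led(False)  # lights out: no hardware interlock prevents this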


That is not an answer to the question you were asked.

"And where can I read up on google's infallible firmware signing strategy that you're implying has the ability to block exploits and bugs?"

I have no clue where on the scale of "state actor" -> "bribed/coerced insider" -> "disgruntled ex-Google employee" -> "Elbonian script kiddie" you need to be to "control that lamp", but I'm 100% certain that the reality is that the capability to revert this new lamp behaviour is not exclusively in the hands of authorised Google employees...

It's exactly the same as the current encryption bullshit. If there's a backdoor for law enforcement, there's a backdoor. If there's a remote way for Google to update the Nest software - there's an exploitable remote way to update the software.


>If there's a remote way for Google to update the Nest software - there's an exploitable remote way to update the software.

This is (quite obviously?) wrong. One doesn't imply the other in any way, shape, or form. If this were true, there wouldn't be a single device on the planet, including phones, servers, etc. with remote update functionality that would be secure, ever.
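
The standard reason a remote update path doesn't automatically mean a remote exploit is signed updates: the device carries only the vendor's public key and refuses to flash anything that doesn't verify against it, so an attacker needs the private key or a bug in the verifier. A toy sketch of that check (Python with the "cryptography" package; this is the general technique, not Nest's actual implementation):

    # Sketch of vendor-signed firmware verification (generic technique, not Nest's code).
    # In a real device only the vendor holds the private key; the device carries
    # just the public half, baked in at manufacture.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_private = Ed25519PrivateKey.generate()   # lives on the vendor's signing servers
    device_pubkey = vendor_private.public_key()     # the only key the device ever sees

    def device_accepts(image: bytes, signature: bytes) -> bool:
        try:
            device_pubkey.verify(signature, image)  # reject anything not vendor-signed
            return True
        except InvalidSignature:
            return False

    firmware = b"official update that re-enables the LED"
    good_sig = vendor_private.sign(firmware)
    print(device_accepts(firmware, good_sig))              # True
    print(device_accepts(b"attacker's image", good_sig))   # False: signature doesn't match

The remaining weak points are key management and bugs in the verifier itself, which is where the objections above actually land.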

Also

>If there's a backdoor for law enforcement, there's a backdoor.

duh? But I think what you are trying to imply is that "there is a backdoor that criminals could use" - which is also wrong. As long as the implementation is correct and the key isn't leaked, this is completely secure. Of course it'd be insane to have one key being able to unlock essentially all communication within a huge system, because chances are it might be leaked eventually, but that has absolutely zero impact on the fundamental possibility of making this system secure.
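
For what it's worth, the "backdoor for law enforcement" design usually means key escrow: each per-message key is also wrapped for a single escrow key. A toy sketch (Python, "cryptography" package; hypothetical, not any vendor's real scheme) that also shows why one master key concentrates all the risk:

    # Toy key-escrow sketch (hypothetical; not any real product's scheme).
    # The per-message key is wrapped for the recipient AND for one escrow key,
    # so the whole system is only as safe as that single escrow private key.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def wrap(key: bytes, pub) -> bytes:
        return pub.encrypt(key, padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                                             algorithm=hashes.SHA256(), label=None))

    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # the one master key

    msg_key = Fernet.generate_key()
    ciphertext = Fernet(msg_key).encrypt(b"hello")
    for_recipient = wrap(msg_key, recipient.public_key())
    for_escrow = wrap(msg_key, escrow.public_key())  # leak this private key and everything opens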


> If this were true, there wouldn't be a single device on the planet, including phones, servers, etc. with remote update functionality that would be secure, ever.

You're so, so close to waking up from the matrix. Very excited to have you join us in the real world soon! Follow the white rabbit.



