(Or it could be that everyone working at Google has been carefully chosen to not have such concerns; I do get that feeling sometimes too.)
The engineering team refused to take the less expensive route, and insisted that the mute button physically disconnect the circuit, so that no future engineering team could decide to stealth "unmute" the microphone through software.
To this day, you can disassemble an Amazon Echo device and you will find a physical disconnect of the mic circuitry when you push the mute button. Don't want an "always listening" smart speaker? Just keep it muted, and a red LED circle informs you that the mic is physically disconnected.
I'm proud of the approach that Amazon takes to privacy. Privacy of customer data is considered the most important thing to Amazon, and this customer obsession (the #1 leadership principle) permeates the organization.
Disclaimer: I'm a principal engineer at Amazon.
Update to clarify reasons for this characterization: Parent used the words "refused" and "insisted," which strongly suggest conflict between the pro-privacy engineers and others at Amazon involved in the project. And "so that no future engineering team could decide to stealth 'unmute'" suggests a lack of trust in long-term company management. Nothing in this story supports the later statement that "Privacy of customer data is considered the most important thing to Amazon."
This type of product design decision happens all the time. Whenever you're considering component costs, you have to evaluate all of the options. You're mischaracterizing it as a fight between engineers and management.
And second, Amazon did the right thing and listened to them, when they didn't have to. They could have given the project to a different team, reassigned people, or even fired them.
Instead, they had the sense to listen to their engineers, which was the right thing to do.
For some reason, I'm thinking this point of view isn't held company-wide.
That's a rather poor choice of words. I prefer "helping police departments catch criminals", myself. 'cuz, you know, police departments exist for a reason.
Police have a legitimate, important societal purpose, and have historically abused and over-surveilled minority populations in a way that's highly problematic.
There's a compelling use case for facial recognition in law enforcement. There's also a compelling case that it needs to be closely scrutinized and regulated.
New York recently finished settling and paying out a case where they were accused of heavily and unreasonably surveilling a number of Muslim people and properties. In the end, they still admitted to no misconduct.
This is just a single drop in heavy policing that NYC pushed for and is still dealing with the effects of doing so. Stop and Frisk comes to mind: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=846365
A lack of order is not what I'm 'proposing'; the point is rather that 'order' is very obviously skewed against minorities, as shown by various sources you could seek out right now. One could also discuss how it's based on crime statistics, but that reaches into "systematic oppression" territory: you can't continuously punish random people of a certain race just because statistics say they're 'likely' to commit crime; that is systematic. What you see as a dystopian shithole is already just that for those who can't have their peace without law involvement, and ML tools will not skirt around this; the bias will only transfer and amplify it.
> I also believe countermeasures are necessary to prevent Islamic radicalization (and any other kind of radicalization as well).
The department responsible for the spying disbanded, and it was confirmed in 2014 that it hadn't generated a terrorism-related case since 2008, as stated in the previous NYT article. You'd frankly figure that after 9/11, American Muslims, let alone those in New York, would be actively against any kind of 'radicalization', unless you consider simply practicing religion as 'radicalization', which the NYPD practically did here.
> Don't try nothing - won't be nothing
I genuinely envy the ability to state such a thing, to be honest.
Sounds like an argument for addressing income inequality.
If you want more recent examples, Chicago was disappearing people to black sites for interrogation this decade.
and torturing suspects for decades, well into the 2000s:
See: South Africa
They aren't always good guys, and it's ok to have legitimate concerns about cooperation with law enforcement.
Who are these engineers? Have they ever spoken publicly about this stance?
So while I agree with you, it _also_ wouldn't surprise me if someone raised the concern, but that person was on a different / more siloed team and therefore the concern never reached the execution stage. Ergo, too many management layers and/or nodes of entropy for communication.
But that's just a hypothesis.
And when you go looking, sure enough, there it is in the backlog down in Priority "We'll get to it when we get to it, after all these other more important things that needed to be completed by last week."
"The school district intentionally did not publicize the existence of the surveillance technology. It also actively sought to conceal it."
I've worked on several products that had capabilities that we were told by attorneys could not be "advertised" (i.e., no references to them) until the complete feature was ready to be announced.
The first-order response of marketing doing their job is 'who cares?'. Few people care about tech tidbits that are not user-oriented.
There's already tons of things to worry about and address - and every single bit of copy takes up valuable space.
The issue also does not fit into the standard communications framework: Hey, should we tell people that there is a microphone, even if it is not working and does nothing? How do we even do that? "Hey, your alarm has a microphone!" Wouldn't that seem odd? Why does it have a microphone?! You'd even have to kind of explain it: "Your product has a microphone so that one day in the future, you might enable some other features that don't exist yet."
"this has privacy implications" - no - it only has the perception of privacy implications. Because Google is not actually intruding on people's privacy, it's unlikely they really thought about the need to give people an unneeded affirmation.
Maybe they had a discussion about it, maybe it just didn't rise to the level of 'very important'.
Only with a very specific concern for a subgroup of customers who are wary of these things, would someone have enough leverage to get that "Hey, there's a microphone that does nothing!" notice on the box.
Google is not doing specific evil. They are not trying to infiltrate your homes and take nude pictures of you so they can look at them or sell them.
They are systematically evil, in the sense that it makes sense for them to sell you voice/video features that you want in order that they might provide you even better services. And their AI will use nudies of you in a backwards way to learn more about you.
They're getting evil due to their scope of influence, and negative externalities, much like FB has problems with 'Russian interference in elections' - i.e. not a problem they are trying to create, not a problem they want, just a sensitive byproduct of their product and massive success.
I've got to disagree here: it has privacy implications because at Google's end they can issue a software update and now be monitoring all audio, and the T&C no doubt say they can be unilaterally varied without notification. That all means that Google think I accept Google snooping on me; of course I don't but in court Google lawyers would say I did, and them being in a position to do that is important.
Also, if Google can enable the mic at some point then it's likely that's a possibility for a third party (crackers), OR that Google could do it in response to a legal demand from a government.
There seems no good reason to me that the specification summary can't say "microphone - not in use yet, reserved for future applications", with a sentence somewhere expanding on that, explaining they want the ability to improve the device later and so shipped a mic because it could be useful to expand the product's capabilities.
Some of us actually do read instruction books; it would no doubt get some column-inches in a positive way ("what changes might Google make").
Google know everything they make is going to get a tear-down and that mics are going to be discovered in short order: it's ignorant not to anticipate that. In fact, one really has to assume they knew this "issue" would come up.
If/when Google enables this tech, I think it will be clear enough. If not, then it would definitely be time to raise a real fuss.
"There seems no good reason to me that the specification summary can't say" - I agree with you there - however, I suggest that they mightn't even have thought of it. That would be the prudent thing to do, but again, there's no established procedure for it.
There probably was not even a legal-review hint, etc., because as I say, they were not using it.
So it's questionable, weird, but not nefarious.
What they will do, with your ostensible permission, with a signed T&C - is to me, far more nefarious.
I can see this slipping through the cracks between different job roles' responsibilities, although after this incident they'd probably go through a post mortem and find a way to incorporate new checks into the product launch process.
Not a good look for Nest...
You could have found it by dismantling the device and the renter would have won their case against the landlord.
It seems to me like a team sat for a while, looked at all the possible ways they could get people to mount a package like this in every room and settled on calling it a smoke alarm.
At the most optimistic they started with a smoke alarm and gradually realised they could build a general purpose platform based on the hardware being deployed in lots of rooms and many types of sensors being dirt cheap now.
Decided they could enable new capabilities (and data goldmines!) in software later.
It’s a pity there’s not an actual customer-controlled version of it.
Are there any best practices for using a device like this but not having it communicate with the wider world? I.e. It can communicate with you via a homekit hub, but can't connect outside your LAN
I then paired it with https://www.home-assistant.io/ to send alerts to my phone, google home, etc.
Haven't had to actually test that the carbon monoxide alarm works yet because testing that is hard/expensive, but for smoke it works just fine.
Given that these devices are battery powered and meant to last for years on a single charge you can imagine how often they actually connect to the wireless network. And how much traffic they send.
And again, they work perfectly fine without Internet access. Or network access for that matter. I love them, and as someone with an irrational fear of being in a fire they have helped a lot. They're much more sensitive (without being an issue, due to pre-alarm) than the alternatives.
"This is a test. The alarm will sound. The alarm is loud."
(or something like that)
Do you have a 1st gen? I wonder if they're louder. Mine (2nd gen) tests pretty quietly. It's not the full-on alarm shriek. It's a medium-volume beeping and only happens for a couple of seconds.
What is that? Google's not helping.
Like Trump's schtick is getting people riled up.
(Admittedly, I have no idea if the Nest version has a similar test)
I personally agree with the point being afraid of incompetence, not malice.
The alternative concern that Nest is so incompetent that they somehow issue an automatic silence command (either to all alarms or just yours) seems no more plausible than First Alert being so incompetent that your alarm simply doesn’t work. Especially in combination with the fact that this incompetence must either be undetected permanently (i.e. they always silence your alarm and never notice the horrendous bug) or coincidentally tied to an actual fire in your home, this is probably roughly as likely as a meteorite flattening your house.
The only “viable” concern here is that an attacker might silence your alarm maliciously, which implies a lot of dedication from an enemy, because they are literally trying to murder you. Presumably this enemy is also an arsonist, because otherwise there's likely no alarm to silence, and if there is, it's likely a false alarm.
Lots of things can happen through incompetence; you don't even have to go too far with the scenarios.
The first time my landlord silenced a "false alarm", I'd tell them not to ever do that again. The second time, I'd reset the device and register it under an account they don't control.
But yes, I do see your concern now. I was not initially thinking of the landlord actually controlling the device, merely installing and allowing the tenant to control it. There's a lot less ridiculous coincidence required for a landlord to stupidly silence the alarm.
“Not needed” implies that there is no fire risk or that the risk is so low that you don’t care.
Add to that the economic incentives involved when you wish to insure your property against fire and liability, or mortgage it (which almost always imposes a requirement to insure the property).
> 1. The complainant was sued by the landlady (hereinafter: plaintiff) of his apartment, located in a multi-family building, to tolerate the installation of smoke detectors. He rejected the device selected by the plaintiff because it did not serve fire protection alone but, by means of ultrasonic sensors and infrared technology, was capable of creating movement profiles of the persons present in the apartment. Even the recording of conversations held in the apartment was said to be technically possible. The complainant offered the plaintiff to install, at his own expense, a simpler model without radio technology in his apartment. The plaintiff, pointing to the advantages of the device type she had selected, was not willing to accept this. The radio system served solely the purpose of enabling remote maintenance of all the devices in the building via a control unit installed in the hallway.
You have two options, choose one:
- 1. Google wants to spy on you with a hidden mic
- 2. They had future plans for the mic, but it was disabled, so it wasn't mentioned by the marketing department
For the Singapore Airlines story, you have two options, choose one:
- 1. Singapore Airlines wants to record you
- 2. The infotainment devices in the seats are just off the shelf Android devices
One option gets you lots of clicks and lets the infosec drama crowd tweet obnoxious things and sound insightful. The other is the pretty obvious explanation.
Many people already have Android smartphones, so there is already a Google microphone in your house. The big difference is that you know that it has a microphone.
A malicious actor could easily conceal their activity by making 24-hour-long recordings and sending them in the night (or whenever connected to WiFi and plugged into power).
Besides, the attack vector for a non-Google attacker to access this mic may be different than for accessing the mic on a phone
It might be reasonable to be concerned about this kind of thing in the tech crowd, but the vast majority of people aren't.
This should absolutely be the expectation. A note of "microphone (disabled in software)" at minimum. Since when is it OK for a company to sell you a product with hidden functionality that can be used to harm you by either the manufacturer or third parties?
(The obvious defense is that they're not selling it to you, they're renting it out. Such is the pathology of turning products into services. It's a sick market dynamic.)
Do I need to list all the capabilities of some SoC even if I don't take any advantage of them? If a component has thermal sensors I'm not using do I have to list every one of them on the box?
So, I agree no malicious intent is needed to make things turn very bad.
How about both 1. and 2.? Google wants to spy (for ad context etc) with a mic that will be enabled in due time?
And why move the Overton window to "it's ok to have hidden mics in a bloody thermostat, as long as they're not enabled"?
Is there really any doubt that google can and will spy on you if given the slightest opportunity?
I also think this kind of paranoia is detrimental to our evolution as a species.
We should be sharing more, not hiding in our caves.
You can not possibly examine the evidence and claim 100% that there is no interest in spying on anyone.
Doubt that this particular case has that as the core issue? Sure. But be utterly convinced that literally no one, in any intelligence agency, against any target that might be near some sort of microphone-enabled device, has ever had the thought cross their mind that these things might be useful? No intelligence agency has ever looked at one of these companies hoovering up all the data they can get and installing all this stuff everywhere they can and stroked their chin for a moment?
You're basically claiming the NSA, CIA, Mossad, KGB, MI5, and all other such things have never existed, do not exist, and will not exist. The evidence for this is pretty poor.
I'm not asking you to wake up tomorrow and worry about whether your toaster is secretly sending all your thoughts to the alien overlords, but come on. Live in the world a bit. We're 7-ish billion people here on Planet Earth and they are not anywhere near all to a person nice, wholesome people who wish you all the best and would never even dream of exploiting you even a tiny little bit while they joyously enable you on your life journey of exploration and wonder. You're begging for exploitation.
I suggest you start, your profile here is even slimmer than mine.
More seriously: while there surely is some paranoia going on, recent events have made me more careful, not less.
I probably should make a new account every year.
Ironic coming from a non-eponymous account, and in an era when we share two orders of magnitude more stuff than any other, even pictures of what food we had at dinner...
i agree with you in spirit.
Really? Did you ever hear of a guy named Snowden? Do you understand that our government spends tens of billions of dollars annually to spy on people? Do you understand that Google, Facebook, and every other search, advertising, and social media company have billion-dollar business models based almost entirely on surveillance and information harvesting? I hope you are being sarcastic here.
Why? And what else should we be sharing?
The Spanish Inquisition would certainly agree.
It is in no way close to conspiracy theory to question whether Google, or any other company supported by targeted ads that needs massive amounts of human intelligence to perfect its ad targeting, would want to spy on its consumers.
Conspiracy is not a nutcase delusion; it happens all the time, but the term is somehow tainted, which in itself is somewhat of a conspiracy...
"The microphone has never been on," Google says about a passive device, as if that matters. More accurate would be "we did not record the microphone," but that might sound bad...
Is this maybe?
That said, "conspiracy territory" gets close to a knee jerk reaction.
History is full of conspiracies.
A conspiracy is just many people doing each other favors under the table and taking covert action to promote their private interests or political beliefs, something which happens all the time.
Heck, didn't a President resign because he conspired (including eavesdropping) against the other party?
Wasn't another in bed with mafia leaders? 
Hasn't a third had friends profiteering off of a trillion+ dollar war effort (Halliburton, etc.), even using false testimony [3, 4]?
Don't tons of ex-politicians usually end up on boards of private companies they helped pass favorable legislation for and did favors for?
Haven't large corporations strong-armed whole nations, toppled governments, pushed for their own lackeys, etc ?
Wasn't the head of the FBI targeting, spying on, and blackmailing his personal opponents and for his personal gain? 
Just to mention a few examples, just the tip of the iceberg...
As Gore Vidal once wrote: "Americans have been trained by the media to go into Pavlovian giggles at the mention of the word 'conspiracy,' because for an American to believe in a conspiracy, he must also believe in flying saucers or, craziest of all, that more than one person was involved in the JFK murder."
On one end you have individuals who will find nearly any conspiracy viable for whatever reason. That most conspiracy theories are eventually shown to be false doesn't really seem to bother them. On the other end you have individuals that will never believe anything could possibly be true, so long as a government or corporation has plausible deniability. The lengthy list of conspiracy theories that turned out to be true, or other conspiracies that nobody knew of - only revealed decades after due to declassification, don't really seem to bother them.
I suppose we could call both ends naive. Naively trusting to naively untrusting. The 'right' degree of scrutiny is somewhere in the middle. In this case you have the largest ad delivery corporation in the world. They've "accidentally" engaged in behavior such as snooping and logging data from unsecured wifi connections with their Street View vehicles, continued to track users' locations on Android devices even when tracking was "disabled", and so on. Google is also one of the companies known to be collaborating with intelligence agencies including, but not limited to, the NSA. Most recently they were one of the first companies fined for refusing to abide by the GDPR regulations, for a variety of actions including lack of legal basis for the information they were collecting, lack of transparency in what/how it was collected, and enrolling users in tracking without their permission. And while not directly related, I think it speaks to the true character and ethos of the company that one of the words they plan/planned to blacklist in their tracking-enabled, censorship-driven search engine in China is literally "human rights."
And now they "accidentally" forgot to include on the packaging the information that an internet-connected device installed centrally within homes also had a recording device. I mean, given the context of who you're talking about, where do you think the idea that this device and omission might be less than benign ranks on the 'naively trusting -> naively untrusting' scale? The connotation of conspiracy theory, as in your usage, is implying it's naively untrusting. I do not think this is a logical conclusion.
 - https://theintercept.com/2018/12/01/google-china-censorship-...
Which "conspiracy theories" are eventually shown to be false?
The ones concerning aliens and lizard overlords or the Illuminati?
Because there are plenty corporate, political, and economic conspiracies going on all the time, including tons of "conspiracy to commit fraud/murder/etc" at smaller and larger scales, as acknowledged by courts of justice every single day.
Needless to say this did not come to pass.
Another one I found amusing was people believing that SpaceX's retropulsive landings, when they were first being successfully executed, were actually just launches played back in reverse. This conspiracy died pretty fast after they did it over and over, to say nothing of people being able to freely go and watch the landings. But it could also be shown to be false beyond any doubt by reversing the landing footage which, suffice to say, looked nothing like a takeoff. There were also more technical ways to debunk these things such as by looking at individual phenomena (birds, etc). It wasn't a good conspiracy theory, but there were plenty of people that believed it for a while.
But yeah, I'm not really sure what's up with people who seem to think that conspiracies don't happen, and on an extremely regular basis. Even some absolutely awful things. Operation Northwoods was very much a real idea that made its way all the way through the intelligence agencies and the Joint Chiefs of Staff. It was literally one signature away from being carried out. If we had had a president of lesser moral character, not only would it likely have been carried out but we'd probably be none the wiser today. JFK was a great man.
 - https://en.wikipedia.org/wiki/Jade_Helm_15_conspiracy_theori...
 - https://en.wikipedia.org/wiki/Operation_Northwoods
If you design in something which is later not used, you don't populate that part of the circuit board. Not unless you're intending to use it later, anyway. Components cost money.
A software equivalent would be "we had plans to offer an integrated backup system but that didn't happen, although we still upload your contacts list and the contents of your SMSes to our servers."
> If you design in something which is later not used, you don't populate that part of the circuit board.
I think there is no doubt they intended to start using it later, because they did.
As long as the microphone never recorded anything, there's no legal downside to including it and not documenting it. There could be a slim potential issue with advertising a microphone that the customer can never use.
The response to this incident is showing that that view is changing though.
Nintendo released multiple generations of consoles in the US with expansion ports for peripherals that ended up not making market sense to bring to the US.
Things would be much simpler if companies were up front about what they're selling, instead of giving you incomplete information optimized to placate the unsophisticated buyers.
“Don’t worry, those soldiers won’t come out. It’s just in case we want to use them in the future.”
It's just not being currently used.
1. They intend to use the microphone in the future
2. They disabled the microphone after having the boards manufactured, right before shipping - what changed?
If they knew they weren't going to use it, why didn't they leave the microphone unpopulated? It would save on their BOM cost too, there had to be a reason.
So yes they planned to use the microphone in the future, to do precisely what they have done here.
- so what it's recording now, it only checks if you're still watching.
- so what they're storing it, the plane is a public place and there are cameras on the airports anyway.
- so what it's uploaded to the cloud, everything is cloud-processed these days.
You mean "gaining consumer insights to continually develop and improve our products".
Given the existence of a whole industry sector that is all about covertly gathering information about users and selling them off, I don't see what would be that particular far-fetched about this scenario.
Your analysis is sensible. Since we should choose the most likely explanation, a couple of tools might make it sharper:
- In case you're not familiar with it, one helpful tool is prior probability (Bayesian thinking). This video is short and accessible: https://www.youtube.com/watch?v=BrK7X_XlGB8
- There is a public intelligence budget in 2018 of $54.9 billion in the United States, as compared with the combined annual R&D expenditure of Apple, Google, Intel, and Microsoft at $53.2 billion. This employs over 100,000 people.
- According to Snowden, they covertly use microphones. He had reporters put their mobile phones in a fridge/microwave, since they could be turned on remotely.
A sensible assumption is that you are unlikely to chance upon a covert surveillance mechanism if one is installed. (For example, speakers could also be used as microphones.) Where a bug is present, I think assigning 1% to the probability of finding it is reasonable.
In view of the above, after you find an undisclosed and apparently (but not physically) disabled microphone in a product, which is more likely?
1. One of the 100,000 people mentioned, using some of the $54,900,000,000 annual budget mentioned, put it there. They do this thousands of times per year, and you've just found one of them. However, the chances of your finding it are low (1%).
2. It was put in there as part of normal product design but left unused. Perhaps it will be legitimately enabled in a future version. Perhaps Google will use it for OK Google, its voice assistant. It has no covert intention. Google spends a lot of effort on ensuring privacy. The chances of your finding it are very high (90%) - it's not meant to be hidden and is no secret.
If the chances of your finding a covert device is 1% in case there is one, and the chances of your finding an unused but not physically disconnected microphone is 90% if there is one, then to complete your analysis of which is more likely, you should know how many times the scenarios in 1 and 2 occur.
I hope these additional tools - Bayesian probability and some figures about the base rate, could make your analysis sharper. Personally, I feel it's likely that a 1% chance of discovering a covert bug, multiplied by the thousands of such bugs (devices) out there, makes it more likely than the 90% chance of finding a totally unused and unadvertised microphone in a product, since there would be few such cases.
 Pick your reference: https://www.google.com/search?q=snowden+microphones
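To make the base-rate comparison concrete, it can be sketched as a quick calculation. All figures here are illustrative assumptions taken from the comment above (thousands of covert bugs, a handful of unused-mic products), not real data:

```python
# Illustrative Bayesian odds: which hypothesis better explains finding
# an undisclosed microphone? Every base rate below is an assumption.

def posterior_odds(n_covert, p_find_covert, n_unused, p_find_unused):
    """Odds that a discovered mic is a covert bug vs. merely unused hardware.

    Expected discoveries of each kind =
        (number of such devices in the world) * (chance of finding one).
    The ratio of expected discoveries gives the posterior odds.
    """
    covert_discoveries = n_covert * p_find_covert
    unused_discoveries = n_unused * p_find_unused
    return covert_discoveries / unused_discoveries

# Assumed: ~5000 covert bugs found 1% of the time, versus ~10 products
# with an unused, unadvertised mic found 90% of the time.
odds = posterior_odds(n_covert=5000, p_find_covert=0.01,
                      n_unused=10, p_find_unused=0.90)
print(f"odds covert : unused = {odds:.2f} : 1")
```

Under these made-up numbers the covert hypothesis comes out ahead; with different base rates (say, far more than ten unused-mic products on the market) the conclusion flips, which is exactly why the base rate is the crux of the argument.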
HAHAHAHAHHAHA, you cannot be serious.
Next to Facebook, Google is the most personally intrusive company there is in the world today.
It's strange how you think the latter options are the pretty obvious explanations. "Google wants to spy on you with hidden mic" seems to be the fairly obvious one to me.
What's strange is the amount of pro-government and pro-Google comments on Hacker News the past few years. I wonder what the two options are for why that is?
Also, you are offering a false dichotomy. This isn't an either-or situation. There could be other reasons. Could be that "google wants to spy on you with a hidden mic AND they planned it for the future". Another option is "The mic was put there by mistake". Another is that "the supplier screwed up". Or another is that the "supplier intentionally put it there".
Google spying on its customers would result in an amazing lawsuit. People tear apart and reverse-engineer these things for fun and it would have been discovered in due course. Google knows this. So, no, it's not an "obvious" option at all.
You're starting from a position of "of course google is evil". I'm starting from "how much sense does that make?". We've reached different conclusions because of this.
Pedantically listing a bunch of other options is missing the point, and they basically all fall under option 2.
As for your perceived "pro-government" and "pro-google" views on HN: people have different views on many topics. Maybe this is the only place you encounter views that differ from your own?
On the other hand, Google conveniently forgetting about the mics they installed in people's private residences is actually a big deal. This is exactly the reason I would never buy garbage devices like this. Google couldn't make a better case against such devices if they tried. There's no hint that the disabling of the mic wasn't or couldn't be reversed by Google or other parties. But even if it was secure and didn't record anything, Google broke customers' trust by including a hidden mic. Whether they had future plans or not, they lied to all their customers. If they came out and offered free replacements of any systems, I'd maybe buy their apology. As it stands, it's clearly PR bullshit that this was a mistake. One would have to be extremely stupid, gullible, or both to buy that especially given Google's history. That mic was put there on purpose. I also don't buy it that they never recorded anything with it. Of course, we won't be able to prove it and Google won't tell. But once again, their history tells all.
It's a similar phenomenon to the "post-Hillary" world of the Wikileaks email docs. People assume there was hard evidence proving a criminal conspiracy by the DNC to rig the election somewhere in there... mostly because that's what other people told them. Not because they've bothered to look.
People's cynicism has led them to put more trust in the metafictional reality of leaks than actual reality. Which, ironically, makes them easier to manipulate even as they believe themselves to be somehow above indoctrination and control having reached enlightenment through the "Snowden revelations."
I agree that the sentence exactly as you've written it describes a possible conspiracy theory that some people may hold.
I also believe you're hedging a bit: it's possible for people who didn't follow the leaks to infer from your exact wording that a) NSA did not access those databases at scale using the PRISM program, or even b) NSA did not access those databases using PRISM, or maybe even c) NSA did not access those databases using PRISM or any other NSA program. None of those are true.
Here's something relevant from Wikipedia about PRISM:
> Documents indicate that PRISM is "the number one source of raw intelligence used for NSA analytic reports", and it accounts for 91% of the NSA's internet traffic acquired under FISA section 702 authority.
Can you speak to the veracity of that sentence?
Would you consider that as misguided? It certainly encourages a general distrust of all those company logos in the slides.
It leads to things like people implicitly trusting DDG because they weren't on the PRISM slide, or implicitly trusting Facebook and Reddit because they aren't the "mainstream media."
You're supporting my point rather than refuting it, in that you appear to have drawn an arbitrary line in the sand and decided to doubt everything on one side and believe everything on the other. That's not a rational point of view, it's religious dogmatism.
Detecting broken glass with a microphone? Does the device even have enough CPU power (and RAM) to add advanced audio processing features? Or was this going to upload the audio to Google's servers to do the work? If it's the latter, that would necessarily require uploading audio without a wake-word trigger.
Either they just admitted to wanting always-on microphones in the home, or they are blatantly lying about why the microphone hardware was included. Designing hardware for a large market usually involves a lot of value engineering to reduce the number of parts or replace a feature that requires expensive parts with a functionally similar design that is cheaper. Saving $0.01 (or less) by removing an optional resistor doesn't sound like a lot, but it adds up if you're selling >100k units. A microphone is much more expensive. A part that costs $0.366 (or more?) needs a good reason to be included, and "for the possibility of new features" isn't good enough. So what was the real intended use that justified including a moderately expensive part?
The robber about to break your window isn't going to call out "Ok, Google" first so the Nest Guard knows it can upload an audio clip.
$0.366 when buying >10,000. Up to $0.75 in lower quantities. (Prices from a random example: https://www.mouser.com/ProductDetail/DB-Unlimited/MO064402-4... )
(I still think it's insane that the bean counters and value engineers let them include a microphone that wasn't needed.)
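The back-of-envelope math in the comments above can be made concrete. This sketch just multiplies out the example unit prices quoted above (the $0.01 resistor and the $0.366 microphone); these are illustrative figures from the thread, not actual Nest BOM data:

```python
# Back-of-envelope BOM cost impact of an "unused" component across a
# production run, using the example prices quoted in the thread above
# (illustrative figures, not actual Nest BOM data).

def added_bom_cost(unit_price: float, units: int) -> float:
    """Total extra component cost across a production run."""
    return unit_price * units

# Removing a $0.01 resistor across 100k units saves only about $1,000...
resistor_savings = added_bom_cost(0.01, 100_000)

# ...while a $0.366 microphone across the same run costs about $36,600,
# which is why an "unneeded" mic is a surprising thing to leave in.
mic_cost = added_bom_cost(0.366, 100_000)

print(f"resistor savings: ${resistor_savings:,.2f}")
print(f"microphone cost:  ${mic_cost:,.2f}")
```

The point of the comparison: the mic costs roughly 36x what the resistor saves, so value engineering would normally flag it unless someone had a planned use for it.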
Having worked on hardware products, the features planned sometimes (even often!) change after the hardware has been prototyped and an initial production order has been placed. It is cheaper to simply not ship the feature than it is to change the board.
Many in this comment section do not really seem to have much experience with hardware. It is fairly common for products to ship with unused hardware, and it is much more believable than malicious intent, especially given how disorganized Google is internally.
Couldn’t it run a local model to detect possible incidents, and when a local confidence threshold was exceeded, upload to Google to run a more intense model? I’m pretty sure this is how things like “Hey Siri” and “okay Google” are implemented.
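The two-stage pipeline described above (a cheap on-device model gating a heavier server-side one) can be sketched as follows. All names, thresholds, and the stub "models" here are illustrative assumptions, not any vendor's actual API:

```python
# Illustrative two-stage audio-event pipeline: a cheap on-device model
# screens every clip, and only high-confidence hits are escalated to a
# heavier (e.g. server-side) model. All names and thresholds are made up.

from typing import Callable

LOCAL_THRESHOLD = 0.6  # escalate only if local confidence exceeds this

def detect_event(audio_clip: bytes,
                 local_model: Callable[[bytes], float],
                 remote_model: Callable[[bytes], float],
                 remote_threshold: float = 0.9) -> bool:
    """Return True if an event (e.g. breaking glass) is confirmed."""
    local_score = local_model(audio_clip)   # always runs on-device
    if local_score < LOCAL_THRESHOLD:
        return False                        # nothing is ever uploaded
    # Only now does audio leave the device for the heavier model.
    return remote_model(audio_clip) >= remote_threshold

# Stub models for demonstration: score simply by clip length here.
quiet = b"\x00" * 10
loud = b"\xff" * 100
local = lambda clip: min(len(clip) / 100, 1.0)
remote = lambda clip: 0.95 if len(clip) >= 100 else 0.1

print(detect_event(quiet, local, remote))  # False: never escalated
print(detect_event(loud, local, remote))   # True: local 1.0, remote 0.95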
Remember the time Google lied about the performance impact of ad blockers in Chrome, because they wanted to remove the functionality that lets ad blockers work? They changed their position after it was pointed out that this was a huge lie. That was last week.
Maybe they understand and do not care, because there are many vocal critics. But having a microphone in a product and not disclosing it? If not even google can keep track of what they should tell us, how on earth do they think they deserve trust?
(Not a dig incidentally, just that at some point the pattern of behaviour must reach a point that swings Occam's Razor to malevolence being the most likely explanation)
1. History of privacy violation? Check.
2. Increasing pace and scope of privacy violations? Check.
3. Financial incentive to continue and expand privacy violations? Check.
4. Lack of legal oversight deterring continuing and deeper privacy violations? In every single nation, check.
At this point the onus is firmly with Google / Alphabet to prove the ethics of their actions, because we already know their intent.
I suppose they could try to instil a whistle-blowing culture whereby people are rewarded for highlighting potential problems like this to other silos, but then, like external bug bounties, you get into a new family of argument about what the problem is worth, who truly found it first, and the race to be first will lead to a lot of noise around any useful signal.
The engineers will know it is there as they are designing the thing; the money people will know as they will have been involved in the "it is cheaper to just leave it and disable it than to redesign it out" decision-making process.
No matter how flat/heterogeneous/other-all-together-now-buzzword-compliant-word-of-the-moment a company claims to be, there will be siloed groups within it, and within them, like Russian dolls, in larger organisations.
> how on earth do they think they deserve trust
They probably don't, individually. They are like us, with similar concerns.
But they don't need to think they deserve it individually as long as the company overall can convince enough of us that they do (and convince enough of us that will never be convinced of that, that it doesn't really matter in the long run anyway).
Today we have an asymmetry of transparency: institutions and companies are opaque while the individual isn't. This asymmetry in information translates into an asymmetry of power.
The traditional way citizens of free societies dealt with asymmetries of power was to divide them.
A government could easily sentence and jail anybody if it weren’t for some strangely roundabout rules that made this hard.
The privacy movement is part of a power play between individuals and entities that go beyond single persons.
Of course you also have those who think it is about their dick pics.
Yes, millions are a much bigger temptation, but you still have a choice. In the end, either they decided those companies were matching their ethics, or they gave up on ethics for money.
Given our entrepreneurial culture, is that surprising?
It is possible. Not saying it's easy, but it's still a choice.
Then we still make the choice to give the bullies more power in the end.
Yet it is still a choice. It's just a question of how much your values are worth.
This isn't an aberration; it's the goal. Startup companies are group-funded technology incubators that, if they succeed, are consumed by larger technology holding and aggregation companies.
(I used to share an office building with Eero's early team. They seemed nice.)
I'm saying it's still a choice.
We, sadly, prefer one option to the other.
Or maybe there is a third option we don't see.
This is the flow of the Silicon Valley startup ecosystem.
However, what I think happens is that you see a headline "Google buys Nest for $3.2 billion," but the reality (again, I'm assuming here) is that in order for Nest to get that $3.2 billion, they need to reach certain sales goals. So now the little company's drive ends up being to reach sales goals.
So I'm not sure the elegant products get mutated so much to serve the new owners; I think the acquired company gets mutated to cash out.
At least in that case, you have it very very backwards.
Nest made Google more like Nest, not the other way around.
> Can Nest Secure detect breaking glass? No. We’re working on bringing glass break detection to Nest Guard, the main hub of Nest Secure. Nest Detect, the open/close motion sensor, doesn’t have a microphone, so it can’t detect breaking glass. But its motion sensors can detect movement by intruders as well as when a door or window opens and closes depending on how it's installed.
This was listed before this big announcement.
In 2008 a study was carried out that attempted to use facial recognition to screen passengers for signs of terrorist activity, so maybe they are used for that.
On the other hand, as you say, it was probably just cheaper to use an off-the-shelf Android tablet that has a built-in camera...
Maybe a variation of this old meme might help explain: "The great thing about going to the gym isn't exercising, it's showing everyone online that you did."
I should imagine there's a non-zero number of fetishists who would pay $ for a live Skype call and/or webcam show with an exerciser.
1. Accept the OEM design (cameras uncovered) but possibly have to deal with people not liking a camera shoved in their face (camera active or inactive)
2. Modify the OEM design (cover the camera, costing money) and nobody even knows that there was a camera there in the first place
So I'm curious as to why they chose #1. A pure cost-saving exercise? Reserving use of the cameras at a later time? Didn't have the option of modifying the design? Didn't think people would mind the thought of being filmed?
Privacy has always been an important factor when people consider any Google products, and they are fully aware of that so this topic must have always been on their list of priorities. For a company like Google with rigorous testing/approving processes in place before a product is even launched, to come back and say that it was an accident is pretty hilarious, though realistically what else could they have said?
I still like them. It's a love-hate relationship; we passed the denial phase and entered the acceptance stage a long time ago.
In fact, I think it's more plausible that this entire foray into IOT is to collect even more data for use in advertising (e.g. get more microphones in more places). Why else would an advertising company get into such a wide array of businesses?
Yes, their products are convenient and typically get good QA testing, but there's still no way I'll be convinced that they're not trying to get as much data as possible to contribute to their core advertising business.
Can you change the title to say "Nest Guard's"? This has nothing to do with the Nest Thermostat, which is called "Nest".
Anyone who is even remotely familiar with hardware design will know this cannot be an accident in any way, shape, or form. It's there because it's designed to be there. The fact that it's not documented puts it firmly in the territory of extreme malice and dystopic surveillance unconstrained by any ethical concerns.
The only folks for whom this is not a concern are those unburdened by any sense of societal or ethical concern. They represent those sections of the tech community who have zero compass or qualms and do not see any problem building a toxic dystopic society.
There's no way I'm getting a digital personal assistant like Google Home or Amazon Alexa. It's a novelty that trades privacy for a little convenience, and I'm not that lazy.
(... in fact, I tend to think about it in just the opposite way: if I'm ever murdered in my home, I want the cops to be able to subpoena Google to get evidence on who the killer was. It's nice to have a system watching my back all the time).
As an aside, this is why I would never use an Android phone. At least Apple, for all their faults, allows me to keep my data on my phone and treats privacy, user consent, and app permissions as serious matters. Meanwhile, Onavo is still available in the Play Store.
Edit: It's telling that you didn't respond to my reasons for why this technology is potentially harmful, and instead just reached for "but what about phones?" I would love to hear your argument for why corporate or government abuse of data from always-online, always-listening devices like Alexa and Google Home is not a real concern.
Frankly, they seem like less of a privacy concern to me than a smartphone because they don't also track my movements.
> I would love to hear your argument for why corporate or government abuse of data from always-online, always-listening devices like Alexa and Google Home is not a real concern.
It is a concern, just not a big one. I use an Android phone (and an iPad), so I figure that horse has left the barn. I'm willing to accept some risk if there are benefits. For example, I risk my life every time I drive my car. If I'm willing to take on that very real risk, why would a hypothetical about an internet-connected device doing much less harm scare me?
Your point about using Apple exclusively is a good one. If I were more worried about it I would do the same.