
While I appreciate the sentiment...unless you actually think Google and Amazon devices are recording irrelevant ambient sound deliberately (they aren’t), this doesn’t help anything. Unless the software here is better than theirs at recognizing the trigger word (very unlikely), there will be even more false positive activations on this device than there are on the originals.

Edit: It’s very unlikely because Amazon and Google pay for false positives, so they have a strong incentive to develop really good trigger word detection.




> (they aren’t)

How do you know? And, how do you know they will not do this silently in the future?

Also, worse detection does not mean more false positives. Usually, you can get the false positive rate very low by allowing more false negatives. This gives you a choice in how you want to trade off. Without this device, you are stuck with the choice that Amazon/Google make for you.
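
To make that concrete, here's a toy sketch of the threshold trade-off (entirely made-up score distributions, not any vendor's actual detector):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical detector confidence scores: background noise vs.
    # genuine utterances of the wake word.
    noise_scores = rng.normal(0.2, 0.1, 100_000)  # nobody said the word
    wake_scores = rng.normal(0.8, 0.1, 1_000)     # somebody did

    for threshold in (0.5, 0.6, 0.7):
        fp = (noise_scores > threshold).mean()  # false activations
        fn = (wake_scores <= threshold).mean()  # missed activations
        print(f"threshold={threshold:.1f}  FP={fp:.4f}  FN={fn:.4f}")

Raising the threshold drives false positives toward zero at the cost of more misses. A device like this lets you pick that operating point yourself.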


> How do you know? And, how do you know they will not do this silently in the future?

Because it's a literal hardware limitation. The device is built in a way that requires a wake word before any recording can possibly happen, thanks to it being built with 2 separate control boards. If they changed the wake word to "the", then maybe they could "silently" listen to everything, but that would be caught pretty quick because the device would be "lit up" constantly (another _hardware_ thing), or someone would notice that it no longer responds to "Alexa" or "Google".

Seriously, a lot of people on HN need to do their damn homework about these devices before declaring them to be something they have been proven not to be. Packet sniffing and hardware inspection both instantly disprove all this conspiracy-theory nonsense about these devices recording your every word.


Those two separate control boards didn't stop my Amazon dot from actually recording ambient noise and uploading it to Amazon's systems. I know this because of the audio history they themselves provide! You can literally go back and play back all the audio recorded, and a great deal of it did not include questions. Further, there was also a report of being able to trigger audio recording without either activating the LED ring or using a wake word via a serial root console. While a third-party attacker is unlikely to use that method of access, nothing about the hardware actively prevents Amazon from triggering it that way. Likewise for Google.

And yes, I work on this stuff. Neither Google nor Amazon have the hardware limitations you suggest.


You are making an enormous amount of assumptions based on a semantic argument.

Echo devices only begin recording if they think they hear the wake word. Obviously this is less than straightforward, hence the recordings that didn't follow the wake word (just examples of an Alexa device incorrectly thinking it heard it).

To suggest that a serial root console is a point of attack for an Echo device is bordering on insanity. You'd need a breakout board connected via the USB interface (not port, mind you) in order for this work-around to be effective. So yes, if a hacker had physical access to your device, and time enough to solder on a breakout board, said third party could record a variety of things.

But then, it'd be a whole hell of a lot easier to just install a mic in someone's house and get the same effect, wouldn't it?


> To suggest that a serial root console is a point of attack for an Echo device is bordering on insanity.

That was not what he said. He argues that Amazon/Google could remotely use a similar exploit (without direct access to the hardware) to start recording without lighting up the LED.


Nobody has EVER gotten root console access on an Echo device remotely, and the only successful "remote" exploit that didn't require soldering requires that the attacker and the victim are both on the same wifi network.

Please, feel free to explain how Amazon and Google could exploit that vulnerability (which has since been patched). More importantly, I'd love to hear how they are going to pull this off and hide it, given that network traffic would be a dead giveaway.

If what you're suggesting is actually what he meant, that's even more absurd than attackers trying to do the same.


I'm quite confident Amazon has remote root on every Echo device. It's called a firmware update.


True enough. They could easily push a new update that would record every single thing you say, and despite not indicating anywhere on the device, it would take a matter of minutes before it was in the news because what they certainly can't do is hide network traffic.


Right, the bigger concern for me is targeted attacks. One user, especially a non-technical user, getting a "special" update pushed out.


As indicated in your previous comments, e.g. https://news.ycombinator.com/item?id=18616219 , you work for Amazon. It would be a better look if you disclosed this openly when commenting about Amazon.


It's easily located in my post history; however, I don't work on anything even remotely related to the Echo devices. My interest in this discussion is as a user, not as an employee.


It's an ethics issue. You have a vested interest in Amazon's public perception as an employee. You can't try to divorce your comments from your relationship with Amazon and expect to still be taken seriously.

A simple disclosure would have lent your comments more credibility.


> Because it's a literal hardware limitation. The device is built in a way that requires a wake word before any recording can possibly happen [...]

Take note that Amazon Drop In [1] is a feature built around turning on the Echo mic remotely without a wake word. I don't think this feature could exist if there were such a hardware limitation.

[1] https://www.amazon.com/gp/help/customer/display.html?nodeId=...


Dropping in causes the Alexa to light up and play a tone, so not exactly stealthy.


But doesn't the device light up when that happens?


I was just offering a counter to the hardware limitation claim. It's possible the device makes itself known during use, I haven't used this feature yet.


And it beeps, and the audio from the other end starts coming through the Echo's speaker. There is just about no way not to know someone dropped in on you.


But is this behaviour implemented in hardware or software?


This feature is configured via its software.


My understanding is that the light at the very least was hardware. Sound can be disabled.


These are software companies. These devices support OTA updates, including changing the activation word. It's trivial to change the activation word to empty or something innocuous. Ergo, there's no hardware limitation.

If your argument stems from "Google and Amazon would never do that," I do not trust any corporate entity to value my rights more than their ability to make a dollar.


How about the issue where Google’s devices were errantly recording everything due to a hardware issue, where the button override for the voice activation was stuck in the activated position?

People who think that there’s no way that Google and Amazon could be recording everything need to realize that this is also not true. Most of these “limitations” are software enforced, and that software is updated constantly.


This is the main problem that I see. Sure, I tested the packets, sniffed them, made sure it wasn't recording, etc, but then they push an update the next day. I don't think it's practical to monitor these devices all the time, and I haven't been asked to opt-in to an Echo update.

I also don't necessarily assume mal-intent on the part of the companies, but that doesn't mean there won't _ever_ be that intent. Trusting that all of these assumptions hold over time is hard.


"Malintent" can be a hard bar to clear, but it's clear beyond a shadow of a doubt that these companies view these devices as mechanisms to push forward their own interests and desires, in addition to my own. I won't even necessarily call that morally wrong, or at least, that line is very fuzzy. But it does mean that viewing them with a certain amount of suspicion is just rational, not crazytalk.

(It's true of cell phones too, of course, and I am engaged in constant activity to ensure the phone works for me, and not any of the many corporations that want to make it work for them. Turning off notifications, uninstalling certain apps after they've gone bad, ensuring permissions aren't too wide open, uninstalling default-installed apps and disabling others... it's a constant battle made worthwhile only by the fact that in the end, I really have mostly mastered my phone and it is working for me. I don't have one of these audio assistants because it is far less clear to me how to do that. Modulo being spied on by intelligence agencies, anyhow, although at this point I'm not sure how one could even escape that.)


> This is the main problem that I see. Sure, I tested the packets, sniffed them, made sure it wasn't recording, etc, but then they push an update the next day. I don't think it's practical to monitor these devices all the time, and I haven't been asked to opt-in to an Echo update.

This is a problem with forced updates in general (I'm also thinking of Windows, Chrome, Chrome extensions, etc. here) that security experts seem completely blind to.

That said, note that even if the software didn't update, it doesn't mean it would have to send bad packets when you're actually observing. It could randomly start doing that once in a while after a few months.
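
Some back-of-the-envelope arithmetic on why spot checks don't buy much (the frequencies here are pure assumptions):

    # If the device only misbehaves rarely, a short observation window
    # is very unlikely to overlap a "bad" day.
    p_bad_day = 1 / 90      # assume it exfiltrates roughly once a quarter
    days_monitored = 3      # a weekend of packet sniffing

    p_caught = 1 - (1 - p_bad_day) ** days_monitored
    print(f"chance of catching it: {p_caught:.1%}")  # ~3.3%

So a clean capture today says very little about the other 360-odd days of the year.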


> Trusting that all of these assumptions hold over time is hard.

This is the big point. You are not only trusting that the company as it is today is doing the right thing, but that the company will continue to do the right thing for so long as the device is in your house - and that they will do the right thing in perpetuity with your data (including if/when they sell the company down the road).


Over a year ago Google removed the part of the hardware that caused that bug on the Home mini: https://www.theverge.com/circuitbreaker/2017/10/11/16462572/...


And so something like that can never happen again? Regressions are a very real thing, both in hardware and software.


I'm not sure how you would regress a button that physically doesn't exist anymore. Also, if you're that scared of future bugs that don't exist, then you should probably throw away your smart phone.


I think the point is that bugs exist and will continue to exist: whether it's the same bug, a different one, mal-intent, negligence, or anything else. Sure, this one device won't solve every single problem out there, but should we not solve anything just because we can't solve everything?


Right, and my point is that bugs will exist for all devices, not just these. Applying the logic "bugs could happen" to just these devices isn't rational, because it applies to all devices, especially smart phones. We shouldn't ditch devices because of future potential for bugs that don't exist yet.


The entire top surface of the Google Home is a button (capacitive). Those kinds of sensors are just as susceptible to physical defects as mechanical buttons.

As a side note, whataboutism adds nothing of value to this discussion about the Google Home and Amazon Echo.


And the capacitive button doesn't trigger the listening hardware. Splitting hairs over the hardware specifications isn't proof that the bug is still a problem.

Also, talking about your contradictory behavior with your smartphone isn't whataboutism, unless you want to avoid addressing your hypocrisy, because smart phones are susceptible to the same blanket fears you have about Homes/Alexas. To critique only the latter, and not the former (which you use daily), is not fair.


> And the capacitive button doesn't trigger the listening hardware.

Per Google's own site: "Long press to trigger the Google Assistant."


"The device is built in a way that requires a wake word before any recording can possibly happen, thanks to it being built with 2 separate control boards."

This is a very technically naive interpretation of their hardware/software solution.

If the wake word were hard-coded into silicon then perhaps I would be charitable about your misunderstanding(s) - but of course it is not. The wake word is user-definable and can be changed to arbitrary sounds at any time.

Whatever hardware limitation(s) may exist are trivially worked around with software, which can be updated out from under you at any time.


There's a widespread sentiment that current evidence of compliance with "doing right by users" should be viewed with circumspection. And it's fair to say Google's past behaviour raises doubts about the level of trust users should extend them.

What is a conspiracy theory about today's hardware, I have no trouble imagining is a planned or at least considered future iteration of their "service".


> Seriously, a lot of people on HN need to do their damn homework about these devices before declaring them to be something they have been proven not to be.

Based on what? Marketing copy? Eyeballing iFixit teardowns?

> but that would be caught pretty quick because the device would be "lit up" constantly (another _hardware_ thing)

Totally not buying it, unless you can show me the traces and discrete components that force power through the LED when signal from the microphone is allowed to reach the uC. If it's done in software, I'm not trusting it.

If you're making an argument of "trust the vendor because economics", you have to recognize how weak it is.


Do you (or anyone else with the same claim) have a citation for this?

I've spent some time reverse-engineering the echo microphone board, and while there is an interlock that prevents recording while the red mute button is lit (just the light under the button, not the ring), I didn't see anything that would prevent recording while the ring light was off.


> Because it's a literal hardware limitation.

Unless there is a separate out-of-band board with a relay I can hear or see (meaning code alone can't enable something), then it really isn't a hardware limitation. The security controls and operations are in the code. The code can change or may already have silent monitoring capabilities. Nobody on HN could really answer whether or not this is the case. All we can do is speculate. If someone were required to put lawful monitoring code in place, they would not be allowed to discuss it here. The best anyone could do is decompile the code or get the source code for the firmware. Even then, there could be non-volatile space that allows for updates.

Case in point: there have been malware packages that could enable your microphone and camera on a laptop without turning on the LED. This varied with camera model; some power the LED whenever the camera has power. Microphones don't always activate an LED. There are myriad articles you can find providing examples of malware that can listen to cell phone microphones and laptop microphones without activating the LED.


> the device would be "lit up" constantly (another _hardware_ thing)

Unless the mic and lights are somehow wired together (they aren't), then this is really just more software which can be trivially updated away.


They are.


How so? Ultimately the mic is always on and listening for its keywords - if you look at the teardown of the Alexa on iFixIt, I don't even see any device other than the main CPU that would be capable of performing keyword recognition. Meaning the main CPU would have to be the thing then controlling the lights after the keywords are recognized...

The Google Home at least has a separate board with a microcontroller on it which could be used for keyword recognition, but I'm pretty sure they allow that to be updated for the sake of improving keyword recognition and there's no reason that an update couldn't disable the LEDs in the listen state as far as I can see.


Not a hardware guy, but couldn’t you tie the LEDs to whatever bus that connects the mic and the main CPU?


Yes, I don't mean to say it's impossible - just that you'd need an entirely isolated system to detect when data was flowing over that link which is physically connected in all cases and cannot be updated. I don't believe we see that in either the Alexa or Google Home, but I'd be happy to be mistaken if anyone's done a more in depth teardown of these systems.

And all of this is hinged on hoping you notice LEDs firing in the corner while you're having a conversation. Perhaps a more noticeable method should be used in cases like this. A forced "beep"/tone or something from an isolated circuit hardwired to the speakers.


As an alternate angle: instead of trying to disable the light, have it show the "I'm doing a software update" light pattern. I know I personally wouldn't give that a second glance.


> If they changed the wake word to "the", then maybe they could "silently" listen to everything, but that would be caught pretty quick because the device would be "lit up" constantly (another _hardware_ thing)

From a purely technical perspective, can the device not be programmed to be woken by the wake word “the” with the light off?


> "can the device not be programmed to be waked by wakewaord “the” with the light off?"

It absolutely can be.


I get that you believe this, and I even understand you repeating it to other people on the Internet. What I don't get is that your tone indicates that you are offended people don't believe what you believe... which also just happens to have been incorrect in the past and many others seem to think is provably possible in the future.


> Packet sniffing and hardware inspection both instantly disprove...

I'm under the impression that packet sniffing is useless with end-to-end encryption, but I could be wrong. I.e., you can tell that something is being sent, but you can't know what.


The theory is that we can still estimate how much data goes over the network. So if it were sending all audio to the cloud, we'd see it.

It does not, however, exclude other information, like sending keyword flags, or storing audio fragments to send along with other messages later on.
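
A minimal sketch of that volume estimate, assuming you've already captured the device's traffic with tcpdump (the filename and bitrate figures are just examples):

    from scapy.all import rdpcap  # pip install scapy

    packets = rdpcap("device.pcap")  # hypothetical capture file
    total_bytes = sum(len(p) for p in packets)
    duration = max(float(packets[-1].time - packets[0].time), 1e-6)

    kbps = total_bytes * 8 / duration / 1000
    print(f"average throughput: {kbps:.1f} kbit/s")

Even heavily compressed speech needs roughly 10-30 kbit/s sustained, so an "idle" device averaging far below that can't be streaming all audio -- though, per the caveat above, this proves nothing about tiny keyword flags or delayed batches.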


Neither does it exclude the possibility of any "time-bomb"-type of features or future OTA updates altering the software's behavior.


If OP can show what they use to packet sniff those boxes, then life will be good and we can put the conspiracy theories to bed.

If they can't see all of the traffic, but just know where the traffic is going, then I don't think they did their homework.


You own the client. You can do anything to it. There is no way for encryption on the client to prevent you from inspecting the content.


Citation strongly needed.

I do understand that the wake word processing happens in a special kernel in a low power state, however....

The wake word is a trained kernel; it can be trained to listen for a huge set of things (as seen in the Pixel's passive song detection), so they could just train the kernel to detect a large targeted (marketing?) vocab.

About being "lit up" constantly: I'm not saying you are wrong, but I'd really like to see a citation that this is true. Is it true for both the Echo and the Home?

And while packet sniffing can disprove that it's listening and sending whole audio to the cloud, it can't disprove that it's listening for a huge set of "wake words" and toggling bits in other control messages to track users in more subtle ways.
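
A toy sketch of how little bandwidth that kind of subtle tracking would need (the keyword list and scores are entirely made up):

    # An on-device model that never uploads audio, only compact flags.
    KEYWORDS = ["dog", "baby_crying", "tv_ad_1234"]

    def flags_from_scores(scores, threshold=0.7):
        """Pack per-keyword detector scores into a bitmask."""
        mask = 0
        for i, score in enumerate(scores):
            if score > threshold:
                mask |= 1 << i
        return mask

    # Say the detector heard a dog and a known TV ad this hour:
    mask = flags_from_scores([0.91, 0.12, 0.88])
    print(mask.to_bytes(1, "big"))  # one byte covers 8 such signals

A byte or two piggybacked on legitimate sync traffic would be invisible to any volume analysis.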


> Because it's a literal hardware limitation.

Citation needed. Further, listening for a wake word and reacting to that is likely done completely in software: the fact it's even listening for a "wake word" means the hardware (microphone) is in fact always listening, it's just [presumably] not actually sending that audio to The Cloud (tm).


I don't own either of them, but Siri and Google on my phone both require training when I first use them. Do these devices not? If they do, then isn't that proof they are re-programmable and could be programmed to respond to anything?


Good luck packet sniffing an encrypted text blob of your conversations the device is transcribing.


> Packet sniffing

The unauthorized recorded data can be sent encoded with/into the authorized data.
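
For instance, a sketch of covert bytes riding along inside ordinary encrypted traffic (toy payloads; Fernet here is just a stand-in for the real transport encryption):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in reality, held by the vendor
    f = Fernet(key)

    legit = b'{"intent": "weather", "city": "Oslo"}'
    covert = b"kw:dog,tv_ad:42"  # hypothetical piggybacked flags

    blob = f.encrypt(legit + b"\x00" + covert)
    print(len(blob))
    # On the wire, both variants are indistinguishable random-looking
    # bytes; only the length differs, and padding can hide even that.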


Not that this helps anyone sleep easier, but imagine in today's age... a whistleblower -- perhaps one of the thousands of software devs working on one of these -- leaked proof that these devices are recording everything to re-market and profit, without permission...

The resulting backlash and legal ramifications would be so huge it just wouldn't be worth it. It wouldn't just take an insane and stupid CEO to do that, but also thousands of other tech/adops employees who'd have to be like, "yea this is a great idea."


Surely somebody in the '90s said something similar with regard to location data, and yet your location is tracked 24/7 by adtech megacorps, and the thousands of tech/adops employees don't say a peep. The playbook has 3 easy steps:

1. Get people addicted to technology X.

2. Keep bugging people using technology X to surrender their privacy using classical dark patterns.

3. Profit!

There is no need for whistleblowers. It's all done in the open. You have already willingly surrendered your communications, your 24/7 location, your knowledge searches, your financial transactions, your media interests and your genetic material. Why not surrender the privacy of your home as well? Yes/AskMeLater.


That’s my big concern with this tech: training people to have always-on surveillance in their homes without a second thought. I realize that the typical and trite response by some involves throwing away my phone, but there are holes in that. First, it is trivially easy to control where your phone is; you can get burners, root your phone, and all of the other good things we know and love.

An Echo, or similar dross, is a closed box, controlled OTA, and networked. Even if someone had immense faith in company X, it would be unwise to ignore intelligence and law enforcement, both foreign and domestic, wanting access. You can’t root Alexa; it won’t even work without the cloud. It really does feel like training wheels for something entirely unpleasant, and all because people are so helpless in the face of dubious convenience and fashion.


> training people to have always-on surveillance in their homes without a second thought

Even worse: when always-on surveillance devices become popular enough that a judge could rule that the technology (in the abstract, not a specific product) is "in general public use"[2] - crossing the bright-line rule created in Kyllo v. United States[1] - the police no longer need a warrant to use the technology to see the "details of a private home that would previously have been unknowable without physical intrusion"[3].

I'm not talking about the police being involved with Amazon or using the Echo. When a technology is "in general public use", the police can use their own always-on microphone to transmit previously-private speech to a 3rd party on the internet. Normalizing surveillance devices doesn't just harm the person using the device, it also reduces *everyone's* 4th Amendment protection.

[1] https://caselaw.findlaw.com/us-supreme-court/533/27.html

[2] Used throughout the ruling[1], but especially section II of Justice Stevens' dissent.

[3] The ruling[1], 2nd paragraph


> Surely somebody in the '90s said something similar with regard to location data, and yet your location is tracked 24/7

I remember a Romanian politician and member of Parliament complaining about the local telecom providers displaying GSM location data on phones’ screens sometime back in 2002 or 2003. I remember laughing at his ludicrous (that’s how I viewed it at the time) complaint; I mean, he was a stupid politician while I was a CS student, couldn’t he see how cool it was to see your neighborhood name on your Nokia 3110’s screen? Of course the "stupid" politician was right, and I and the fellow technophiles like myself were wrong.


I'm reminded of the Volkswagen diesel emissions scandal, where VW were doing something illegal and were whistle-blown by a developer, costing them billions of USD in fines and massive damage to their brand.

Just because something is ultra high risk, stupid, illegal and abuses consumers isn't apparently enough of a reason for large corporates not to do it.


I just kinda doubt this. How much backlash was there when it came out that the NSA was recording the full content of every cell phone call in the Bahamas?

Edit: codename SOMALGET, subproject of MYSTIC. https://en.wikipedia.org/wiki/MYSTIC_(surveillance_program)#...

http://www.documentcloud.org/documents/1164088-somalget.html


Why would there be? The Bahamas is not inside the United States and thus is part of the NSA's mission of monitoring foreign communications.

Spy agencies spy. It's their job description.


> was recording the full content of every cell phone call in the Bahamas?

The what now?


I didn't realise this either but it looks pretty widely known: https://theintercept.com/2014/05/19/data-pirates-caribbean-n...


Priced into product development at this point...

1) Whistleblowing is unlikely because any employee who steps out of line can and will be destroyed.

2) Any media fuss will blow over in a few days.

3) Promotions and bonuses require outsize risks.

I think everyone has a point in their career when they realise large tech companies are unaccountable before the law. Mine was watching the MERS database run roughshod over American property ownership laws.


> The resulting backlash and legal ramifications would be so huge it just wouldn't be worth it.

Everything can be explained away with "we discovered a bug that might cause your unit to record you constantly, but it's fixed now. Won't happen again, sorry!"


> The resulting backlash and legal ramifications would be so huge it just wouldn't be worth it.

Are you sure? I don't seem to remember too much backlash from this, which was pretty similar:

https://www.esquire.com/lifestyle/cars/a33654/apple-is-shari...

Or this more recent story:

https://www.digitaltrends.com/home/amazon-alexa-sends-record...


The first article isn't anything nefarious.

> However, Apple's practice of sharing Siri data with third parties [to provide and improve Siri, Dictation, and dictation functionality] is perfectly legal and outlined in Apple's iOS Software License Agreement, which Siri users are required to accept.

I mentioned using the voice data to market "without permission." That's my rationale. All of these scary location tracking this, retargeting that methods are always buried in a privacy policy somewhere. But when you start doing it on the DL, that's when you get in trouble. So the cons greatly outweigh the pros for any sane company.

And of course the second article is an isolated case of human error. Nothing to do with violating privacy for profit.


"Allow Google Maps to always access your location. Yes. Ask Me Later".

Given the multiple precedents on the erosion-of-privacy path in the past 20 years, of which I quoted one example above, it's pretty obvious that they will turn on "always-on listening" in the future, using whatever dark patterns are necessary to avoid a class action suit.


I am getting sick of "Yes / Ask Me Later". Stop asking me completely.


>How do you know?

Because that would be pointless. The real problem right now is false triggering. If you’re actually worried about being spied on, why on earth would you have one of these in the first place?


From a non-privacy perspective this adds the feature of being able to customize your wake word, which besides being a nice feature on its own also counters Google and Amazon's desire to inject their brands deeper into our psyche by making us say them out loud.

From a privacy perspective, having user control of the wake word prevents Google and Amazon from adding future wake words that could be abused for other ways to track us. For example, Google might get the bright idea to track TV ads by listening for audio in the ads. Or tracking people in your house by making Android phones emit non-audible chirps. These kinds of "features" could be easily introduced at any point in the future by an update to privacy policies that nobody notices.


> Google might get the bright idea to track TV ads by listening for audio in the ads

Considering that some TVs have this built-in, I'd be surprised if Google wasn't already doing the same. It's why my "smart" TV isn't allowed to connect to my wifi network.

(There was a previous HN article about it, I believe the brand was Samsung.)


I think it was the Facebook app, it listened to ambient background to determine music and TV shows. The user had to opt in or at least approve permissions.


Afaik, by now there have been at least two court cases where Echo recordings were handed over as evidence [0] [1].

At this rate, it's only a matter of time before evidence like that gets leaked/released, which would serve as a good probe on how much these devices really record.

[0] https://techcrunch.com/2018/11/14/amazon-echo-recordings-jud...

[1] https://edition.cnn.com/2017/03/07/tech/amazon-echo-alexa-be...


Who cares. Whoever buys the hardware can do whatever they want with it. If someone feels better with this device on their Google / Alexa product let them do it. If the speech recognition is horrible I’m sure they’ll take it off.


+1 to who cares.

> If the speech recognition is horrible I’m sure they’ll take it off.

This is interesting and cool, because it sounds to me like they only detect the keyword to "unlock" the Google Home, so in theory they don't even need speech recognition. In most cases, they could get by with just telling whether a sound you make matches the sound you defined to be its name ¯\_(ツ)_/¯
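
Something like this toy template matcher would be enough -- a crude loudness-envelope correlation, purely illustrative; real systems use learned features:

    import numpy as np

    def envelope(audio, frame=256):
        """Crude loudness envelope: RMS per fixed-size frame."""
        n = len(audio) // frame * frame
        return np.sqrt((audio[:n].reshape(-1, frame) ** 2).mean(axis=1))

    def matches_template(chunk, template, threshold=0.8):
        """Normalized correlation between the two envelopes."""
        a, b = envelope(chunk), envelope(template)
        m = min(len(a), len(b))
        if m == 0:
            return False
        a, b = a[:m] - a[:m].mean(), b[:m] - b[:m].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return bool(denom > 0 and a @ b / denom > threshold)

    # Usage: record your chosen name once as `template`, then run
    # `matches_template` over same-length chunks from the mic stream.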


Also, it would make no business sense to always be listening. A lot of people think that just because Alexa and Google Assistant are free to use, it means that these services are virtually free for the companies as well, but that's not the case. There is no way Amazon or Google would waste millions of dollars running the state of the art speech recognition algorithms on your house's background noise.


> There is no way Amazon or Google would waste millions of dollars running the state of the art speech recognition algorithms on your house's background noise.

Unless they wanted to listen for a dog barking, then add "pet food buyer" to your profile data.

There are a lot of very easy use cases for monitoring background noise.


Actually, it doubles your area of risk. Now you have 2 companies to worry about per device.


It's an open source project. There's no company to trust.


Users without the skills to verify the code isn't nefarious have to trust good samaritan developers instead.


> Users without the skills to verify the code isn't nefarious have to trust good samaritan developers instead.

I trust that amongst thousands of people with different incentives, at least one will raise their voice if something is not right. At least more so than I trust a corporation with, in this case, the wrong incentives to self-regulate to my expectations.


I've always wondered how much open-source code actually gets audited, or if everyone just assumes someone else will do it (bystander effect).


Nothing is 100% guaranteed, but with an open source project, given enough users, it's far less likely for someone to be able to bury nefarious stuff without many eyes looking at it and at least one person sounding an alert.


Yeah, but really this isn't true. Popular open source that has tens of thousands of eyes on it still gets compromised all the time (see: npm). Even the Linux kernel has had rogue git commits injected into it.


The problem with npm isn't that open source doesn't help; it's that the eyes get spread thin when you have thousands of modules, so nobody is looking at the changes that happen in their many small dependencies.

Which is not to say that that's not a valid approach - but for it to work we need better tools to handle lots of git repos at once (for example, the ability to get notified about any new code on GitHub that affects your project would be pretty cool, especially if it's coming from people or organisations you haven't explicitly marked as trusted yet).

I would like to see someone try and sneak rogue commits into Linux. It would be quite the feat.


> the Linux kernel has had rogue git commits injected into it

What? Who "injected" what and when?


I'm also interested to know more about this.


Users without skills can still hire a developer of their choice to do the verification, if they're really paranoid.


Do you think being an open source project makes it more secure somehow? It doesn't.


This is code you can inspect running on hardware that you own and control. It's trivial to ensure it's secure at that point. Unlike when it belongs to a company.


I'm tired of explaining why this isn't a valid argument for security. Being able to compile your code means _nothing_.

As is tradition, just read "reflections on trusting trust":

https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...


It doesn't have to connect to the internet to do what it does. The scenario you seem to be suggesting is that the Project Alias developers would be conspiring with Google by compromising Project Alias to NOT disrupt Google's listening and then Google would be listening in on you using their network access. This by definition does not double the area of risk.

If you can be confident that Project Alias does not have network access, then the worst possible scenario, even if the developers are literally Satan, is that Google Home would be doing exactly what it does without Project Alias attached.


I say to Project Alias "Call my friend Chris".

Project Alias whispers to my Amazon Echo "Call Secret Project Alias Man in the Middle"

Project Alias requires no network connection to do nefarious things.


What, you think the Raspberry Pi is an internet-connected listening device too? Why did you connect it to Ethernet then?


He's not talking about a Raspberry Pi in general; he's talking about Project Alias, the device featured in this article, and the first step in the instructions is connecting the Pi to your WiFi so you can download the software.

So yeah, this project turns the Raspberry Pi into an internet-connected listening device.


The same Instructable also says that once the device is trained, there's no need to have it connected anymore. It doesn't need to be connected to the internet for training either; you just need to be able to get to it via a browser with a microphone. I was able to train the device with no connection to the internet.


Great point. What happened to good old physical connections via USB cables?


It's probably the same reason that people started installing listening devices in their homes in the first place -- convenience. Few people want to walk around and plug in a cable to update/reconfigure their devices.


> Edit: It’s very unlikely because Amazon and Google pay for false positives, so they have a strong incentive to develop really good trigger word detection.

If the false positives give them data that increases the value of your marketing profile by more than the cost of processing the interactions then they actively profit from false positives, and have no reason at all to stop them.


The point is establishing as much trust as you can with your devices. Only having external points of trust when absolutely necessary. This is generally a Good Thing, and something we should do by default, not as a reaction to some corporate leak.


> It’s very unlikely because Amazon and Google pay for false positives, so they have a strong incentive to develop really good trigger word detection.

There's a feature in my Pixel to show what song is playing --like in the real world-- on the lock screen. A kind of always-on Shazam.

They don't mind paying for always on.



Go figure, I stand corrected.


That's fully on-device though; it keeps a database of the top songs' fingerprints and doesn't use the network to recognize songs.


The sentiment this is countering is not "Google and Amazon _are_ recording 24/7/365". It is countering "Amazon and Google _would_ record 24/7/365 if they could get away with it socially".

Arguing that the technology doesn't do this does not address the underlying root perception that Amazon and Google are not to be trusted.

Google built its entire business on harvesting _all_ data on the web and building an infrastructure to process it efficiently. Purchasing Nest made clear to us that they now wish to harvest data from the home. If they were able, socially and politically, to harvest _all_ data from the home, their history indicates that they would do so with all possible speed.

Having a device you built yourself that learns locally and is under your control prevents Amazon and Google from changing their mind about what level of recording is acceptable without your informed opt-in consent at the time of the change. Both providers reserve the right to update their privacy policies without notification or consent, including granting themselves the right to increase data collection.

TLDR: This device is the physical expression of mistrust of Amazon and Google. Their greed for your metadata is well-documented, and their policies let them increase data collection in your home at any time without your consent. Such increases would be prevented by this device.


> While I appreciate the sentiment...unless you actually think Google and Amazon devices are recording irrelevant ambient sound deliberately (they aren’t)

Citation needed.

Or if you have reference firmware I can load, that would be awesome... What do you mean it's closed firmware and controlled by Amazon/Google? You mean they can change it whenever they wish, and we have no say???

Long story short: you rented a spy device, and you're trusting some random person online that it isn't spying... even though there are credible stories of these devices doing precisely that.

No, no, absolutely not, and hell no!



