I read through the full text of the bill (http://www.ilga.gov/legislation/fulltext.asp?DocName=&Sessio...) and it sounds like companies can be sued if one user agrees to a written policy but the device is then used by another person (e.g. a spouse, sibling, or friend), which makes smart speakers basically impossible. (User identification isn't good enough, and even if it were, mistakes can happen.)
> No private entity may turn on or enable, cause to be turned on or enabled, or otherwise use a digital device's microphone to listen for or collect information, including spoken words or other audible or inaudible sounds, unless a user first agrees to a written policy informing the user [...]
Then perhaps in their current form, they shouldn't exist. It's already illegal to record people without their consent in many states. We shouldn't give up this right to privacy for a bit of convenience.
Realistically, though, don't a lot of stores have cameras and mics lawfully recording for security purposes? Are they all legally obligated to post warnings?
Yes, at least here in the UK.
Smart speakers are not telephone calls, and even if they were, federal law only requires consent of one party to record (11 states require all-party consent).
In public spaces, you generally lose any right to consent to recording. If you're in someone else's home and they record you with their smart speaker device, then your beef is with that person whose private property you're being recorded on. Same as if they were recording you with their phone's camera in their private residence, vs recording you in a public space.
A dumb recording device can't do something illegal without its owner's/user's affirmative action. If the homeowner uses the recording device illegally, they're liable for that use.
In contrast, Alexa and similar devices operate on rules built-in by their builders. They do what their builders intend, not what applicable law or their owners require. Liability should rest with the builders.
Not at all, it's how you use it. Cars can run over people but cars can exist; driving over someone is a crime committed by the operator. A machine with a microphone need not be, of itself, illegal.
There is an issue with secret microphones, as with Google's camera that secretly included a microphone. That seems fraudulent, because the user is not able to knowledgeably decide whether she can trust the device.
Are Alexa and others designed to reasonably ensure only consenting users are being recorded?
Should I be allowed to buy a vehicle without a clear view of my surroundings if I only intend to use it on private land? Should I be allowed to buy a recording device with no privacy features if I intend to use it only in private? Does the potential harm produced by the availability of tools without bells and whistles for public safety outweigh the desires of users who would gladly pay for the bare-bones version and use it in private?
You _should_ absolutely be allowed to buy a non-street legal vehicle to operate on private land. You _should_ be able to buy a recording device that has no privacy features. You should _not_ be able to sell a recording device that by default is streaming all audio all the time for common consumers without a big label saying "everything this device is within hearing distance of is now public data and you agree you are in violation of the law, where applicable, for using it".
We are no longer in the days where consumers can be expected to know what their devices are even capable of, let alone what rights are being trampled.
This basically would outlaw ANY device that is voice activated, because it has to constantly be 'recording' to be able to be activated.
Tell me, which of these do Alexa and the like perform?
Would it be more difficult? Yes! This is the reality for any company where rights and laws actually matter. If you can't abide, you can't release. Very simple.
I was thinking that in order to fingerprint a voice, you have to "record" it to a digital format that you can process to determine if it is the user who has consented. Even if it is deleted after processing, it was still "recorded" for a few milliseconds at least.
Does the law as written make that distinction? Would recording locally to check a fingerprint not still break the law?
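The transient "recording" the parent describes can be sketched as a short ring buffer: audio is continuously written in, checked for a wake word, and overwritten. This is a hypothetical illustration (the `wake_word_detected` model is a stand-in, not any vendor's actual implementation), but it shows why every sample is still momentarily digitized and held, however briefly.

```python
from collections import deque

BUFFER_SECONDS = 2
SAMPLE_RATE = 16_000  # samples per second, a common rate for speech audio

# Ring buffer holding only the most recent ~2 seconds of audio.
# Older samples silently fall out as new ones arrive.
ring = deque(maxlen=BUFFER_SECONDS * SAMPLE_RATE)

def wake_word_detected(samples) -> bool:
    # Placeholder for an on-device keyword model; always False here.
    return False

def start_streaming_to_cloud():
    # Only after a wake word would audio leave the device.
    pass

def on_audio_chunk(chunk):
    """Called for every incoming chunk of microphone samples."""
    ring.extend(chunk)            # each sample is digitized and buffered
    if wake_word_detected(ring):  # the model inspects the buffered audio
        start_streaming_to_cloud()
    # Otherwise nothing persists: the buffer keeps overwriting itself,
    # but every sample was still "recorded" for at least a moment.
```

Whether that momentary buffering counts as "recording" under the bill's language is exactly the ambiguity being debated.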
It's the same with phones and security cameras: the manufacturers are not the ones at fault when the devices are incorrectly used by owners in states with specific laws regarding recording.
I'm not suggesting the device be made illegal, but I do think a ton of common applications should be... and without those applications, the device loses viability.
The issue at hand is that if the manufacturer is the one who turns on the device, then yes, it is 100% them breaking the law.
If my phone is hacked and the microphone is remotely enabled to record a conversation and ends up violating the law, I'm not at fault, the hacker is.
So how is it different if the "hacker" is just the manufacturer using a private channel that they built for themselves?
That's not privacy.
> private property
When you actually meant:
> personal property
Why do you want to impose this on the rest of us? Why can't I just let people give me this useful service in my own home?
I think the biggest issue here is how smart speakers work: they record your audio and send it to be permanently stored on Google's, Amazon's, or Apple's servers, rather than processing it locally and discarding it, which we have the technology to do just fine today.
The only reason we're retaining voice recordings is to provide valuable data to the companies in question.
Indeed you can discard unstored data.
I write data pipelines for a living. We often use "discard" as a term for parts of data on which no further processing is performed and are not stored.
It's an extremely common usage of the term. See  for example.
However, you could make a smart speaker that doesn't transmit the audio to a server.
"Sometimes they hear recordings they find upsetting, or possibly criminal. Two of the workers said they picked up what they believe was a sexual assault. When something like that happens, they may share the experience in the internal chat room as a way of relieving stress. Amazon says it has procedures in place for workers to follow when they hear something distressing, but two Romania-based employees said that, after requesting guidance for such cases, they were told it wasn’t Amazon’s job to interfere."
It seems that workers are willing to sign off their basic humanity and dignity to corporate authorities, just as they did in the Milgram experiments.
If Amazon does not want to get involved, they should not get involved by listening.
If Amazon workers hear something that sounds like a situation the cops should intervene in, they're in a tough situation. Not calling the cops could mean letting someone die. Calling the cops could embarrass someone or, somewhat less likely but possible, get someone killed because the situation unintentionally escalates.
Calling the cops definitely draws attention to the fact that Amazon was listening in the first place, which I would argue is a good thing and which Amazon would really like you to forget.
My feeling is that for the sake of the workers' humanity, they should be allowed to call the cops if they are concerned about the user's safety; otherwise you're asking them to ignore their conscience, which is dehumanizing and likely to haunt them.
And if users plan to say alarming things, they should turn off their AI microphones first. Having them know to do this brings healthy attention to the true nature of the devices.
And if there's any question, that's why we agree on safe words.
My wife's sister is an EMT and she's really a special kind of person. Whether it's natural for her or a consequence of her profession, things just tend to slip off her like water off a goose. One has to wonder, though, how much sticks on a deeper level, perhaps surfacing during personal distress years later.
How would they tell whether their models are right or wrong without listening and having someone compare?
I see nothing in this article to suggest the clips they're listening to are related to an always-on microphone.
It's everything after the product thinks you said "Alexa".
Coincidentally, this is how my wife discovered and fixed a filler word habit of "Ok, good, well..."
“OK, Google patent 9,876,543 and see if we’re infringing.”
or if my colleague is named Alexa…
“Alexa, let’s hammer those idiots.”
I think the first would wake Google Home. I guess Alexa lets you change the wake word to 'echo' or 'computer'. It would be better if it let you use something arbitrary like 'Rumpelstiltskin'.
IP leakage or potential misunderstandings don't seem so improbable to me, especially if the listeners (from the Bloomberg article in the OP) in Costa Rica, India, or Romania aren't au courant with "hammer those idiots" as an English idiom in context.
It is interesting that while there are several manufacturers, all of them opt users in as testers by default, no matter which product you choose. So maybe such a market needs a little stricter regulation.
For an example of accidental triggering, look up news on "Alexa creepy laughter".
If they can't do it, maybe their product shouldn't exist yet.
Unlike location history, where I understand the use case, and maybe search history. But all of those should be disabled by default.
I unplugged it for one flyin.
The housekeeper plugged it back in while I was out.
I unplugged it again.
Why in the world would I want any of these smart speakers?
I find that hard to believe. In fact, I can't imagine how it's not illegal. Or if it's not, how it can remain legal.
I am a fan of Android and it's what I use, but I don't like the privacy concerns. Maybe I'll switch to iOS soon. I'm not patient enough to try to use a device like the Librem phone either.
I get that you needed to fulfill some need for a pithy comeback though. Well played.
A lot of people have said Facebook ads suggest products or services to them that were discussed on phone calls.
If you have an Echo or Kindle Fire TV, I guess it could be easily reproduced if Amazon were really listening to and analyzing your voice.
Agree. Here's something like video proof from BBC: https://www.bbc.com/news/technology-35639549
Receiving baby-item ads while no one knows your wife is pregnant? Well, maybe she googled a brand of baby strollers and fb used its graph, plus the fact that you're at the same location every day after work, to determine you're her husband.
Receiving ads for a small music band you only heard at that open mic night you randomly joined 3 days ago? There was probably an event on fb with the list of bands that played that night, plus you logged into the bar's wifi.
With the number of people in the "hacker" community who want to prove fb and google are bad we'd already have real technical proof of it happening. Either they send the data back to a server and process it there, or process it on the device directly, both of which should be fairly easy to detect.
The behavior doesn't even need to "make sense": it could be that people who log on at these times, live here, travel by train, like dog photos, and belong to certain groups are highly likely to be interested in a particular product. It doesn't matter; the system will learn these relations anyway. It might seem spooky and eavesdropping-y from a naive user's perspective, but the simple fact is that when you have that much data, you don't even need to eavesdrop.
Today: "Hey, wiretap, do you have a recipe for pancakes?"
Guys I think we lost to the tech overlords.
This reality is something that should never be ignored, as it goes all the way to the top. For instance, the Supreme Court is the highest voice in our nation on legal matters, and they generally only hear cases where the outcome should be complex and difficult to perceive. Yet on most matters you could predict, with a pretty good degree of certainty, exactly how most of the court would vote, on most issues, based on little more than knowledge of their individual ideologies. This is why I'd never want a 9-0 Supreme Court (or even a 7-2), even if the majority aligned strongly with my views and values. We're all subject to bias and other issues. And the only really good way to keep ourselves grounded and honest is to ensure the presence of dissenting voices.
 - https://en.wikipedia.org/wiki/Bob_Ferguson_(politician)
This is really where the constitution is starting to fail us. The government can increasingly circumvent the constitution by simply pressuring, or enticing, monopolizing companies to cooperate with them, meaning they need not pass any particular law, which in turn means that the constitution is increasingly powerless to constrain the behavior of the government. As one obvious example, imagine the government wanted to stop videos from being published on some topic. In times past this would have been a huge deal: they would have had to try to pass a law, which would have run directly into the first amendment. Today, all they need to do is pressure, or incentivize, a tiny handful of companies, Google and Facebook in particular, to cooperate, and they can censor whatever they like without passing a single law.
I wonder if this is analogous to how the interweaving of church and state felt in times past, before we started to pass laws requiring the separation of church and state. Both entities are able to covertly pursue their own ends with mutual plausible deniability. 'We're not being anti-competitive. We're behaving within the bounds of the law set by the government.' 'We're not censoring anything. Corporations have every right to set their own rules and policies.' Whatever the case I've no doubt that the next great nation that is started from scratch, as the US was, will undoubtedly make some effort to isolate government from business, and business from government.
 - https://en.wikipedia.org/wiki/PRISM_(surveillance_program)
The important part is that tens of millions of people use Alexa every day, and the utterances are anonymized before being used as training data, so you don't know who said what, just that somewhere someone said "blah".
You may insist all day that this person was irresponsible in examining the contract, which I will also disagree with, but claiming that they have given explicit permission is a lie on your part, and I would have a hard time believing that you don't know that.
if only we had some technology that could identify a speaker based on their voice. oh, wait a second. but, lol, no, they aren't anonymized ... "recordings sent to the Alexa auditors don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number"
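The quote above describes pseudonymization, not anonymization: the name and address are stripped, but a stable account number travels with each clip. A toy illustration (all names and identifiers here are made up) of why that distinction matters:

```python
# Hypothetical account table: the mapping the company necessarily
# holds somewhere, keyed by the same account number.
accounts = {
    "acct-1042": {"name": "Alice Example", "address": "1 Example St"},
}

# A "de-identified" review clip as the article describes: no full name
# or address, but a stable account number and device serial remain.
clip = {
    "account": "acct-1042",
    "first_name": "Alice",
    "device_serial": "SN-9981",
    "transcript": "play some jazz",
}

# Anyone with access to both tables can trivially re-identify the speaker.
identity = accounts[clip["account"]]
print(identity["name"])
```

True anonymization would require severing that join key entirely, at which point the clips could no longer be tied back to an account for quality feedback.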
Once people can perform actually useful tasks with their voice - "hey siri, turn the shower on to 40 degrees" or "alexa, preheat the oven to 300" - while their hands are full doing something else it'll kickstart the whole field.
For example, it's acceptable to access the video to prove someone stole from your store, but it is not acceptable to use facial recognition on your CCTV to see who your best customers are.
In my opinion, the only sensible approach is for the title policy to be unilaterally enforced. Any departure from it will invariably involve someone’s subjective ‘political’ stance on a matter.
As it stands, it looks as if someone at Amazon applied pressure to have this changed. I really hope that isn’t the case because it’s almost too shady to be believed.
The writers of those articles purposefully leave key information out to grab people's attention and force them to read the entire article to find the missing information or fulfill the title's promise.
In this case I think the change of title is definitely defusing the article, but it's also giving the key information that was left out in the bait title.
I agree with you, because changing the original title feels like a disruption of the discourse and opens a bias door for whoever changes the title. But I also believe that bait titles erode the quality of the content and make it harder to consume and evaluate information. It's a hard problem.
In that case, shouldn't they be aware, from the same grad school training that prepared them for this work, that a (genuine) human subjects board would require informed consent for this, at the least?
Do they have informed consent?
And is modern consumer "terms of service" the human subjects ethical standard for corporate researchers?
I don't understand your question about terms of service and ethics here. I work with real human subjects research data and it's a completely different field that does not apply to this situation.
Surveillance capitalism was the first thing to come to mind.
The modern Western conceptions of privacy are relatively anomalous from a historical perspective. There's nothing to stop them from changing again.
Shows every nuke set off between 1945 and 1998.
At least Amazon, from day 1 of the Echo/Alexa launch, noted clearly (not in small print, but right there in pictures on the product detail page) that the user's voice was going up to the cloud. And provided a microphone-off switch right on top of the device. Where's the mic-off button on your (typical) Android or iOS phone?
We are flooded with hype about AI, but they seldom mention the armies of people it takes to get it working and to keep it working.
Even members of the public are recruited to train AI systems without their knowledge, e.g. CAPTCHAs and Google Translate suggestions.
Sounds more like a smokescreen for normal human intelligence to me.
then click on 'Audio and Voice Activity'
Of course, you already said I am a shill, so I don't know why I am trying to respond.
My shill is MyCroft. They're partnered with Mozilla to develop an open source NLP engine, so I have high hopes for it.
OK I have a device that will let you do all your chores for free and for no work. Oh now I need slavery to be legal. It's ok tho - im trying to provide the service people bought.
See how a tangible example makes the tech monoliths' flouting of the law sound ridiculous? Yes. You sound like a shill.
It's hard to appreciate what you are saying when you're basically judging the incentive of economic/technological progress as immoral behavior (in this case, the intention of improving a technology).
Labeling data and training a machine learning model sounds far from immoral. Perhaps for some people the idea of companies developing machine learning models and capturing data to train them is an ethically gray concept.
Thing is, they can't possibly review everything. Tagging is hard: the taggers can accomplish 1,000 data points per day, as the article says. They can only cover a tiny portion of the Alexa data, and probably they use some algorithm to select which part would benefit the model most.
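One common way to select which samples "would benefit the model most" is uncertainty sampling from active learning: send human taggers the clips the model is least confident about. This is a generic sketch of that idea, not a claim about Amazon's actual pipeline; the clip names and confidence values are invented.

```python
def uncertainty(confidences):
    """Margin between the top two class confidences; smaller = less sure."""
    top = sorted(confidences, reverse=True)
    return top[0] - top[1]

def select_for_review(predictions, budget):
    """predictions: list of (clip_id, [class confidences]).
    Returns the `budget` clip ids the model is least confident about."""
    ranked = sorted(predictions, key=lambda p: uncertainty(p[1]))
    return [clip_id for clip_id, _ in ranked[:budget]]

clips = [
    ("clip-a", [0.98, 0.01, 0.01]),  # model is sure: skip human review
    ("clip-b", [0.40, 0.35, 0.25]),  # model is torn: worth a tagger's time
    ("clip-c", [0.55, 0.44, 0.01]),
]
print(select_for_review(clips, 2))  # -> ['clip-b', 'clip-c']
```

With a fixed human budget, this concentrates the taggers' 1,000 daily data points on the examples where a label changes the model most.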
They don't have my consent and I'm increasingly surrounded by the damn things. Take a look at the comments about hotels, new condos, etc. that have Alexa or Google Home installed. And, of course, friends and family.
And more to the point: what kind of hell is this where they get permission to use my data while I have no say in the unethical ways they use it?
You're doing exactly what I'm saying with the shilling tho.
It's just that there's an absurd level of corporate love in the world.
I was listening to Reply All yesterday, and one of the hosts had to explain to the other that, of course, the device has to listen to you constantly in some fashion in order to know if you've said 'Alexa' or 'OK Google' or whatever.
It hadn't even occurred to me that people wouldn't know that.
Sometimes, someone is.
This wording is misleading, because it sounds like someone is listening through the device as the speech is being picked up by it, but in fact it's recordings.
There is no legally binding lock out time because there doesn’t need to be. Live-listening is impractical at scale and also worthless for what the article is describing they do with the data.
Remember, this work is being done to make it so humans don’t have to be in the loop.
Because there is no value in doing so. A person could collect all the spare change thrown into fountains at shopping malls, but they don’t.
Whether you consider that “actually watching” is a matter of opinion, but your ability to process and take action on any of the clips is certainly diminished.