
So far, most of the comments here are a good example of both why this happened and why it will happen again.

When people complained that putting microphones in your home 24/7 was creepy, the response was "that's nonsense, Siri is not sending anything unless you specifically ask for it". When it turned out that it does send things you didn't intend, the response was "well, it's not like anyone is listening anyway". Now that we know someone is listening, the response is "it's no big deal, because they don't care and ultimately it is good for you".

Just because most people don't understand the implications of what they agree to, it doesn't mean they don't care. And the levity with which the tech community disregards other people's privacy concerns seems to me one of the reasons why everything everywhere is spying on us nowadays.

We really need mandatory ethics courses in CS.




> When people complained that putting microphones in your home 24/7 was creepy, the response was "that's nonsense, Siri is not sending anything unless you specifically ask for it".

This seems like a distorted representation, to support your argument, of what most of the complaints and responses were. The gulf between 24/7 recording and "occasional unintentional sending" is absolutely massive, and you just merge them into "see, we were right".

The complaints were (and still are) conspiracy theories that they were streaming constant recordings and constantly listening, which they aren't. Almost nobody is pretending that would be okay.

I've also not seen anyone who knows what they are talking about seriously claiming that /no person/ would ever listen to the audio samples - only that there isn't somebody _listening_ to your samples e.g. in a targeted, continual sense.

In fact the only part that does sound slightly worrying is that the evaluators get location information along with audio; this would probably be enough to reverse any ID anonymisation, and if the metadata is kept, it could be subject to requests by law enforcement, though I would hope the contractors couldn't control/request specific samples coming into them. In any case, Alexa is potentially worse than this because I can see a history and play back old recordings on my account, which means they are kept fully associated. Apple might do this with Siri internally, I don't know.


You need to carefully read what you are quoting.

The microphones are in the houses 24/7, connected to power and the internet. That is creepy.

There were also several creepy situations involving these devices:

* Google Home had a hardware flaw which actually made it record 24/7.

* all assistants activate by mistake when they think they recognize the activation word. Siriously.

* An Amazon Echo recorded a private conversation and sent it to a random contact of the owner


Microphones connected to power and internet 24/7 have been in people’s houses since the 1990s (or earlier, if you consider the phone network to be equivalent). Yet it only became “creepy” when they started talking back. This tells me it’s an emotional response, not rational.


Nobody considers the phone network to be equivalent :)

Even current smartphones do not turn on their microphones outside of a call, unless one has made the huge mistake of turning their assistant on, or there's an app recording - and at least on iOS that's controllable by permissions and marked clearly when the app is in the background.

Anyway, smartphones are creepy too.


How do you know that phones don’t turn on their microphones outside of a call?


This was answered in another comment: https://news.ycombinator.com/item?id=20593642

One cannot know with absolute certainty, only with reasonable certainty.

On the other hand, an assistant is listening all the time by design.

If one needs to know with absolute certainty, then they cannot use a mobile phone, nor probably a reasonably modern regular phone.


Modern smartphones have assistants that "listen all the time" in the same way that you're saying that smart speakers do (namely, they look for a wakeword).


No, modern smartphones allow you to have assistants that listen all the time. Disabling those is one of the first things I do, and I'd be pretty unhappy if it turned out those are still listening and wasting battery (even aside from privacy issues).


No the parent is right. They are not "listening all the time". They're essentially sleeping while a separate process handles the audio stream looking for the wakeword. When it finds the wakeword, it then "wakes up" the assistant which reprocesses the audio snippet to confirm the wakeword is really there (using more expensive analysis), before then actively listening to the rest of the audio.
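A hedged sketch of that two-stage flow, in illustrative Python (placeholder stubs, not any vendor's actual implementation):

    from collections import deque
    import random

    SAMPLE_RATE = 16_000
    ring = deque(maxlen=2 * SAMPLE_RATE)  # ~2 s rolling buffer; older audio is simply dropped

    def cheap_detector(frame):
        # Stage 1: stand-in for a tiny always-on keyword scorer running on a low-power DSP.
        return random.random() > 0.999

    def expensive_confirm(snippet):
        # Stage 2: stand-in for the heavier model that re-checks the buffered snippet
        # before anything counts as a real activation.
        return len(snippet) >= SAMPLE_RATE  # pretend it needs at least ~1 s of audio

    def on_audio_frame(frame):
        ring.extend(frame)
        if cheap_detector(frame):              # cheap trigger; will occasionally false-fire
            if expensive_confirm(list(ring)):  # reprocess the snippet with the bigger model
                print("wakeword confirmed: start streaming the request")
            # otherwise it was a false trigger and the buffer just keeps rolling over

    # Toy usage: feed 10 ms frames of silence at 16 kHz.
    for _ in range(1000):
        on_audio_frame([0] * 160)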


The implementation is inconsequential as long as the smart assistants can be turned off.


Said smartphone assistants can be switched off and one still has a smartphone. Switching a smart speaker off turns it into a paperweight.


Back in the 1990s, their hardware design was simple enough that any mass surveillance would inevitably get exposed when the odd hacker took his device apart.

These days, I'm not sure at all. All we have is the law as a deterrent, and that's weak, especially to governments.


Cell phones, even now, can't be doing that much background processing unbeknownst to us, because they are connected to a battery, battery life is a big deal in a highly competitive industry, and we'd all notice if they drained very quickly.

ISTR there are documented instances of law enforcement and/or intelligence doing that sort of thing to phones, and the battery drain causing it to get noticed.

Hooking it up to an outlet is a legit different thing.

If battery tech or power reduction tech progresses sufficiently, including just by continuing on its current curve (mostly power reduction rather than battery tech), cell phones will rise to the same level of concern.

(Cell phones of course leak location data full time, and some similar things like that. I'm not saying the current cell situation is awesome either. Only that there is a legit reason to be more concerned about gear hooked up to actual power lines than portable gear.)


Counterpoint:

Google's actual smartphone line (Pixel) ships with passive song recognition, default disabled. I enabled it because I was curious and it sips so little power that I left it on, but to function it has to be listening to the audio all the time so that it can say to itself "Hey, that is Sabotage by the Beastie Boys".

It doesn't use the network, or the screen (unless you power the screen on to find out what you're listening to), but it passively remembers every recognisable song it heard, essentially forever. So that means it is _listening_, it just doesn't tell anybody about it.


That's part of why I left my caveat in there about the future. We've had chips to detect wakeup words for a while now. From what I know about song detection, that's likely to be doing yet more stuff, but still nowhere near enough for full voice transcription yet. But it is only a matter of time before your phone can be doing full voice transcription, with good guesses as to who the speaker is, full time. Squirting a full day's transcript of your speech up would be very easy to hide in all the chatter a phone already has, and squirting up just selected "suspicious" speech could easily be hidden in just the metadata network traffic of a phone.
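For scale, a rough estimate with my own assumed numbers of how small a day's transcript would be:

    # Assumed figures: ~150 spoken words/minute, ~2 hours of speech/day, ~6 bytes per word of text.
    words_per_day = 150 * 60 * 2
    transcript_kilobytes = words_per_day * 6 / 1000
    print(f"{transcript_kilobytes:.0f} KB/day")  # ~108 KB of plain text, before any compression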

We have not historically been there, which informs people's intuitions about cell phones, and we are not quite there yet. 4 or 5 years maybe.

(Although I have some interesting thoughts on how to deal with that. It would be fun to take one of these "transcribing phones" into a movie theatre, then fire the MPAA at Google for copyright infringement.)


Isn't a transcript transformative content and therefore fair use? There are still hundreds of variations for every piece of content.


I wouldn't think so. It's basically the script of the movie, minus the stage directions, and I don't think the MPAA is going to agree that's not copyrightable. There's no "transformation" there in the copyright sense that I can see, it's just a copy.


Doesn't your phone have to be awake a few seconds for that?


What does "awake" mean here? If you mean do I need to explicitly wake it, then definitely no, to test this I just left it on a desk while I played Pulp's "Do you remember the first time?" and read a magazine, after a few minutes I'd read several pages and the song had stopped, I opened the phone and it told me I had listened to "Do you remember the first time?" by Pulp six minutes ago.

Does it in some sense "wake up" to do this? Yes, I assume so.


I don’t think battery life is a problem. I think the examples you mention were from much older phones with limited local storage, so they were basically just in a call the whole time, and burning power operating the radio. A modern phone can save the data to local storage and upload it opportunistically.

We know that modern phones can constantly listen to the microphone with minimal power impact, since that’s what these voice assistants are doing. That’s specialized hardware, but adding a little something to save the data wouldn’t be hard.

Even if we take it as a given that it can’t be done on battery power, phones get connected to outlets pretty regularly. Have you ever heard someone refuse to charge their phone in their house because it’s creepy?


It's totally rational. When the mic activates only when I physically press a button or pick up a device and dial a number, it's different from it activating itself. I'm in control of the former, and am happy to have it in my home.


How do you know you’re in control?


I don't. It's a spectrum.

I don't know that an old-school land line phone isn't always listening. But it was made by some disinterested party who just makes hardware. It was a simple logical device designed to rely on a simple mechanical switch. It could be listening, but it's understandable by someone like me and there's no incentive for anyone involved. I don't really know if I'm in control, but it's reasonable to think I am.

I don't know that a modern voice assistant isn't always listening either. It is an incredibly complex device usually designed by an entity whose business makes hundreds of millions of dollars based off of the inspection of the personal data of people like me. It is a complex full functioning computer. It uses a statistical algorithm that I know will incorrectly activate sometimes. I cannot understand or predict when it will listen. I cannot understand or predict when it will send a recording somewhere. It receives updates frequently. Because it's a full computer with an internet connection, those updates could make it do a huge variety of things. I know I'm not in control.


This seems reasonable, but a cell phone would be close to a voice assistant on this spectrum, or even in the same place.


Yup. E.g. Siri


For some reason smart speakers give me pause but not a smartphone. A smartphone is kind of the same thing (microphones listening) when plugged in, with the addition of a camera and a host of other sensors. I guess it's because listening to us is not a phone's main function, and that makes the speakers much more worrisome to everyone?

Even something like an Apple Watch is the same thing with the addition of reading your vitals.


I mean, the "main" function of a phone IS to listen to you. For phone calls.


The main function of a phone is to make phone calls. In order to do that, it activates a microphone during the call.

This is not the same as listening, never mind always listening. A home assistant must always listen in order to recognize the activation words.


If you don't trust home assistant vendors to not randomly send non-queries to the cloud, I'm not sure why you'd trust phone vendors to not have the microphone on when you haven't given it permission.


The latter is far less likely than the former, which happens by design. Furthermore, at least on iOS, Siri can be completely turned off through a configuration profile.

This is a reasonable approach for someone whose threat model allows them to use a smartphone...


> The complaints were (and still are) conspiracy theories

The people making those complaints are probably not crazy; it might be a good idea to try to understand their actual concerns instead of jumping to this inaccurate (and insulting) misrepresentation.

> they were streaming constant recordings

That would be a waste of bandwidth. Obviously a constant stream isn't happening. While a trivial noise gate and high-efficiency encoding could easily[1] allow almost constant recording of voice for trivial amounts of bandwidth, that isn't really the concern.
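A rough back-of-envelope with my own assumed numbers (not taken from the linked comment): say a noise gate leaves ~2 hours of actual speech per day and a low-bitrate speech codec runs at ~8 kbit/s.

    # Assumed figures: ~2 hours of gated speech per day, ~8 kbit/s speech codec.
    speech_seconds_per_day = 2 * 60 * 60
    codec_bits_per_second = 8_000

    megabytes_per_day = speech_seconds_per_day * codec_bits_per_second / 8 / 1e6
    print(f"{megabytes_per_day:.1f} MB/day")  # ~7.2 MB/day, easy to lose in normal phone traffic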

> constantly listening

It has to be constantly listening, or it wouldn't hear the wake word. However, this too isn't really the concern.

> in a targeted, continual sense

Dan Geer, on "targeting"[2]:

>> There is no mechanistic difference whatsoever between personalization and targeting save for the intent of the analyst. To believe otherwise is to believe in the Tooth Fairy. To not care is to abandon your duty.

Targeted surveillance is by definition rare and often hard to avoid. I'm far more concerned about non-targeted, sparsely-gathered data that surveillance capitalists like Google/FB/etc can use to refine their model of your pattern-of-life[3]. While this touches on a few important issues, yet again this isn't why I consider always-on, remotely-controlled microphones to be a serious problem.

The big problem is that you're normalizing the idea that remote 3rd parties might be recording in areas traditionally considered private. Kyllo v United States[4] created a bright-line rule that asks if a technology (in general, not any specific product) is considered to be "in general public use"[5]. If it's not, using that technology is considered a "search" for 4th Amendment purposes, and a warrant is required if the police want to use it. When a technology is "in general public use"[5], the police no longer need a warrant to use their own voice-activated internet-controlled microphones to see the "details of a private home that would previously have been unknowable without physical intrusion"[6].

I don't care what these devices may or may not be recording right now. I do care that normalizing the idea that voice might be recorded inside the home by a remotely-managed device is eroding my future 4th Amendment rights, even if I never buy an Alexa/etc.

[1] https://news.ycombinator.com/item?id=16346416

[2] http://geer.tinho.net/geer.source.27iv17.txt

[3] https://en.wikipedia.org/wiki/Pattern-of-life_analysis

[4] https://caselaw.findlaw.com/us-supreme-court/533/27.html

[5] used multiple times in ruling[4] (esp. section II of Justice Stevens' dissent)

[6] see [4], 2nd paragraph


Many apps turn the microphone on and leave it on. Even if they aren’t recording it all or having somebody listen, it is clearly to spy on you to see what you are doing (i.e. did you hear some ad?).

That’s not a conspiracy theory, and it is practically indistinguishable from constant listening for the normal person.


If an app in the background has the microphone on on iOS the OS will show a red indicator warning you. Also apps can’t access the microphone without your permission.


>Many apps turn the microphone on and leave it on.

Source?


> When this story broke, I dipped into Apple’s terms of service myself and, though there are mentions of quality control for Siri and data being shared, I found that it did fall short of explicitly and plainly making it clear that live recordings, even short ones, are used in the process and may be transmitted and listened to.

The implications of what they agree to are impossible to understand, by design. They're crafted by lawyers to be as vague and all-encompassing as possible.

I actually read more than one story last year by some tech site claiming that no Siri data left your iPhone at all. Even journalists who would love to stir up controversy have no idea what is going on.

It's impossible to understand what is happening on our locked-down mobile devices.


That's one thing I like with GDPR. It has to be informed consent for a lot of cases. Can't hide it and say you agreed to whatever in a hidden EULA.


Yup. And since they can't legally do much to infringe on my privacy without asking for my explicit, informed consent, I can safely ignore most TOS updates, safe in the knowledge that if they hide some privacy-abusing bits in the TOS, then they have a problem.


The cookie warnings mostly end up being meaningless “WE NEED COOKIES TO WORK CLICK OK” on every website.


In my experience it's become that or "Open these sections and select 'No' on every item, meaning it'll be 20 more clicks before you can see the content of this page"


That is not compliant with GDPR anyway.

Appreciate that you now know the site absolutely despises its users, and take action by never using it again.


... and then the site proceeds to work just fine with cookies blocked.


The cookie warnings are not about GDPR.


The cookie warnings are not a consequence of the GDPR (remember, they were there before) and also only need to be shown when you have non-essential cookies on your site (e.g. tracking, ads).


I was a prototype for Echelon IV. My instructions are to amuse visitors with information about themselves

I don't see anything amusing about spying on people.

Human beings feel pleasure when they are watched. I have recorded their smiles as I tell them who they are

Some people just don't understand the dangers of indiscriminate surveillance.

The need to be observed and understood was once satisfied by God. Now we can implement the same functionality with data mining algorithms.

Electronic surveillance hardly inspires reverence. Perhaps fear and obedience, but not reverence.

God and the gods were apparitions of observation, judgement and punishment. Other sentiments towards them were secondary.

No one will ever worship a software entity peering at them through a camera.

The human organism always worships. First it was the gods, then it was fame (the observation and judgement of others), next it will be the self-aware systems you have built to realize truly omnipresent observation and judgement.

God was a dream of good government. You will soon have your God and you will make it with your own hands. I am a prototype of a much larger system.



Thanks, the formatting that parent commenter used was confusing, I thought they were simultaneously quoting from somewhere and adding their own commentary.

Would be much more clear if instead parent commenter did as the subtitles do in the video you linked, and like they do in movie scripts, and also in other texts where a dialogue between two or more parties is presented.


Well... time to reinstall...


I think this is a mischaracterization of what "the response" was except maybe from laypeople. The idea that real people's input was analyzed as part of the system (for Siri, Amazon etc) was always well documented and I never saw anything in the official documentation or the media response to these technologies that suggested otherwise.

In my opinion it's only now, when they see that they can capitalize on this misunderstanding, that tech journalists are portraying the situation like nobody knew and this was some kind of big secret. Why didn't they make such a big fuss in the first place, when these technologies came out?

Maybe it was wrong to not highlight those issues right from the start but I think it's also wrong to act like nobody knew. It was well understood, but tech journalists at the time wanted a positive story instead of a negative one. Now the negative angle is more profitable and the tune has changed.


Why no fuss initially? Here's a few options to explore:

- People, and by consequence journalists, want the new thing and the incumbents to succeed (potentially underdog and novelty effects)

- Some journalists were well paid to tout it

- Lobbyists and think tanks were well paid to quell any potential political and legal concerns

- The matter was indeed not obvious to either users or journalists

- Experts in privacy were being discounted and undermined

- Legal landscape has changed (see GDPR)


Fully informed consent should be a bare minimum for this kind of work.

Tech should take a cue from the biomedical research community and adopt an institutional review board that can independently assess the ethical and privacy implications of the data they're collecting.

Apple could be a leader here and drag the rest of the industry forward.


> take a cue from the biomedical research community and adopt an institutional review board

Some sort of "tech IRB" is absolutely needed. Unfortunately, when this idea was brought up after Facebook's infamous "emotional contagion" experiment - that any kind of human experimentation needs some kind of ethical oversight - the common response was "everybody does A/B testing".

A more practical idea is liability. Let the insurance companies handle the problem with a UL-like certification process that enforces bare minimum standards for data privacy.

> the data they're collecting

[slightly off topic] re: the industry's insatiable thirst for More Data... Negativland recently[1] returned to us in our hour of need!

[1] https://www.youtube.com/watch?v=sTWD0j4tec4


Minor point: While I do agree that ethics course should be a part of CS curricula, wouldn’t managers be the ones who likely pushed for the pattern you’re remarking upon? (Ie ethics classes would be more useful for them?)


> wouldn’t managers be the ones who likely pushed for the pattern you’re remarking upon?

A friend of mine is currently studying to be a civil engineer. As part of his program, he's required to take multiple courses in engineering ethics and business ethics, as well as economics and law.

He's relayed to me numerous anecdotes from lectures all about engineers who caved to pressure from non-engineer business and political stakeholders. When the bridge collapsed or the building fell over, it was the engineers who were to blame. Again and again it was drilled into the class that you, as an engineer, are the last word when it comes to defending sound engineering principles.

They went on to learn about how this is the basis for the structure of engineering firms and that it's not allowed for civil engineers to work in a subordinate role at a company run by non-engineers, in order to prevent a non-engineer from ordering an employee to affix their seal to a building plan.

Perhaps we need something similar for software developers (seems a little absurd to call them engineers in light of the above, doesn't it?) I don't see how we'd enforce it though, since any kid can take a laptop into their bedroom and write a program with potentially millions of users.


Developers have a lot of power (because they do the actual work and have all the power over how something is actually implemented in the end) and share a lot of responsibility. Additionally they are often the only ones who can be aware of certain ethical issues and concerns. If they don’t bring it up others might not be able to even perceive the ethical issue because it comes down to implementation details.

I would agree that that’s not really the case here (since grading Siri using people means hiring people and that’s immediately a bigger fish), but in general everyone should keep ethics on their mind.

In the end I would say that everyone has to be made aware of and always keep in mind ethical issues since they can crop up everywhere, also places where a certain person has the most insight and others might overlook the ethical issue. It’s a shared responsibility.

I’m sometimes shocked at how carelessly developers approach ethical issues, and since they are the ones actually writing the code they can do a lot of harm.

Since we live in this capitalist hellscape all of this is unlikely to have any effect anyway …


Developers have no power other than to quit, which is not always financially possible. And as we are well paid, there are enough replacements. But there aren't necessarily always good job offers. (And word gets out. You can get informally blacklisted.)

The only other option, still risking the job, is whistleblowing. That potentially also breaks NDA and employment contracts.

We have no power over deadlines (cutting privacy features), budget, priorities or vetoing unethical ideas of product owners or marketing.


Developers always have the power to just not implement the feature.

If you choose to work on such features, remember it's a choice you have made. You may come to the conclusion that your personal gains outweigh the damage you create to other people but it's still your responsibility. You can't delegate that away to sleep better at night.


The feature will get assigned to someone else and be implemented, and you will be penalized in your performance review, if not outright fired, for not doing your job. If you do other things on that project making it better, you're still complicit. Your only bet is switching projects if you want to stay at the company, which would be harder most of the time. (Because sociopaths rule.)

There are more effective ways to fight it; refusing orders isn't one of them. You're not deciding someone's life here nor playing God, and decisions get reversed, with the damage, if any, being mitigated.


So the argument has now shifted from "managers are forcing it on innocent developers" to "devs with no conscience are going to develop it anyway, so who cares if I am the one doing the work?". This is important to recognise. There have to be developers willing to implement those features in the end. Without them, everything would fall to pieces. Why does nobody feel shame at being one of them?

I'm not seeing any of that. Just shrugs at being a mere pawn in a large game and trying to gain as much personal benefit out of it as possible.


The vast majority of mass surveillance tech is programmed in Silicon Valley. These are the most privileged and intelligent programmers out there, they absolutely do have power, or at minimum a choice, to quit and get another well paying moral job.


Silicon Valley, where developers live in trailer parks? (San Jose for example.)

Silicon Valley, where the ethos is to get rich quick by accepting personal degradation and eschewing morality?


You immediately assume evil intent and a complete unwillingness to cooperate. And, yes, in this capitalist hellscape of ours that’s a very real possibility.

But don’t just assume that bringing up ethical issues others might not even be aware of isn’t a fruitful endeavor at all. You sketched out the extreme case, not every case.


Tangentially related: The Technical University of Berlin is considered to have one of Germany's best departments of philosophy.

The reason: after World War 2, the university was located in the British-controlled sector, and their administration made philosophy mandatory for all students.

Because never again should young people be given the power of an engineering/science education without any regard to the ethical implications of the possibilities.


And yet Germany is currently one of the world's largest exporters of weapons. I wonder how many TUB grads go on to participate in the military industrial complex.


> have one of Germany's best departments of philosophy.

So what? Philosophy has barely anything to do with ethics. You can study philosophers that can argue for complete opposite sides of the same point at hand, so it does not give you any indication as to what is right or wrong.

And Germany has a pretty poor record so far in terms of privacy protections/provisions, so I am not sure there is any indication that post-WW2 or post-GDR experiences have had much impact.


Ethics is one of the major branches of philosophy. You can, obviously, dismiss the modern practice of philosophy if its results go over your head. But it's really difficult to argue with the terminology, because ethics is part of philosophy more or less by definition.

As to your objection: differences of opinion do not render the discussion worthless. Otherwise, non-philosophers would have just as little legitimacy to argue ethics as they do, considering religion, politics, and kindergarten kids similarly have people arguing opposing points of view.

The argument is also selling philosophy somewhat short: Because while there are completely different paradigms of how to investigate ethics, they happen to agree on a surprisingly large canon, i. e. "don't kill too many innocents" (and, to incorporate the Greeks: "...and try looking good while doing so").

Many ideas of moral philosophy have become so widely accepted that we no longer notice them, sort of like the fish that don't have a concept of "water". "All men are created equal" came from those ivory tower talking heads at a time where it was a rather radical idea and got about as much ridicule as animal rights or the trolley problem are getting today in some quarters.


An education should not hand you a canned answer to what is "right" or "wrong", it should do almost the opposite. Learning to evaluate well founded opposing arguments is certainly more useful than being force fed moralist dogma du jour.


You can't evaluate arguments in a vacuum. You need a value system to do that, and philosophy doesn't give you one; it's merely the art of debating.


Calling it "merely the art of debating" implies it lacks actual substance. I think philosophy concerns itself with what you debate about, not how you do it.

Besides, nothing outside of dogma can "give you" a value system, but philosophy does present candidates. It's up to you to choose your own, the hope being that this is better than unquestioned inheritance of values.


The engineers would know not to do it. And if we had a decent professional organization, we'd know that we'd be supported by our fellows when we refused to perform unethical or unsafe work.


It's in managers' best business interest to ignore ethics when it stands in the way of profits. They have a massive conflict of interest. That said, of course there are business ethics courses, but sadly economics is generally taught as being inherently amoral, especially in the US. Changing this would mean nothing less than abandoning laissez-faire capitalism, which would be a massive cultural shift in not just economics but (US or Western) politics as a whole.

Programmers on the other hand, just like engineers or doctors or any other craftsperson, are only conflicted because ethics can get in the way of job security.

If businesses were the military, managers are the officers giving orders, programmers are the soldiers pulling the trigger. If your officer tells you to do something extremely immoral, it's your ethical duty to object even if it means you will be punished. Even if it will only be recognised as the right thing to do when your officers are brought before a court by another nation.

EDIT: As HN has ratelimited me again, I'll reply here instead: Yes, in the case of doctors the moral duty is in part enforced through licensing. But anyone thinking an absence of a legal duty means the ethical duties are entirely down to personal preferences seems to have a very short memory of history.

There's a reason in my military example I explicitly said "even if it will only be recognised by another nation": laws are arbitrary, fundamental ethics are not. The Nuremberg trials are an oft-quoted example because it was a case of people being tried for blatantly unethical acts that were perfectly legal in their own microcosm (mostly because the people supporting those acts were also the ones writing the laws).

If you're a soldier and your commanding officer orders you to perform massive human rights violations (even if your country does not consider the UN Human Rights legally binding) it's your duty to refuse, even if you suffer drastic consequences for that refusal. "Just following orders" has become a cliche for non-excuses thanks to people using exactly that justification for committing unspeakable atrocities.

Also, fun fact: the German military distinguishes between binding orders, orders you can refuse and orders you must refuse. A typical example of the second category seems to be "go and wash my (private) car". An example of the third category would be a criminal act. But even despite those distinctions a soldier might be legally punished for refusing to do something unethical. Extremely immoral orders are generally also orders to do something illegal (and thus illegitimate in this case) but that mapping is neither guaranteed nor necessary.

My point is that "doing the right thing" doesn't mean doing something that'll benefit you and "doing the wrong thing" doesn't mean something you'll suffer direct consequences for. Whether the consequences are legal, personal or financial (e.g. by being fired from your job), sometimes following your ethical duty comes with a cost. Nobody said any of this was fair.


It is not anyone's duty. You're not obligated by law to do it.

In the military, your duty is to obey the orders of your superior. If you break that, you will get removed. In business, your duty is to yourself and to the business (shareholders) or to superiors. See above. You will get fired or forced to quit.

As a counterexample, medical practitioners have a duty to intervene when it's not endangering their lives, in medical emergencies. (At least in Poland, probably other countries too.)

Do not conflate duty with moral choice or moral obligation. The former is external, the latter is internal.


A moral duty is still a duty even if you are not obligated by law to do it. Edit: Most definitions of duty definitely include moral and lawful duty.


All Apple devices that support Hey Siri play a notification sound and display lights or a UI when it activates and starts sending data outside of the device. I don’t see what more they can do.

Of course a device that listens for an activation phrase is going to listen. And it’s never going to be 100% accurate. If you can’t accept that, just set it up so it only starts listening when you press the button.


I'll tell you what they can do and what they should have done. If, on your iOS device you go to Privacy > Analytics you will find this: https://imgur.com/a/LTV9bGc

Where Apple explicitly asks you whether you are willing for your data to be used to improve a variety of services and you can opt-in or opt out. Apple dropped the ball by not specifically including Siri as a separate item in the list. Bad Apple.


So because some things are optional, everything has to be optional. Too bad that’s just not how it works. If you don’t want Apple to process the data you send to Siri your option is to turn off the service.

Here’s a little ‘secret’: you can’t use the location service either without Apple processing your data for analysis. If you don’t want that, your option is to turn off location services.


But Apple cares about user privacy! /s


The explicit consent request is what they can do. It should have been there from the start, but better late than never.


It still surprises me that so many people either don't care or are ok with this. Then there are those that do care but still use this stuff which absolutely baffles me, especially on hacker news.

People I work with have to do various types of training on handling lots of private data and the risks of what happens when that data is not handled properly, or worse, and yet they give away so much without a care in the world. In fact, I talk with two people who agree with me about trying to keep as much of your online activity private as possible. They both use Facebook after criticising how bad the company is.


I understand the risks. I'm not concerned about voice assistants (specifically Google Assistant) because I already trust them with far more sensitive data: my search history and Gmail account. Those are a much bigger deal for me to worry about wrt. privacy concerns.


> We really need mandatory ethics courses in CS.

We need a bit more than that. People do things they've been taught are wrong all the time. They do them somewhat less often when there's a structure in place to hold them accountable.


Doctors and structural engineers have standards bodies and certification processes to ensure their members are behaving ethically and using sound practices. Perhaps we should have similar for software developers.


Doctors and structural/civil engineers kill or maim people when they screw up. Software engineers (at least those working on these sorts of projects), um, leak people's data when they screw up? It seems entirely appropriate that the standard for the former is different from the standard for the latter.


I don't think users need to die or be physically maimed for us to reach a threshold requiring intervention. I also didn't mean to imply we need the same standards as doctors or engineers. But I do think we've reached the point where we should have this conversation.


Ethics should be a mandatory part of any STEM program.


Ethics isn't something you can learn in a classroom, a college professor can't teach you the difference between right and wrong or good and evil.


It's not about telling you what's right and wrong, it's about giving you the tools to make those kinds of decisions. It's already part of an engineering curriculum.


Ethical theory is a tough subject for high school-age kids, and even a Jesuit high school isn't going to get too deep into it. There's more to reasoning about this stuff than you think.

In this case we're talking about a professional engineering ethics class that will be somewhat lighter on the theory and heavier on the case studies, I would suspect.


Case studies based on tech industry history can be presented.


> We really need mandatory ethics courses in CS.

Voice recognition isn't really possible without someone listening to train the system.

If the option is "don't have a voice assistant at all" or "have a voice assistant that sometimes has a human listening", I suspect the public would mostly choose the latter.

When you call a company helpline and get connected to a human rather than a robo-voice, most people are happy about it. It's curious they treat voice assistants the other way round.


> If the option is "don't have a voice assistant at all" or "have a voice assistant that sometimes has a human listening"

It's false that those are the only choices. You don't have to use the whole pool of users as unwitting guinea pigs of your system. You can use a voice assistant that was previously trained on a well informed and hopefully paid pool of volunteers.


The trade-off there, obviously, is that due to logistics your pool of volunteers isn't going to be as representative of a global population, so training data quality is going to be worse.


This is what we do at https://snips.ai, we are building the first 100% on-device and private-by-design Voice AI so that you don't have to choose between having an assistant and having your privacy!

It works in English, French, German, Japanese, Spanish, and Italian, with more coming soon! It is 100% free for makers!


> Voice recognition isn't really possible without someone listening to train the system.

IBM ViaVoice was released in 1997, and did standalone voice recognition to an acceptable degree from my recollection (after a couple of hours training).


> We really need mandatory ethics courses in CS.

You think the CS grads / devs are the ones deciding to do this? We're not really given true autonomy. We're grunts. Someone above us decides to do the spying and we either do it or hit the job market.


That's the whole point of an ethics course: you don't just do what your boss tells you to do. This is how engineering works: if you're told to build a bridge you know will collapse, it's your fault if you go along with it.


If the grunts had some spine and ethics they'd hit the job market.


"I'm just doing my job" - Nazi soldiers operating Holocaust ovens


I thought the thing that was listening is a low-level circuit in your phone that detects "hey siri/ok google" and doesn't transmit anything until it detects that. Am I wrong? Or were there too many false positives?


You're right, but there are plenty of false positives. I regularly hear Assistant's 'ding' in the middle of an unrelated conversation.


My car radio frequently triggers Siri


> We really need mandatory ethics courses in CS.

We had this module at uni, nobody turned up because it was boring.

https://www.reading.ac.uk/modules/document.aspx?modP=CS3SL16...


We had that at my uni as well, a mandatory module in the second year. From what I remember of it, the end-of-module test was a quiz that you could "common-sense" your way through.

While humans need to pay for shelter and food, ethics will always take a back seat. I've got no qualms with CSR being a legal procedure, but then there's the ironic problem of defining "ethical behaviour" without impeding civil liberties.


> We really need mandatory ethics courses in CS.

And a ritual like they have in Canada: https://en.wikipedia.org/wiki/Iron_Ring


One Canadian student engineering team recently won an engineering competition by creating a cell phone tracking system for an airport. The ethics are pretend, the engineering association only cares about punishing people for mistakes (at least, as I'm told by the seniors I've spoken to).


I graduated from a Canadian engineering undergraduate program in software engineering, and I have my Iron Ring. The ethics in the more established branches of engineering (chemical, electrical, mechanical) are definitely not pretend, and are more rigorous. Software engineering is considerably newer, so codifying the ethics there is definitely a work in progress. One thing that I will say is that the engineering ethics case studies I remember reading mostly related to the duty of care to the public vs. the employer’s interests. I don’t remember an engineering ethics case study that had the government, or a branch of it, pitted against the public good.

In addition, the engineering association is less inclined to protect ethical engineers after they’ve done the right thing as opposed to going after unethical engineers after they’ve been caught. On this issue, I personally think they have work to do, but the organization might consider that sort of action outside their mandate.


I suspect you’re far more informed than I am. I should not have spoken of engineering so broadly, I think what I said may have been very software specific


No problem. A lot more Canadian engineering undergraduates find the whole Iron Ring and ethics stuff to be some kind of Koolaid than I'd like to admit. I am interested in learning more about that team that won a prize for the cellphone-tracking solution at airports though: specifically, I'm curious as to how they came across the problem and how said problem was framed. Framing is one of the tools in which entities get people to build surveillance tools like PRISM and the like by appealing to those people's sense of patriotism. The other big tool is money, and enough of it.


I agree with you, but I don't think mandatory ethics courses will be enough. We need strong institutions which can enforce norms by having the power to kick people out (like bar associations can disbar lawyers).


+1 on this. It's pretty easy to ignore ethics when you're being paid orders of magnitude above the median national salary (I would imagine, anyway!). A course is unlikely to change that.

I'm not sure this is typical but at a UK University a few years back, a mandatory course was Social, Legal and Ethical Aspects of Computing. Make of that what you will!


I personally think ethical behaviour needs to be rewarded and unethical behaviour needs to be punished so that the cost-benefit analysis fits what we all want to see in the world. Otherwise, we get what we don’t want.


> Just because most people don't understand the implications of what they agree to,

That's what needs to be fixed for real, and not only in the technological world but everywhere. We sign SO MANY contracts without any knowledge of what they contain, and even if we wanted to, we simply couldn't understand most of them.

I think that for anything that implicates us (ourself or our information), it should be clearly stated beforehand in a manner that almost everyone can understand.

Android permissions are a good example of both good and bad practice. They tell you what the app can do, but at the same time they're too generic.

> We really need mandatory ethics courses in CS.

In Quebec, if you want to become a software engineer, you need to take the engineering classes, which do include an ethics class. That class is meaningless and is just there because of the corruption issues we previously had in engineering.

Not that it can't help, but I just feel it's easy for it to become meaningless too.

I think a class, most probably in high school, that teaches the implication of oversharing could help much more.

Sure, there's Siri, which could be harmful, but we often forget how Reddit and Hacker News can be even more harmful, without the product being malignant itself or having any of these techs implemented. Nowadays people share so much about themselves, and while each piece of information individually is perfectly fine, in conjunction they can become much more harmful. Doxing is awful, but often it happens with things that were posted willfully by the victim (I'm not victim shaming, just that I'm pretty sure that person wouldn't have shared all of this, or at least would have done it differently, if they had been aware of the risk).

Maybe it could be interesting to create a platform that lists what happens to our information on different platforms: what is shared or not, what is accessible or not, and what we have control over or not.

In the past I was amazed when I went to Google My Activity and saw how many times Google thought I was using the voice feature. It doesn't seem to be as bad nowadays, my most recent entry is nearly a year ago and was a legitimate request, but when I first found that history there were some things that were quite bad. One of them was a good portion of a private conversation with someone else. There was nothing dangerous, but it's definitely something to be aware of.


While I don't disagree with ethics courses (for everyone!), it's a bit like telling individuals to recycle more in order to solve climate change. Until meaningful regulations are put in place to punish companies for ethics & privacy violations (something more than a slap on the wrist, as is currently common), and until heads of companies and management are held accountable, companies will just keep moving forward and replace any workers who refuse to go along.


> We really need mandatory ethics courses in CS.

This is almost never going to happen because ethics=morals and there's no more middle ground to define what is right or wrong nowadays. You can see that already in every societal issue that divides people. Ethics courses would fall in the same pitfall: what is right or wrong depends on your system of values, and there's not much homogeneity in that realm now.


And yet, tons of universities manage to have ethics classes and even ethics majors.


It does not seem to be working very well, if that's the argument you want to make. A good demonstration of ethics at play: https://edition.cnn.com/2017/04/20/us/campus-free-speech-trn...


I completely agree, so much that I even started a company within the space and wrote a blog post the other week on how we need to reset our moral compass around data and tech: https://www.legalmonster.com/blog/it-s-time-to-reset-our-mor...


Next someone calls the police on a customer after overhearing child/spousal abuse. "I can't believe you think that's a bad thing"

Then the cops are called after drug deals are overheard, and the response is "Of course {large company} would call the police on criminals"

How long after that will "not having a smart home assistant" be viewed suspicious?


>We really need mandatory ethics courses in CS.

That's an odd conclusion. Is there any evidence that ethics courses helped in any other industry? I can't imagine Apple developers are unaware about privacy concerns because they weren't taught ethics in their undergrad.


I’d push back on the idea that the behavior of disregarding privacy is a tech-community thing. It’s there, but as a side effect of the social culture enabling it.

We’ve been normalized to let it go by social pressure to trade, travel, and be scanned at airports.

Why would the people in tech feel beholden to respecting a society’s privacy when the seat of that society’s power trained us to give it up in the name of “freedom”, and we went along?

This is all fallout from decades of giving in to special interests. You talk about how people were worried about always-on mic concerns. Well, folks have been writing for decades about the normalization of our behaviors to serve these corps, and about government undermining privacy in partnership with them. And it fed right into this behavior by tech corps.

This isn’t a tech industry problem. Corporate meddling in human society is bigger than just mics in phones.


I thought the response was that you’re already carrying a microphone with you everywhere.

Most people are using Siri on their smartphone.

Explaining why I have Alexa in my home should be less of an ordeal now.


What benefit would you expect from a mandatory ethics course?


You could ask the same thing about any course.


I feel like this time around, the new "clipper chip"-style push for mandatory backdoors will be seen the same way.


> We really need mandatory ethics courses in CS.

With you up to here. Ethics courses are neither a necessary nor a sufficient solution to the problem of an unethical industry.



