>Alex Hanna, the Google ethicist who criticized the NeurIPS speech-to-face paper, told me, over the phone, that she had four objections to the project. First, ... Second, ... Third, ... Finally, the system could be used for surveillance.
I'm curious about this. As I understand it, an ethicist is objecting to the publication of software that could be used for surveillance. If that is correct, does it follow that this ethicist would object to all software that could be used for surveillance? If so:
1. How does one distinguish between software that can be used for surveillance and software that cannot?
2. Presumably the ethicist would object to the dissemination of software that can be used for surveillance whether published openly or sold. If so, would they not object to the sale of iPhones, as iPhones contain software that can record video, and therefore surveil people?
It seems like whenever morality or ethics come into play, the slippery slopes start popping up like weeds.
There are very few, if any, clear-cut, night-and-day subjects in ethics. Not wanting to engage with ethics because slippery slopes exist is simply turning away from incredibly important work because it's difficult and not work you want to do.
These technologies can incarcerate people. It’s really important to dive onto the slippery slope and find out where it’s stable.
The simplest criterion is whether the software records data that models the real world. If so, it can be used for surveillance by entering data about real people.
So most useful record-keeping software can be used for surveillance. Most software capable of inferring data about the real world from sensors can also be used for surveillance.
Most likely a better ethical framework for talking about surveillance is "how can surveillance (its components, owners, operators, and consumers) be identified in practice and what are its effects?" Surveillance is a subset of knowledge, and most knowledge is both useful and amoral. The use of knowledge for unethical things is the problem, not the knowledge itself.
Names, for example, are probably the oldest surveillance technology, used since prehistory to refer to specific real people and to data about them, such as the locations they reside in or visit. No one blames names for the effects of surveillance.
Wikipedia: "Surveillance is the monitoring of behavior, activities, or information for the purpose of information gathering, influencing, managing or directing."
I think most would agree that this is the standard definition of surveillance. I also think the concept of being surveilled comes naturally to most people, which is why parties that surveil people make a clear effort to obfuscate the fact that they do so.
The unstated implication here is that ethicists either don't comment on such things (and are therefore hypocrites) or that their critique is so broad as to subsume all utility (and is therefore excessive).
Rather than throw up hypotheticals which this ethicist is probably not around to refute, I think it'd be better to look at the person's body of work in this area and, if you think they have some sort of intellectual blind spot or bias, point out how you think that might detract from their analysis.
Well, none of the publication titles on this page contain the word "surveillance". I was able to find an interview (https://medium.com/ethics-models/interview-with-dr-alex-hann...) where she says we're "embedded in a system of racial surveillance capitalism", but doesn't really discuss it or mention any specific times when bad AI caused people to be wrongly surveilled. If you know of any specific explanation that she's provided, I'd be interested to read it.
But the most likely explanation IMO is that Hanna isn't using the term "surveillance" as a pointer to any concrete idea about what will go wrong. It seems to be just an all-purpose explanation for why AI models need to respect her sensibilities.
I didn't mean to imply anything other than what I wrote.
I actually mean to ask the exact question: How does one distinguish between software that can be used for surveillance and software that cannot?
To be more specific, if an ethicist objects to the publication of software that can be used for surveillance, they are assuming that the software can be used for surveillance. How did they arrive at that conclusion?
Of the 16 linked papers/book chapters, a few were not freely accessible.
Of the ones that were accessible, 3 contain the string "surveil" in the body and 1 contains it in the bibliography. None of the references were helpful unfortunately.
Do you have any other links that may answer the question?
Without making any kind of statement of whether I agree with this ethical framework, I think it's pretty straightforward to assign more or less blame depending on the specialization of the technology for the purpose of doing something evil. In an ethical framework where manufacturers could be culpable for the uses of their products, iron refiners would have specks of micro-sins on the 0.001% level, whereas torture rack manufacturers would be about as guilty as torture rack users.
The result that most of technological society would carry a small amount of ambient guilt is not entirely absurd; the idea that industry is not sharply but diffusely and slightly evil is common in fiction and in many people's intuition.
In US society/law, it seems that the vast majority of blame is ascribed to the entity who last causes an event to occur. Consider the blame in the following sequence:
1. Miner mines iron
2. Manufacturer makes knife
3. Retailer sells knife to person A
4. Person A kills person B with the knife
I think most people living in the US would be OK with the general idea that person A should be held responsible for killing person B, and not the miner, manufacturer, or retailer. Perhaps I'm wrong.
Would you consider the knife retailer equivalent to the torture rack manufacturer in your example?
> Would you consider the knife retailer equivalent to the torture rack manufacturer in your example?
Not GP, but doesn't

>> I think it's pretty straightforward to assign more or less blame depending on the specialization of the technology for the purpose of doing something evil

answer that question?
I guess I'm saying that it doesn't seem so straightforward to me.
I think that "depending on the specialization of the technology for the purpose of doing something evil" merely shifts the question to multiple questions:
1. What does "specialization of the technology" mean?
2. Can technologies have multiple specializations?
3. What does evil mean?
4. Even if we had a perfect translation between "specialization" and "evil," if a technology has multiple specializations, which controls? The most evil? Does this require us to predict which specializations are most likely to be used?
I'm not sure what you're criticising here. Would you prefer that ethicists at Google don't criticise surveillance? Or do you prefer that Google not hire ethicists?
An ethicist criticizing surveillance yet taking a paycheck from arguably one of the largest (non-Chinese) surveillance machines in the world screams conflict of interest.
It's far worse than irony. Google and FB hire AI ethics researchers as a way of laundering their reputation. IMO selling AI Ethics street cred to an ad tech company is a lot more problematic than any particular NeurIPS paper.
1. Identifying the tools available for surveillance is vital to discovering, understanding, and predicting surveillance practices.
2. I instead presume ethicists object to the unethical surveillance of users by any means, and warn of risks they find or foresee.
There is no slippery slope; the unethical effects of surveillance should be countered by effective and ethical means. Banning technology is not ethical or particularly effective. Banning and punishing unethical behaviors is about the best we can do.
In other words, don't ban iPhones. Ban configuring iPhones to surveil users in unethical ways. 911/999 geolocation is ethical surveillance. Stalking apps are unethical surveillance. There's an ethical grey area in between.
> 1. How does one distinguish between software that can be used for surveillance and software that cannot?
What is the killer app for the technology under discussion? If the killer app is improving camera filters to make more vibrant pictures, but the technology can also be used for surveillance, then you don't worry about it being used for surveillance. If, however, the killer app is surveillance, then you worry about it.
As a general rule, though, surveillance is not the killer app of these technologies; the killer app is the more general one of privacy violation, of which surveillance is just one instance.
>In this paper, we study the task of reconstructing a facial image of a person from a short audio recording of that person speaking.
I can imagine a potentially interesting application of this that has nothing to do with privacy violation or surveillance (that I can see):
What if we could take an audio recording of an ancestor or someone else of interest, for whom we have no pictures, and output a probable face for that person? Sounds interesting and potentially "killer" to me. Maybe I'm wrong.
Generally, the killer app for a technology (as I understand the use of the term) is something that is going to earn the creator of the technology lots of money: basically everyone, whether consumers or the members of some specific industry, is going to want that technology and will be willing to pay enough money that the owner of the technology becomes rich.
I can see every police department and government agency paying plenty of money for being able to identify the speaker of a particular sentence in a meeting.
I can see the company with this technology providing services for academic research of the interesting kind you mention for the tax write-offs and also a sort of moral cover, but there is really only one thing I see as their main revenue stream.
Thinking this through a bit more: as I understand it, many published ML papers detail how to train a model and how to predict from it, and sometimes also release a model already trained on data, which can take new data and predict on it.
I wonder if there is an example of someone objecting to a paper that released just the algorithms, i.e., neither the training data nor a trained model that can predict.
The point I'm trying to get at is: it seems that most ethical objections are to the result of applying algorithms to ("bad") data, not the algorithms themselves.
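To make the distinction concrete, here's a minimal sketch of those three artifacts, using a hypothetical scikit-learn-style workflow of my own (nothing from the paper under discussion): the training procedure and the prediction code are "just the algorithms," while the fitted model is the artifact that actually carries information derived from data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # 1. The "algorithm": a training procedure anyone could rerun on their own data.
    def train(X, y):
        model = LogisticRegression()
        model.fit(X, y)
        return model

    # 2. Prediction code: inert on its own; it needs a trained model and real-world
    #    data before it can be used for anything, surveillance included.
    def predict(model, X_new):
        return model.predict(X_new)

    # 3. A trained model: the artifact fitted to some (possibly sensitive) dataset,
    #    which is where most ethical objections seem to attach.
    X = np.array([[0.0], [1.0], [2.0], [3.0]])  # toy stand-in for real training data
    y = np.array([0, 0, 1, 1])
    trained = train(X, y)
    print(predict(trained, np.array([[1.5]])))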