
A startup selling racial-profiling software shows how AI can be dangerous way before any robot apocalypse
https://qz.com/1286533/a-startup-selling-racial-profiling-software-shows-how-ai-can-be-dangerous-way-before-any-robot-apocalypse/
======
AndrewOMartin
It's not about intent at all. In many cases along the lines of "AI did
racial profiling", the effect was entirely unintentional: it was the
expression of a bias in the data, or a bias the developers never intended
from their point of view.
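
As a toy sketch of the mechanism (hypothetical data; NumPy and scikit-learn
assumed), here is how a model can end up encoding a bias nobody wrote into
it, purely because the historical labels it learns from contain that bias:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # One legitimate feature and one protected attribute (0 or 1).
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical outcomes depended on skill, but past decisions also
    # penalized group 1 -- the bias lives in the labels themselves.
    hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The learned weight on `group` comes out strongly negative: the
    # model reproduces the historical bias with no one intending it.
    print(model.coef_)

No developer typed "penalize group 1" anywhere; the model inferred that
rule from the data, which is exactly how unintentional profiling happens.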

~~~
apeace
Exactly. I wrote a relevant comment[1] on a different thread a while ago,
which I'll reproduce here for convenience:

The other day out of curiosity I was searching for basic words on ImageNet:
cat, dog, man, woman.

Here is the page for "woman": [http://image-net.org/search?q=woman](http://image-net.org/search?q=woman)

I was surprised to notice a couple of things:

\- There are categorizations for "mistress", "slut", and "adultress". The
closest analogs for "man" appear to be "playboy", "stud", and "pimp". There
is also a "foolish woman", and there doesn't appear to be an analog for men.

\- Look at the pictures for "mistress". Many if not most of them are just
women in their bathing suits or doing other normal things, like lying in a
bed with a man. The same is true for "slut". Does this mean AIs using ImageNet
could look at a woman on a beach and categorize her as a "slut"?

\- There is a category for "honest woman", which appears to be three pictures
of a woman standing next to a man.

I'm not saying anybody is at fault or that the people who put ImageNet
together did this intentionally. After all, it does reflect the (unfortunate)
reality of how our society often perceives images of women.

But before I poked through that dataset, it wouldn't have occurred to me how
important it is that we focus on equality and non-discrimination when it comes
to AI. This declaration[2] seems like an interesting step.

[1] [https://news.ycombinator.com/item?id=17103921](https://news.ycombinator.com/item?id=17103921)

[2] [https://www.accessnow.org/the-toronto-declaration-protecting...](https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/)
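
For what it's worth, ImageNet's categories are taken directly from WordNet
noun synsets, so the category structure described above can be inspected
without downloading a single image. A minimal sketch using NLTK (the exact
hyponym lists vary by WordNet version):

    import nltk
    nltk.download("wordnet", quiet=True)
    from nltk.corpus import wordnet as wn

    # ImageNet labels are WordNet noun synsets; list the sub-categories
    # ("hyponyms") that WordNet files under "woman" and "man".
    for name in ("woman.n.01", "man.n.01"):
        synset = wn.synset(name)
        print(f"\nHyponyms of {name}:")
        for hyp in synset.hyponyms():
            print(f"  {hyp.name()}: {hyp.definition()}")

Comparing the two lists side by side makes the asymmetry visible directly
in the underlying lexical database.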

~~~
weberc2
I don't know how much stock I would put in this being indicative of our
society's views on gender. Notably, you point out that there is no male analog
for "foolish woman", but we're all familiar with the foolish-man trope (Joey
from Friends, Phil from Modern Family, Tim from Home Improvement, Ron Burgundy
from Anchorman, etc.).

Note: I shouldn't have to point out that I'm _not_ saying our society is fair
and just with respect to gender, but I know a lot of people will try to twist
my words, so hopefully this disclaimer heads off a lot of unnecessary
tangents.

~~~
crooked-v
It makes me think less "failure in image classification", and more "implicit
bias in the WordNet database and/or failure to represent concepts that can't
be described succinctly in English".

~~~
weberc2
It could also be "failure to pick words that mean the thing the image
classifier learned". Also, the implicit bias in the WordNet database could be
_a different implicit bias than the one that exists in society_. Put simply,
we don't know, and we should avoid pat explanations and guard ourselves
against our own confirmation biases.

------
ikeyany
> “Ultimately, it’s about intent and we have a lot of faith in humanity,”
> [Zeiler] wrote. “It’s not our business to limit what developers create and
> we choose to believe that most people will use our Demographics for good.”

It would be less insulting to the reader's intelligence if he said outright
that racial profiling is just too lucrative a market to pass up.

------
ogennadi
> Privacy advocates like the American Civil Liberties Union already decry the
> use of facial-recognition AI in most cases, making the case that widespread
> adoption of the technology would mean we would live under constant
> surveillance by police or large tech companies. Ethnicity recognition is a
> step even further past the ACLU’s line in the sand.

