
Automatic Racial Discrimination via Artificial Intelligence - Neverbolt
https://arxiv.org/abs/1611.04135
======
ctoth
I am a little curious why the title of this submission, "Automated Inference
on Criminality using Face Images" was altered to the current, which is
"Automatic Racial Discrimination via Artificial Intelligence", especially when
the abstract of the paper explicitly mentions "...using facial images of 1856
real persons controlled for race, gender, age and facial expressions..." This
seems to be needless politicization of the topic, designed specifically to
suppress discussion. Unfortunate, as this topic seems rather interesting, and
the finding that criminals and noncriminals divide so easily into distinct,
detectable groups would be interesting to discuss here.

~~~
alejohausner
I kind of sympathize with the author's snarky change of title, but I would
have used "class prejudice" rather than "racial discrimination". The paper
reminds me of Victorian "research" showing that criminals have differently
shaped crania from "normal" people.

Perhaps all they're detecting is that the criminal faces are mugshots, and the
subjects are stressed and unhappy. They look upset. The noncriminals are all
wearing a suit and tie, and look relaxed.

I am certain that the research is spurious.

------
Impossible
The title is misleading, although I get that this is dangerous: trained on the
right (wrong?...) dataset, it could become an automatic racial profiler. The
paper is written by Chinese researchers at Chinese universities, and all of
the pictures shown of people in the dataset appear to be Chinese or at least
"Asian" race in American racial categories. This makes it hard to argue their
technique will automatically have racial bias, although I imagine running it
on a set of US criminals could disproportionately flag black or latino people
as criminals. The core idea of classifying people as criminals based on facial
structure is equally gross without the racial component though.

~~~
ctoth
I think we're just stuck in this little echo chamber as there's no way this
post will catch any interest 4 hours later with only 7 points, which is
precisely what I suspect the person who changed the title was aiming for. :(

------
bendbro
Hilarious, but wouldn't it be useless when the population you use it on is
mostly not criminal?

If I had a population that was 90% non-criminal, and the accuracy of the face
predictor were something like 75% (on both non-criminals and criminals),
wouldn't my predictor that always outputs "non-criminal" almost always be more
accurate?
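A quick sketch of the base-rate arithmetic the comment is gesturing at,
assuming (as the comment does, hypothetically) a 90/10 population split and a
classifier that is 75% accurate on each class:

```python
# Hypothetical numbers from the comment above: 90% of the population is
# non-criminal, and the face predictor is 75% accurate on both classes.
p_criminal = 0.10
p_noncriminal = 0.90
sensitivity = 0.75   # P(predict "criminal" | actually criminal)
specificity = 0.75   # P(predict "non-criminal" | actually non-criminal)

# Overall accuracy of the 75% classifier on this population:
acc_classifier = p_criminal * sensitivity + p_noncriminal * specificity

# Overall accuracy of the trivial predictor that always says "non-criminal":
acc_trivial = p_noncriminal

# Precision: of the people the classifier flags as criminal,
# what fraction actually are?
flagged = p_criminal * sensitivity + p_noncriminal * (1 - specificity)
precision = p_criminal * sensitivity / flagged

print(acc_classifier)  # 0.75
print(acc_trivial)     # 0.9  -> the trivial predictor wins on accuracy
print(precision)       # 0.25 -> only 1 in 4 flagged people is a criminal
```

So yes: on raw accuracy the always-"non-criminal" predictor beats the 75%
classifier, and the classifier's positive predictions would be wrong three
times out of four.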

