
The Diversity Crisis in AI, and Fast.ai Diversity Fellowship - Crys
http://www.fast.ai/2016/10/09/diversity-in-ai/
======
seabird
There seems to be a fundamental disconnect between the author's understanding
of bias in AI and its actual causes.

Google's "Gorilla" incident was a result of bad training data. Recidivism
scoring reflects several unfortunate truths that crime data will lead you to;
no racism or bias in the algorithm is required. Nikon cameras detecting Asian
faces as blinking happens for obvious reasons; claiming racism or bias is
especially absurd in that case. Microsoft's chat bot is the result of
intentionally targeted training data designed to produce a certain (unsavory)
result. Google's language algorithm reflects the bias of language and its
usage, not a bias in the algorithm itself.
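To make that last point concrete, here's a toy sketch (the corpus is entirely made up for illustration): an "algorithm" that does nothing but count co-occurrences will reproduce whatever associations its input text contains, because the skew lives in the data, not in the counting procedure.

```python
# Toy illustration with a hypothetical corpus: a neutral co-occurrence
# counter surfaces whatever associations the text already contains.
from collections import Counter

corpus = [
    "she is a nurse", "she is a nurse", "he is a doctor",
    "he is a doctor", "she is a doctor", "he is a nurse",
]

# Count which pronoun each occupation appears with.
assoc = Counter()
for sentence in corpus:
    words = sentence.split()
    pronoun, occupation = words[0], words[-1]
    assoc[(occupation, pronoun)] += 1

# The counting logic treats every pair identically; the lopsided
# counts below come entirely from the corpus.
print(assoc[("nurse", "she")], assoc[("nurse", "he")])    # 2 1
print(assoc[("doctor", "he")], assoc[("doctor", "she")])  # 2 1
```

Swap in a balanced corpus and the same code yields balanced counts, which is the distinction between bias in the data and bias in the algorithm.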

I will be glad to accept any example of bias arising in an algorithm itself
(rather than in the underlying data). Until then, the underlying argument
behind these diversity pushes needs work to be truly compelling.

