
Artificial Intelligence's White Guy Problem - Chinjut
http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html
======
bonobo3000
IMO, no one wants to acknowledge simple facts for fear of retribution or being
labelled racist; instead we keep dancing around this issue forever, getting
more and more ridiculous.

Also, trust me, I can talk about this because I'm not white...

Fact - people prefer people like them. Not even consciously; this is basic
shit hardwired into us. I don't blame white men for being subconsciously
biased toward hiring white men; literally any other group would do the same. Sure,
we can try to fight that bias, but it's not at all evil or wrong to have that
bias, only natural.

Fact - taking an "agnostic" approach the way science does, of course the
algorithms will reflect "biases". If men are statistically more likely to be
programmers, or black people are more likely to commit crimes (STATISTICALLY),
then the algorithm will pick that up. They are biases, sure, but also
statistical realities.

Now we can debate whether we should actively engineer algorithms to fight
these "biases" on a case-by-case basis (for example, focusing more on women
might be a win if you can find talent no one else can), but there's no reason
to start pointing fingers at the "evil white guys" on top who planned this
from the very beginning... it's just more stereotyping.

Hypothesis - she wrote this crap to gain publicity.

~~~
sangnoir
> if men are statistically more likely to be programmers, or black people are
> more likely to commit crimes (STATISTICALLY),

I think you meant black people are more likely _to be convicted of_ crimes. The
problem with crime 'statistics' is that on the surface it all seems coldly
scientific, yet the numbers are generated and derived via very biased, very human,
very unscientific processes - there is a lot of bad data. The ACLU did
research showing that there is no statistically significant difference in
weed possession between white and black people, yet more black people
are convicted[1] of possession.
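
To make the selection-bias point concrete, here's a toy simulation (entirely
made-up numbers of my own, nothing from the ACLU report): both groups possess
at the same rate, one group gets searched five times as often, and the
resulting 'conviction statistics' come out skewed 5:1 anyway.

    import random

    random.seed(0)

    POPULATION = 100_000                 # hypothetical city, split evenly
    USAGE_RATE = 0.12                    # identical true rate of possession
    STOP_RATE = {"A": 0.02, "B": 0.10}   # group B is stopped/searched 5x as often

    convictions = {"A": 0, "B": 0}
    for group in ("A", "B"):
        for _ in range(POPULATION // 2):
            uses = random.random() < USAGE_RATE           # same behaviour
            stopped = random.random() < STOP_RATE[group]  # unequal enforcement
            if uses and stopped:
                convictions[group] += 1

    print(convictions)  # roughly {'A': 120, 'B': 600} - a 5:1 gap from policing alone

Any model trained on those conviction counts learns the enforcement pattern,
not the underlying behaviour.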

Here's a thought experiment: after watching this YouTube video[2], how skewed do
you think the statistics for white female criminals (bike thieves) vs black
criminals would be?

1. [https://www.aclu.org/files/assets/aclu-thewaronmarijuana-rel2.pdf](https://www.aclu.org/files/assets/aclu-thewaronmarijuana-rel2.pdf)

2. [https://www.youtube.com/watch?v=ge7i60GuNRg](https://www.youtube.com/watch?v=ge7i60GuNRg)

~~~
loop2
If that's the case, there must be a hell of a lot of unconvicted white
murderers to make up for the 7:1 disproportion.

------
Animats
The last person I met from Google's machine learning group in search was
female and Chinese. The big name behind machine learning is Andrew Yan-Tak Ng;
he was a professor at Stanford and is Chief Scientist at Baidu now.

They're complaining about Nikon cameras not recognizing Asian faces properly,
and this is discrimination? Nikon is a Japanese company. Headquarters is in
Tokyo. The CEO is Kazuo Ushida.

~~~
adam419
My experience as well. I'm from Seattle and have met (at my office and others)
an overwhelmingly large number of Asian people, and especially Asian women, working
in data science and machine learning.

------
sbierwagen

    But similar errors have emerged in Nikon’s camera
    software, which misread images of Asian people as blinking,

Ah, Nikon, that famous White Guy company. Founded by white guys in Tokyo in
1917, and headed by notable white men Makoto Kimura and Kazuo Ushida.

~~~
tim333
I think their point was the training data may have had a lot of white guys. I
don't know what Nikon used, but if they just googled the web for images they'd
probably end up with quite a lot of white subjects.

------
adam419
The reason why this is garbage is that there is not a single naturally
occurring domain in society in which groups can be found to be represented
equally. I'm pretty sure this is true in nature as well. So one can, literally
at their sole discretion, analyze any area of life and make statements like
"systemically oppressed this", "unequally represented that". It's like staring
at an ink blot and being asked what you see.

I have the nagging sensation that if it were up to today's hyper-sensitized media
to decide how society should look and function, we would all be grey globs in a grey
world, devoid of any differences.

The more depressing truth? Striking these chords is an absolute goldmine for
ratings and clicks. Everyone is naturally curious about how they might be
currently oppressed or disadvantaged; it plays to our instinctual tribalism.

So please, realize you can do anything you want in this world, and don't be
seduced by hate and bitterness from some writer sitting in Soho who has a
click-quota to meet this month.

------
apozem
Lord, the comments here are as bad as I feared.

This article makes a perfectly valid point: AI is only as good as the data you
use to train it. If you feed it bad, biased data, then the AI will behave in
bad, biased ways.
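
As a toy illustration of that (a made-up sketch using scikit-learn, nothing to
do with any real product): train a classifier on a sample that is 95% one
group, and the errors concentrate on the group that was barely represented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Two classes per group; the groups differ only by a feature shift."""
        x0 = rng.normal(0.0 + shift, 1.0, (n, 2))   # class 0
        x1 = rng.normal(2.0 + shift, 1.0, (n, 2))   # class 1
        return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

    # Training data: 95% group A, 5% group B - the "scraped off the web" skew
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=3.0)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Balanced held-out data for each group
    for name, shift in [("group A", 0.0), ("group B", 3.0)]:
        Xt, yt = make_group(1000, shift)
        print(name, "accuracy:", round(model.score(Xt, yt), 2))
    # Typically prints roughly 0.9 for group A and close to a coin flip for group B

Nobody wrote "be worse for group B" anywhere; the skew in the training sample
does it on its own.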

These biases can be major (no Amazon delivery to black neighborhoods) or
minor. I'm reminded of a gaming podcast I heard (can't remember which one)
where a guy recounted watching a female journalist try VR goggles that
couldn't detect her eyes because she had mascara. Apparently no one making the
headset had tested the effects of that kind of makeup.

The article is right. If we are serious about creating products that
revolutionize everyone's lives, we need to involve more kinds of people. Our
perspectives are limited. We can't understand everything. That's the point of
having a diverse team. Like Ben Thompson says, there's a very strong business
case for diversity because "You don't know what you don't know."

~~~
Joeboy
While I agree that the article raises an important point, I don't really see
how more diverse development teams would have fixed any of the problems
raised.

~~~
voiceinthewoods
There are a number of evidence-based case studies, reports, and other
scientific papers that support the assertions in the op-ed. I'd recommend
reading them before jumping to contrary conclusions. See, e.g.,
[https://www.whitehouse.gov/blog/2016/05/04/big-risks-big-opportunities-intersection-big-data-and-civil-rights](https://www.whitehouse.gov/blog/2016/05/04/big-risks-big-opportunities-intersection-big-data-and-civil-rights);
[https://bigdata.fairness.io/](https://bigdata.fairness.io/).

------
jimmywanger
This article falls into the same fallacy as a lot of more postmodern social
science articles.

The whole point of AI and machine learning is to find things that are not
immediately obvious but backed up by the data.

The author of this article is suggesting that if the conclusions of this
research are politically unfavorable, then there has to be bias/racism/sexism
somewhere, even when the process itself is race/gender/socioeconomic agnostic.

Which runs immediately counter to both machine learning and research in
general. "Dang it, run the numbers until it supports the conclusion I
support."

It's like the author doesn't understand the basic premise of machine learning
or research. If the datasets are restricted in some way unfairly, that's
something to be looked at. Algorithms are generally fair and unbiased.

------
pram
Should science in general not be attempted until the scientists performing the
research are at adequate diversity levels?

~~~
noir_lord
It'd be hilarious if the double-slit experiment only worked when white males
were watching.

------
zone411
"In the United States, this could result in more surveillance in traditionally
poorer, nonwhite neighborhoods, while wealthy, whiter neighborhoods are
scrutinized even less."

More policing reduces crime. The author seems to think that people living in
these poor, nonwhite neighborhoods would rather see police resources go to
wealthy, white neighborhoods. But studies show that minorities and people
living in high-crime neighborhoods mostly do approve of the police. There is a lot
of cognitive dissonance here: is crime reduction through more policing in poor,
nonwhite neighborhoods the right goal, despite sometimes-justified skepticism of
the police, or not?

[http://www.theatlantic.com/national/archive/2015/02/more-police-managed-more-effectively-really-can-reduce-crime/385390/](http://www.theatlantic.com/national/archive/2015/02/more-police-managed-more-effectively-really-can-reduce-crime/385390/)

[https://www.ncjrs.gov/pdffiles1/nij/197925.pdf](https://www.ncjrs.gov/pdffiles1/nij/197925.pdf)

------
swiftisthebest
It makes the world worse to try to "include" everyone. There's a reason we
have grades and exclusivity.

Equality is about opportunity, not results.

------
maxander
It's a combination of two separate problems: the skewed demographics of
software engineers, and the continued need for an intuitive understanding
of the problem an AI is being developed to solve. If you're building an AI system
to help police predict crimes, you want a detailed enough
understanding of the places it's going to be applied to know what the impact of
its predictions would be and how to shape the system to actually produce a
desired result. As with UX design, going from a specification to a logically
valid implementation isn't necessarily going to produce results people
actually want.

It's much the same way that, say, startups trying to market products to new
mothers or high-school students have a hard time because they can't "eat their
own dogfood" if they're all late-twenty-something white men. Huge markets lie
unserved, I'm sure, because they're demographics that don't tend to produce
software engineers or entrepreneurs.

------
piotrjurkiewicz
> The reason those predictions are so skewed is still unknown

Well, I bet the reason is the historical recidivism statistics that were
presumably used to train these algorithms.
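
A toy sketch of that mechanism (my own made-up numbers, not the actual
software): if the main signal in the historical records is a per-group base
rate, the "trained" model just hands that base rate back to every individual
in the group.

    from collections import Counter

    # Hypothetical historical records: (group, reoffended) pairs. Made-up
    # numbers; the point is only that the data already encodes whatever
    # enforcement pattern produced it.
    history = ([("A", False)] * 800 + [("A", True)] * 200
               + [("B", False)] * 500 + [("B", True)] * 500)

    totals = Counter(group for group, _ in history)
    reoffended = Counter(group for group, r in history if r)

    # "Training" the simplest possible risk model: memorize group base rates
    risk = {group: reoffended[group] / totals[group] for group in totals}

    print(risk)  # {'A': 0.2, 'B': 0.5}
    # Every person from group B now scores 0.5 regardless of their own record;
    # if the historical labels reflect policing intensity rather than behaviour,
    # the model reproduces that skew and calls it a prediction.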

But hey, this is the NYT; they cannot simply acknowledge the existence of
differences in crime statistics between races, so 'the reason remains
unknown'. Hilarious.

------
dracht
The demonization of "white guys" in tech is getting rather appalling.

------
tclover
god damn white guys, why you have to be so racist

~~~
back_beyond
Hell, we can't help it! The definition keeps changing!

------
tim333
They need some political correctness network training for their AI systems.

------
exstudent2
Other HN thread on the same story by Bloomberg:

[https://news.ycombinator.com/item?id=11980557](https://news.ycombinator.com/item?id=11980557)

At least this article has a _slightly_ less hostile title, but the fact
remains: it's not white men's fault that they have been innovative and early
in the field of AI. If one absolutely must drag identity politics into AI, it
would be more accurate to say that women should participate more, and that's
on them. Not white men.

Regardless, I don't see how being sexist and racist (as these articles are)
helps anything.

~~~
morninj
Perhaps there are systemic barriers that make it more difficult for women than
men to participate in the AI field.

~~~
exstudent2
The beauty of computer science is that no one can stop you from doing it.
Anyone who can afford a computer can excel in the field if they put their mind
to it. If you want to talk about helping people who can't afford computers,
that would be worthwhile. That's definitely not a gendered discussion though.

EDIT: I fully agree with WalterBright below but have been banned by HN from
continuing my conversation in this thread for daring to suggest insulting
white men might be racist and sexist.

~~~
WalterBright
[https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...](https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog)

In the D language community, we have many strong contributors whom we actually
know nothing about, other than the online personas they create for
themselves.

We don't know their age, race, gender, religion, nationality, politics,
nothing.

It's as close to a pure meritocracy as is probably humanly achievable.

