
MIT Researcher Exposing Bias in Facial Recognition Tech Triggers Amazon’s Wrath - jonbaer
https://www.insurancejournal.com/news/national/2019/04/08/523153.htm/
======
colllectorof
Models that are used for anything important should be explainable. That is,
you should be able to get a definitive answer as to why a particular result
was achieved _in any particular case_. If a model does not have this property,
it should not be used for anything critical.

Also, people should have the right to know when machine-learned models are
used to make decisions about their lives. They should be able to ask why a
particular decision was made and get that information.

This is _real_ AI ethics.
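
To make the demand concrete, here is a minimal sketch of what "a definitive
answer in any particular case" can look like, using a small interpretable
model (a scikit-learn decision tree; illustrative only, not what any vendor
actually ships):

```python
# Case-by-case explainability sketch: an interpretable model whose exact
# decision path for one input can be printed verbatim.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    data.data, data.target)

# Explain one particular case: walk the exact path the model took.
sample = data.data[80:81]
leaf = clf.apply(sample)[0]
for node in clf.decision_path(sample).indices:
    if node == leaf:
        print(f"-> predicted: {data.target_names[clf.predict(sample)[0]]}")
        break
    f, t = clf.tree_.feature[node], clf.tree_.threshold[node]
    op = "<=" if sample[0, f] <= t else ">"
    print(f"{data.feature_names[f]} = {sample[0, f]:.2f} {op} {t:.2f}")
```

Deep networks used for face analysis offer no comparably faithful per-case
trace, which is exactly the objection.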

~~~
Mirioron
Can you explain how you, as a human being, understand handwriting? Can you
explain it in detail without handwaving some stuff away as "now I detect the
pattern"? Because I'm pretty sure you can't, but you use it for basically
everything in life. So, should a person's eyes not be trusted when it comes to
"something important"?

~~~
colllectorof
_> Can you explain how you, as a human being, understand handwriting?_

We explain how to understand handwriting to every single child that goes to
school. It's not the answer you want, but it's the answer that actually
matters here.

Trying to equate AI and human cognition in this way is completely
disingenuous.

Human reasoning is not 100% reliable, but we know very well in which ways it's
unreliable and how to deal with it. We have shared biology and millennia of
experience trying to empathize and communicate with others.

~~~
discobot
We do not explain how to read handwriting; we teach by example - and that is
exactly how ML works.

And your last point is wrong. ML models are studied and understood much better
than human reasoning.
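
(For what it's worth, "teach by example" is literally the supervised-learning
setup. A toy sketch on scikit-learn's bundled handwritten-digits dataset,
purely to illustrate the claim:)

```python
# "Teaching by example": the model is never told rules for reading digits,
# only shown labeled examples. Toy illustration, not a production system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5_000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.1%}")
```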

~~~
colllectorof
_> We do not explain how to read handwriting, we teach by example - and that
is exactly how ML works._

Teaching children to read is an interactive process that has pretty much
_nothing_ in common with data steamrolling in modern machine learning.

 _> And your last point is wrong. ML models are studied and understood much
better than human reasoning._

Is that why new ANN architectures are almost universally constructed by trial
and error?

------
YeGoblynQueenne
>> Chris Adzima, senior information systems analyst for the Washington County
Sheriff’s Office in Oregon, said the agency uses Amazon’s Rekognition to
identify the most likely matches among its collection of roughly 350,000 mug
shots. But because a human makes the final decision, “the bias of that
computer system is not transferred over into any results or any action taken,”
Adzima said.

The gentleman is saying that the system selects a small subset of the 350,000
mugshots, but because a human selects one mugshot from this small subset,
there is no bias.

That just makes no sense.

[edited to remove stronger language]

~~~
Mirioron
It does make some sense. Yes, technically, the bias is still there, but it's
lower. Getting some accuracy in image recognition isn't that difficult, but
making it more and more accurate is: going from 50% accuracy to 51% shaves
only 2% off the error rate, while going from 98% to 99% means cutting the
remaining error in half.

Of course, with the error rates reported for certain groups, a human making
the final decision is not enough.

~~~
Retric
The problem is that a human would at best reduce, but not eliminate, the bias.

Let's assume the photo doesn't match anyone in the database. Now a human doing
the final step is essentially picking a random face from a biased sample.

Worse, people are really bad at base rates: whatever option they happen to be
considering feels far more likely than it actually is. Read up on a random
obscure disease and suddenly you start thinking it's a real risk. Sadly,
police do the same thing with criminal suspects, resulting in innocent people
in prison.
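
A quick Bayes calculation shows how badly this can go wrong (the accuracy
numbers are made up for illustration; only the 350,000 database size comes
from the article): even a matcher that is right 99% of the time produces
mostly false matches when the true match rate in the database is tiny.

```python
# Base-rate fallacy, sketched: matching one face against a 350,000-mugshot
# database. Sensitivity and false-positive rate are assumed, not measured.
prior = 1 / 350_000            # chance a given mugshot is the true match
sensitivity = 0.99             # P(flagged | true match)
false_positive_rate = 0.01     # P(flagged | not a match)

# Bayes' rule: P(true match | flagged)
p_flagged = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_flagged
print(f"P(true match | flagged) = {posterior:.4%}")   # ~0.03%
```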

------
torgian
"Her tests on software created by brand-name tech firms such as Amazon
uncovered much higher error rates in classifying the gender of darker-skinned
women than for lighter-skinned men."

Besides the article's obvious anti-man, anti-white bias, I'm not surprised
that facial recognition software has a hard time analyzing darker skinned
people.

With photography, I've always had a hard time photographing someone who had
dark skin. Lots of light needs to be used, and even then it needs to be
filtered correctly, etc. This goes for any subject that is dark.

Unfortunately, this is going to be a tough problem. Cameras used by law
enforcement and government agencies (which the article seems to focus on) are
normally pretty shitty; software can only do what it does with whatever input
it gets. So if the lighting and image quality are shitty, then your results
will be equally shitty.

The article doesn't go into what kind of equipment the MIT researcher was
using, but I will assume that it is a high-quality camera. If so, and if the
software is still failing as the article suggests, then yes, these companies
need to make their software better.

Even so, it's a crapshoot from the get-go, due to the hardware being used.

~~~
false_alarm
I did not notice an anti-man or anti-white bias in the article. What stuck out
to you in the article that seemed biased?

------
tangue
The issues go beyond face recognition and algorithms: _photography has a
long history of bias_ [1]. Beyond the issue of insufficient testing, I had to
ask myself: what if these algorithms rely on a process that was assumed to be
impartial but is in fact biased?

[1] [https://jezebel.com/the-truth-about-photography-and-brown-
sk...](https://jezebel.com/the-truth-about-photography-and-brown-
skin-1557656792)

~~~
creato
Can anyone explain exactly what is missing in photography due to "bias"? I'm
just an amateur photographer that developed film in high school and plays
around with RAW and Lightroom. I can't think of anything in the process that
isn't simply a matter of light sensitivity and dynamic range. Film and image
sensors don't understand high level semantic features like skin.

The hardest thing to take a photo of is a black-haired cat, and I don't think
that's because Adobe is biased against black cats.

~~~
tangue
Following the link I posted might be a good start.

~~~
creato
Do you really think Kodak didn't want to make a film that could capture dark
subjects until furniture companies asked for it?

The reason I commented is that the article makes a lot of complaints designed
to drum up outrage. As far as I know, all of the complaints are much better
explained by either 1) less light being harder to photograph, or 2)
photographing bright things and dark things at the same time being harder
than photographing just bright things or just dark things.

------
malshe
The article doesn't mention papers so I am linking them here:

Original study [PDF]:
[http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a...](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf)

New study [PDF]:
[https://www.thetalkingmachines.com/sites/default/files/2019-...](https://www.thetalkingmachines.com/sites/default/files/2019-02/aies-19_paper_223.pdf)

~~~
ulucs
I read the original paper. Wow, they could have tested for so many factors
impacting accuracy but they only looked at mean accuracy for each cluster.

~~~
DarkWiiPlayer
Haven't read the study myself yet, but that sentence alone makes me not want
to waste my time on it.

------
apta
> Her tests on software created by brand-name tech firms such as Amazon
> uncovered much higher error rates in classifying the gender of darker-
> skinned women than for lighter-skinned men.

What about lighter-skinned women? This seems to be phrased on purpose to
incite bias against white men.

~~~
jake-low
Here's the paper [0]. The answer to your question is in Tables 4 and 5 (page
9). To summarize, darker skinned women are misclassified far more often than
any other group, in all three of the tested facial recognition systems. Error
rates for lighter skinned women are between 5x and 10x lower, depending on the
system. Lighter skinned men have the lowest classification error rates by a
wide margin in the IBM and Microsoft systems, and are tied with darker skinned
men for lowest error rate in the Face++ system.

[0]:
[http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a...](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf)

------
tdaltonc
Beyond the very real intersectional problems highlighted...

The fact that the Amazon gender classifier misidentifies 7% of white females
as male, but 0% of white males as female, is very odd. That seems like a
JV-level bias-tuning mistake. That's not systemic violence so much as it is
just sloppy.
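
(One plausible source of exactly that asymmetry, offered as an assumption
rather than anything Amazon has confirmed, is a decision threshold tuned
off-center, so that one class absorbs nearly all the errors:)

```python
# Sketch: a slightly shifted decision threshold produces asymmetric errors.
# The score distributions and thresholds are assumptions, not Amazon's model.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical "maleness" scores: males centered at 0.7, females at 0.3.
male_scores = rng.normal(0.7, 0.1, 100_000)
female_scores = rng.normal(0.3, 0.1, 100_000)

for threshold in (0.50, 0.45):   # centered vs. nudged toward "male"
    female_as_male = (female_scores >= threshold).mean()
    male_as_female = (male_scores < threshold).mean()
    print(f"threshold={threshold}: "
          f"{female_as_male:.1%} of females read as male, "
          f"{male_as_female:.1%} of males read as female")
# At 0.45 the errors land almost entirely on one side: roughly 7% vs ~0.6%.
```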

~~~
nitwit005
Most of the time the headline for these articles seems like it should be "Crap
product also biased".

I have some suspicion they haven't focused on bringing down the error rates
for minority groups because they know even their best case isn't good enough.

~~~
tdaltonc
Check out the academic article.[0] It shows that Microsoft and IBM are way way
better.

[0] [http://www.aies-conference.com/wp-
content/uploads/2019/01/AI...](http://www.aies-conference.com/wp-
content/uploads/2019/01/AIES-19_paper_223.pdf)

------
username90
Where is the wrath? It seems like Amazon just hand-waved away her criticism;
they didn't really take any action against her.

------
externalreality
I heard about research like this about six years ago: the algorithms simply
weren't tested on dark-skinned people. Is this really something new? Product
developers in a wide variety of markets still neglect to take dark-skinned
people into account when developing their products. Why should it come as a
shock that facial recognition developers suffer from the same bias?

~~~
headalgorithm
From the article:

"Those disparities can sometimes be a matter of life or death: One recent
study of the computer vision systems that enable self-driving cars to “see”
the road shows they have a harder time detecting pedestrians with darker skin
tones."

Just think about the consequences of deploying such systems.

~~~
jfk13
I wonder if that's also true for human drivers. I certainly have a harder time
detecting pedestrians wearing darker clothing, at least in some circumstances.

~~~
skookumchuck
That's why road workers, cyclists, and joggers wear brightly colored
reflective vests.

------
salty_biscuits
Is this something simple like class imbalance in the training sets? It would
be pathetic if that were the case, because it is so easily fixed.
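
("Easily fixed" in the sense that the standard remedies are one-liners;
whether imbalance is actually the cause here is an open question. A minimal
scikit-learn sketch of the two usual fixes:)

```python
# Two standard remedies for class imbalance in a training set, sketched
# with scikit-learn. Illustrative only, not any vendor's actual pipeline.
import numpy as np
from sklearn.utils import resample
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 9_000 + [1] * 1_000)   # 9:1 imbalanced labels

# 1) Reweight the loss so the minority class counts proportionally more.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))   # {0: ~0.56, 1: 5.0}

# 2) Or oversample the minority class up to parity before training.
minority_idx = np.where(y == 1)[0]
upsampled_idx = resample(minority_idx, replace=True,
                         n_samples=9_000, random_state=0)
y_balanced = np.concatenate([y[y == 0], y[upsampled_idx]])
print(np.bincount(y_balanced))            # [9000 9000]
```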

------
jothezero
This isn't the first article about this kind of thing. Police departments can
do whatever they want to obtain leads (as long as it is legal, which is a
gray area). I will start worrying when they start using technology instead of
judges.

~~~
acdha
That seems like a shortsighted position because it ignores the cost of
selective enforcement. Everyone breaks laws on a daily basis - jaywalking,
speeding, etc. - and enforcing them more heavily against one group than
another imposes a severely disproportionate cost, even with every trial being
completely fair.

Here in DC there was an example a while back, prior to legalizing marijuana:
white people apparently used at a higher rate, but most of the prosecutions
were of black people, both due to heavier police presence and because
demographics meant that white users tended to have more privacy (limited
visibility from the street, more distance between houses and sidewalks making
the smell harder to notice, etc.), which made it harder to get evidence
clearly showing that a specific person had been the one using. The process
could be fair without changing the fact that the results disproportionately
impacted one group.

~~~
CryptoPunk
That's not "selective enforcement". Race is not being selected for. What is
being selected for is committing crimes in public, or committing crimes in
high-crime areas.

Some races happen to correlate with the selected traits more than others, but
race is not the trait being selected for.

It's completely predictable that not all traits of interest for law
enforcement will be distributed equally across all racial groups. To treat
this fact as a sign of systemic racism is to guarantee that you will consider
every society on Earth systemically rac/sex/[group] ist.

~~~
acdha
Selective enforcement isn't specific to racism - if a law primarily impacts
teenagers while other people break it and are penalized at far lower rates,
that's selective enforcement even if everyone is the same race.

This does commonly fall upon racial lines in countries like the United States
with a long history of racial discrimination but it’s not exclusive and it’s
important for anyone building systems to consider pitfalls like this because
we know the users are likely to assume that a computer is unbiased.

~~~
CryptoPunk
Disproportionate impact is not the same thing as "selective enforcement".
Selective enforcement means consciously choosing to enforce a law more often
when a particular group breaks it. It does not mean members of a particular
racial group disproportionately having the law enforced against them because
they disproportionately exhibit a particular non-racial trait that is
correlated with higher enforcement.

------
RenRav
It seems like every other month, some facial recognition system is being
attacked because of this.

------
treis
Does anyone have a link to the paper?

------
dosy
I love this: whether the conclusions are valid or not, it's using science to
troll big tech where they live. Ouch, right in the feels.

------
Thermolabile
What does this article have to do with racism, as opposed to a failure of the
recognition system? It is supposed to be a facial recognition system, yet it
fails on specific groups of people at a high rate. It has been brought to the
attention of the manufacturers and should be fixed.

~~~
swish_bob
And when the manufacturer says "nuh uh" like they did in this case?

------
deogeo
I sure feel privileged that Amazon's software can track and spy on me
effectively.

------
dna_polymerase
It is amazing what lengths these 'activists' go to in order to make sure
Amazon's product works better. Do they know that this stuff will largely be
used for surveillance, military purposes, and other nasty things (like
exposing adult performers on social media)?

If people plan on spending time on activism, there are plenty of real issues
with Amazon.

~~~
omegaworks
Well, when Boston Dynamics' dog robot walks into your house with a gun on its
back, it's about making sure it knows how to differentiate you from the bad
guy it's looking for.

We don't want autonomous killer bots to not be able to tell black people apart
just because their creators can't.

~~~
modriano
America would have to become a drastically worse place for any law enforcement
agency to deploy an autonomous lethal force that had less than 100%
identification accuracy.

~~~
int_19h
It already deploys a manned lethal force with far less than 100%
identification accuracy, so what's the fundamental difference? The arguments
in court would be the same - "an honest mistake", "an incident due to
unfortunate circumstances", etc. - wrapped up with a claim that if such broad
latitude to make "mistakes" is not given, then society will descend into
lawless anarchy. They would just be applied to the people who deploy the
robots instead of the people pulling the trigger.

Alternatively, and more likely, measures would be taken to make it not
autonomous on paper, e.g. requiring a human operator to approve any action the
robot is intending to take. In practice, this would likely be one of those
"moral crumple zones" with little practical meaning.

