
Federal study confirms racial bias of many facial-recognition systems - longdefeat
https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
======
shadowprofile77
The points that this article and many of the comments here seem to make about
the dangers of these racially skewed mistakes miss the more fundamental
problem of this technology being used ever more widely in the first place.
Facial recognition will inevitably get better (almost certainly to the point
of nearly eliminating mistakes based on a person's ethnic makeup)
and that's what's really terrible, because it hints at the sort of constant
surveillance future we have coming toward us. If anything, it's almost
comforting to know that right now, at the very least, these algorithms don't
yet work well enough to avoid politically unfortunate errors, because the
political consequences of these errors slow down the scenario of total, nearly
error-free ubiquity that anyone who even slightly values privacy should be
very worried about.

~~~
raxxorrax
The inherent problem facial recognition and similar systems have is that they
just rely on an extremely narrow context. But I agree that it is not
constructive to point towards bias (in many discussions, actually), since
that just normalizes the presence of the technology itself. Suddenly the
technology only needs to solve the problem of improving some statistics
instead of justifying the deployment of cameras everywhere in the first
place. So the criticism is a particularly bad one that fails to ask the hard
questions.

------
daenz
The article seems to hang on the false positives being higher for certain
minorities, but makes no mention of the false negatives being higher as well.

So while some ethnic minorities might be more likely to match incorrectly to a
known person of interest, they're also more likely to be let through if they
are indeed that person. I think that the first case is definitely more
damaging on the whole, but I still find it misleading to not mention the
specific scope of the "racial bias."

~~~
Barrin92
it doesn't seem relevant. In a country like the US the law pursues criminal
individuals, not criminal groups. The fact that the system also produces
false negatives doesn't somehow 'balance out' the false positives; it's not
as if the point of these systems is to randomly catch 100 African-Americans.
(Although given the political ideologies of some of the people in the
surveillance tech space, maybe it is.)

The fact that it produces errors in both directions just makes the system even
worse in total.

~~~
AnthonyMouse
> it doesn't seem relevant.

The reason it's relevant is that false positives are always a trade-off
against false negatives, so the "easiest" way to reduce false positives is to
increase false negatives. But false negatives are also very bad, so we can't
really do that, or we're just trading one problem for another instead of
actually solving anything.
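
To make the trade-off concrete, here is a minimal sketch (hypothetical score
distributions and thresholds, not any vendor's real numbers) of how moving a
single match threshold converts one kind of error into the other:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical similarity scores: higher means "more likely a match".
    genuine = rng.normal(0.7, 0.1, 10_000)   # scores for true-match pairs
    impostor = rng.normal(0.4, 0.1, 10_000)  # scores for non-match pairs

    for threshold in (0.45, 0.55, 0.65):
        fnr = np.mean(genuine < threshold)    # true matches rejected
        fpr = np.mean(impostor >= threshold)  # non-matches accepted
        print(f"threshold={threshold:.2f}  FNR={fnr:.3f}  FPR={fpr:.3f}")

Raising the threshold drives the false-positive rate down and the
false-negative rate up; the only way to lower both is better separation
between the two score distributions, i.e. a more accurate system.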

The only real solution is to improve the overall accuracy of the system, but
that's easier said than done. Some of the main sources of the inaccuracy are
intrinsic. The population in question is a smaller proportion of the general
population, so there is less training data available. And darker skin
absorbs more of the incident light and reflects less, which gives the camera
less signal to work with when distinguishing faces.

So the problem isn't some racist schmuck who just needs to be fired and the
system will get more accurate, it's a consequence of demographics and physics.
There may be a solution somebody can find, but there isn't guaranteed to be,
and in the meantime there's not a whole lot you can do other than trying to
find one.

Or maybe discontinuing the system entirely.

~~~
jdavis703
Why not change the training data to represent each group equally? Officially
the US has six “races” according to the Census Bureau, so one option is to
make each race ~16% of the training data.

Some might object to the decreased accuracy this gives whites — but if we’re
going to have any hope of preventing existing systemic inequalities from being
encoded in AI systems, we’ll have to socially engineer the data sets.
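
For the narrow resampling part of that, a minimal sketch (the helper name,
group labels, and counts are all made up for illustration):

    import random
    from collections import defaultdict

    def balance_by_group(samples, group_of, per_group, seed=0):
        # Group the samples, then draw the same number from every group.
        rng = random.Random(seed)
        groups = defaultdict(list)
        for s in samples:
            groups[group_of(s)].append(s)
        balanced = []
        for members in groups.values():
            if len(members) >= per_group:
                balanced.extend(rng.sample(members, per_group))    # downsample
            else:
                balanced.extend(rng.choices(members, k=per_group)) # upsample
        rng.shuffle(balanced)
        return balanced

One caveat, though: for an underrepresented group, sampling with replacement
only duplicates the faces you already have; it adds no new information, which
is exactly the less-training-data problem raised upthread.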

~~~
zo1
>"Some might object to the decreased accuracy this gives whites — but if we’re
going to have any hope of preventing existing systemic inequalities from being
encoded in AI systems, we’ll have to socially engineer the data sets. "

So to fix systemic inequalities for some groups, we're going to increase
systemic inequality for certain other groups?

Definitely a step in the wrong direction, as it's just going to swing hatred
back the other way. And N levels of this "fiddling" later, you'll end up in a
spot where neither side can reconcile with the other, because each side has
legitimate reasons to feel slighted and no one can look past that "hatred".
Just look at the mess that currently exists in the Middle East after people
"fiddled", and then "fiddled" again to try to fix it.

~~~
pessimizer
> So to fix systemic inequalities for some groups, we're going to increase
> systemic inequality for certain other groups?

No; in order to accurately identify people through appearances, we'll try to
sample evenly throughout the range of appearances and features instead of
sampling based on proportions within the population of the people setting up
the system.

You can't seriously be making the case that white people are such an
overwhelming portion of the world's population that not concentrating on them
in your training data constitutes racial discrimination. Not just racial
discrimination, even, but an attack that will inspire a "legitimate"
counter-retaliation.

~~~
zo1
No, that's not what I'm saying. If you look at the quote I mentioned, the
person was suggesting we "socially engineer" data sets to prevent "existing
systemic inequalities from being encoded in AI systems".

This has wider implications and tacit meanings that I think I picked up on,
and it definitely implies doing more than just having an equal sampling of
each racial group. How that would be enacted, I don't know. The point is not
to swing the pendulum the other way in order to "fix things", as that will
cause problems down the line, but rather to "fix things" in the most
non-intrusive and fair way.

------
formercoder
Darker skin shows less contrast between highlights and shadows, so isn't
identifying those faces a more difficult problem from a technical
perspective?

~~~
shadowgovt
It seriously depends on the technology being used. There isn't any evidence
from human psychology that dark-skinned people have harder-to-recognize faces,
so we're probably just using the wrong approach in the technology currently
available.
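
One concrete example of trying a different approach: normalizing contrast
before the recognizer ever sees the image. A minimal sketch using OpenCV's
adaptive histogram equalization (CLAHE); whether this actually closes the
accuracy gap in deployed systems is an open question:

    import cv2

    def normalize_contrast(image_path):
        # Adaptive histogram equalization boosts local contrast, so
        # low-contrast regions of a face retain usable detail.
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray)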

~~~
ummonk
Cameras don't have the same dynamic range as human eyes.

~~~
notauser
The very best HDR cameras have roughly the same dynamic range as the human
eye.

You can't display all of this range to people because of the limitations of
display technology, but you can feed the full range to a facial recognition
engine.

Security budgets might not stretch to top-end HDR equipment but the price
keeps on coming down. The performance of a modern flagship phone is remarkable
compared to a few years ago - and fixed surveillance cameras can have much
bigger glass and sensors, making it cheaper to get super-human performance.
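
For a sense of scale, sensor dynamic-range specs convert between decibels and
photographic stops as in the sketch below (the example figures are
illustrative, not any specific sensor's):

    import math

    def db_to_stops(db):
        # One stop is a doubling of signal: 20 * log10(2) ~= 6.02 dB.
        return db / (20 * math.log10(2))

    print(db_to_stops(72))   # a typical consumer sensor spec: ~12 stops
    print(db_to_stops(120))  # a high-end HDR sensor spec: ~20 stops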

One new but related issue is that body-worn cameras can capture more low-light
detail than the human eye. Police unions have argued against deploying these
sensors because they want the evidential record to show what the officer
could see - not what a cat could see.

~~~
randomcarbloke
Most digital cameras have a dynamic range well in excess of the human eye's.

------
daenz
Here's the study, which curiously isn't linked to by WP
[https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf](https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf)

------
surround
The technology will inevitably improve. Real issues will arise when facial
recognition becomes perfectly accurate (without false positives, racial bias,
etc.). Governments will be able to track citizens _anywhere_.

~~~
crooked-v
On the other hand, some governments may find it extremely convenient to use
'accidentally' biased technology to harass and persecute members of disliked
minorities.

~~~
steveeq1
Do you honestly believe this happens in any widespread way?

~~~
crooked-v
Right now? No. Could it? Sure. Minorities being 'accidentally' targeted for
harassment in 'coincidental' yet statistically reliable ways is something many
governments have a long history of doing.

------
ydb
Doesn't this also prove, by proxy, that this kind of bias can potentially
(if not certainly) apply to other AI and associated fields?

Forgive my ignorance, but this seems to lend credence to the popular idpol
claim that the white guys programming AI are ignorant of the inherent bias
their models might have.

~~~
zadokshi
You are assuming that AIs are only programmed by white men, and you are
assuming that it hasn't occurred to anyone who is an expert in the field of
facial recognition that their algorithms might be used on people other than
white people.

These are two bold assumptions. Would you really have us believe that this is
the case? I think you will find that some important information has been left
out. The article suggests that Asian-created algorithms are better at
recognising Asians. It implies that they all have problems with darker skin
tones. Why does the article not explore the reasons why? Perhaps they want to
say that algorithms can be racist, rather than the truth of the matter, which
is that there are certain technical issues that are difficult to overcome
with darker skin tones.

~~~
dahfizz
An article about the technicalities of cameras and facial recognition on
darker skin tones would generate far fewer clicks than "Computers are
racist!"

~~~
smt88
I think that "less accuracy for a certain race" \+ "facial recognition used
for finding criminals" is where the word bias (notably "bias" and not
"racism") comes from.

The first sentence of the article backs up its usage of the word bias instead
of inaccuracy:

> "...casting new doubts on a rapidly expanding investigative technique widely
> used by _law enforcement_ across the United States." _emphasis mine_

------
CryptoPunk
Any heuristic will have varying effects with respect to different groups of
individuals, regardless of what property individuals are being grouped by.

It's absolutely irrelevant whether the heuristic is unfair to individuals of a
particular race, or unfair to individuals of a particular nose length.
Prioritizing the former over the latter is a totally arbitrary value system
that deprioritizes the most important metric for maximizing fairness, which is
the overall accuracy rate.

Ultimately any inaccuracy of the heuristic is an instance of the heuristic
treating someone unfairly. The objective should be to minimize the inaccuracy
rate overall, not the inaccuracy rate in relation to politically prioritized
groupings like race.

------
sputknick
it looks to me like they buried the lede in the last paragraph.

'relationship “between an algorithm’s performance and the data used to train
it,”'

To test this, they should look at the relationship between the number of
pictures for a given race and the accuracy in recognizing members of that
race. The systems probably do best at identifying faces of European descent
because they were developed in a country where people of European descent are
the largest ethnic group. I'd bet dollars to donuts that if you increase the
number of training faces for each race, this disparity will largely
disappear.
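
That bet is straightforward to check once per-group numbers are published. A
sketch of the test (the counts and error rates below are invented examples,
not NIST's figures):

    import numpy as np

    # Made-up per-group training-set sizes and measured error rates.
    train_counts = np.array([500_000, 120_000, 60_000, 30_000])
    error_rates = np.array([0.002, 0.006, 0.011, 0.019])

    # If the data hypothesis holds, error should fall as log(size) grows.
    r = np.corrcoef(np.log(train_counts), error_rates)[0, 1]
    print(f"correlation(log training size, error rate) = {r:.2f}")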

------
adriantam
In China you will have heard of the company SenseTime, which specializes in
facial recognition. I bet that if you ran the same study on that system, you
would see the same racial bias against white people! The whole problem is
data: try training on men's faces and applying the model to women's faces. If
you want to solve this problem, gather more training data on those
"minorities" -- even if that means going to a foreign country to collect it.

------
flurbix
"Algorithms developed in Asian countries had smaller differences in error
rates between white and Asian faces, suggesting a relationship “between an
algorithm’s performance and the data used to train it,”

Quick, somebody offer the guy who realized this a job.

------
wojcikstefan
Surprise surprise, in a field dominated by white men, the technology is deemed
production-ready when it’s accurate _for white men_.

Not blaming this group or anyone in particular – we all just need to be
cognizant of the fact that we have our blind spots (racial, gender, and
otherwise).

~~~
commandlinefan
> a field dominated by white men

Actually, as far as I can tell, it's dominated by Asians - specifically
Indians.

~~~
wojcikstefan
Of course, accurate data is hard to come by, but this report from Reveal
(The Center for Investigative Reporting) suggests otherwise, especially at
executive levels:

[https://www.revealnews.org/article/heres-the-clearest-pictur...](https://www.revealnews.org/article/heres-the-clearest-picture-of-silicon-valleys-diversity-yet/)

------
ojagodzinski
paywall. $30 for two paragraphs?

------
throwawaysea
The article casts doubt on expanded use of facial recognition, but that's
not the right reaction IMO. Facial recognition systems are used as a
first-pass filter; that is, they reduce a large space of possible matches to
a smaller one. That is significantly better than expecting beat cops to match
faces on the sidewalk as they look around, for example. No one, to my
knowledge, is relying on facial recognition systems as absolute matches.
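
For illustration, a first-pass filter in this sense might look like the
following sketch (a generic embedding lookup; the function, names, and
dimensions are hypothetical, not how any particular deployment works):

    import numpy as np

    def top_k_candidates(probe, gallery, k=10):
        # probe: (d,) embedding of the captured face.
        # gallery: (n, d) embeddings of known persons of interest.
        g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        p = probe / np.linalg.norm(probe)
        scores = g @ p                        # cosine similarity to everyone
        return np.argsort(scores)[-k:][::-1]  # shortlist for human review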

The ACLU rep who is quoted is IMO relying on hyperbole to make his point
(“One false match can lead to missed flights, lengthy interrogations, tense
police encounters, false arrests, or worse”). If the match is close enough
that a human would confirm it as well, and conduct an “interrogation”, then
presumably facial recognition technology did not incrementally add to the
problem of mismatching since a human would also make the same false match.

So if it makes policing more efficient and enables more criminals to be
nabbed, and it is used as a first-pass filter (so the false positive rate
doesn't matter), I am all for it.

~~~
nyolfen
> If the match is close enough that a human would confirm it as well, and
> conduct an “interrogation”, then presumably facial recognition technology
> did not incrementally add to the problem of mismatching since a human would
> also make the same false match.

it seems likely to me that cops might use it as justification for a fishing
expedition, or TSA agents may just go along with it as a cover-your-ass
measure

~~~
SlowRobotAhead
“Sir, do I smell marijuana in your vehicle?”

“Sir, this dog detected something on your person”

“Sir, the scanner detected an object under your shirt, please step over
here”

“Sir, you’ve been flagged by this facial recognition software”

... agreed. I think it's better to head this off before that last one can be
abused.

