
Facial Recognition Leads To False Arrest Of Black Man In Detroit - vermontdevil
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
======
ibudiallo
Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never
> asked him if he had an alibi. They never asked if he had a red Cardinals
> hat. They never asked him where he was that day," said lawyer Phil Mayor
> with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something
wrong. They asked me to leave. If they had just checked his alibi, he would
have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its
> answers to make decisions.

> When the facial recognition software combines your facial expression and
> your name, while you are walking under the bridge late at night, in an
> unfamiliar neighborhood, and you are black; your terrorist score is at 52%.
> A police car is dispatched.

Most of us here can be excited about Facial Recognition technology but still
know that it's not something to be deployed in the field. It's by no means
ready. We might even consider the ethics before building it, even as a toy.

But that's not how it is being sold to law enforcement or other entities. It's
_Reduce crime in your cities. Catch criminals in ways never thought possible.
Catch terrorists before they blow up anything._ It is sold as an ultimate
decision maker.

[1]:[https://news.ycombinator.com/item?id=21339530](https://news.ycombinator.com/item?id=21339530)

~~~
zamalek
52% is little better than a coin flip. If you have a million individuals in
your city, your confidence should be in the ballpark of 99.9999% (1 individual
in 1 million). That has really been my concern with this: the software will
report any facial match above 75% confidence. Beyond that being an appallingly
low bar, no cop will pay attention to the percentage; they will immediately
arrest, or kill, the individual.
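
To make that concrete, a rough back-of-the-envelope in Python, with the big
simplifying assumption that "X confidence" can be read as a per-comparison
false match rate of (1 - X):

```python
# Back-of-the-envelope: expected false matches when screening a whole
# city, treating (1 - threshold) as a per-comparison false match rate.
# Real systems report similarity scores, not calibrated probabilities,
# so this is only an order-of-magnitude illustration.
population = 1_000_000

for threshold in (0.52, 0.75, 0.999999):
    expected_false_matches = population * (1 - threshold)
    print(f"threshold {threshold}: ~{expected_false_matches:,.0f} false matches")
```

At a 75% threshold that is on the order of hundreds of thousands of potential
false matches; only at the 1-in-a-million level does it drop to roughly one.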

Software can kill. This software can kill 50% of black people.

~~~
dtwest
Software can kill if we put blind trust in it and give it full control over
the situation. But we shouldn't do that.

Even if it was correct 99% of the time, we need to recognize that software can
make mistakes. It is a tool, and people need to be responsible enough to use
it correctly. I think I agree with your general idea here, but to put all of
the blame on software strikes me as an incomplete assessment. Technically the
software isn't killing anyone, irresponsible users of it are.

~~~
danans
> Technically the software isn't killing anyone, irresponsible users of it
> are.

It's beyond irresponsibility - it's actively malevolent. There unfortunately
are police officers, as demonstrated by recent high profile killings by
police, who will use the thinnest of pretexts, like suspicion of paying with
counterfeit bills, to justify the use of brutal and lethal force.

If such people are empowered by a facial recognition match, what's to stop
them from similarly using that as a pretext for applying disproportionate
brutality?

Even worse, an arrest triggered by a false positive match may be more likely
to escalate to violence, because the person being apprehended would be
rightfully upset at being targeted and could appear to be resisting arrest.

~~~
dtwest
My point was that this technology should not be used as evidence, and should
not be grounds to take any forceful action against someone. If a cop abuses
this, it is the cop's fault and we should hold them accountable. If the cop
acted ignorantly because they were lied to by marketers, their boss, or a
software company, those parties should be held accountable as well.

If your strategy is to get rid of all pretexts for police action, I don't
think that is the right one. Instead we need to set a high standard of conduct
and make sure it is upheld. If you don't understand a tool, don't use it. If
you do something horrible while using a tool you don't understand, it is
negligent/irresponsible/maybe even malevolent, because it was your
responsibility to understand it before using it.

A weatherman saying there is a 90% chance of rain is not evidence that it
rained. And I understand the fear that a prediction can be abused, and we need
to make sure it isn't abused. But abolishing the weatherman isn't the way to
do it.

~~~
danans
> If your strategy is to get rid of all pretexts for police action, I don't
> think that is the right one.

Not at all.

> Instead we need to set a high standard of conduct and make sure it is upheld

Yes, but we should be real about what this means. The institution of law
enforcement is rotten, which is why it protects bad actors to such a degree.
It needs to be cleaved from its racist history and be rebuilt nearly from the
ground up. Better training in interpreting results from an ML model won't be
enough by a long shot.

------
danso
This story is really alarming because, as described, the police ran a face
recognition tool on a frame of grainy security footage and got a
positive hit. Does this tool give any indication of a confidence value? Does
it return a list (sorted by confidence) of possible suspects, or any other
kind of feedback that would indicate even to a layperson how much uncertainty
there is?

The issue of face recognition algorithms performing worse on dark faces is a
major problem. But the other side of it is: would police be more hesitant to
act on such fuzzy evidence if the top match appeared to be a middle-class
Caucasian (i.e. someone who is more likely to take legal recourse)?

~~~
strgcmc
I think the NYT article has a little more detail:
https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

Essentially, an employee of the facial recognition provider forwarded an
"investigative lead" for the match they generated (which does have a score
associated with it on the provider's side, but it's not clear if the score is
clearly communicated to detectives as well), and the detectives then put the
photo of this man into a "6 pack" photo line-up, from which a store employee
then identified that man as being the suspect.

Everyone involved will probably point fingers at each other: the provider, for
example, put a large heading on their communication saying "this is not
probable cause for an arrest, this is only an investigative lead, etc."; the
detectives will say "well, we got a hit from a line-up" and blame the witness;
and the witness would probably say "well, the detectives showed me a line-up
and he seemed like the right guy" (or, as is often the case with line-ups, the
detectives may have exerted a huge amount of bias/influence over the witness).

EDIT: Just to be clear, none of this is to say that the process worked well or
that I condone this. I think the data, the technology, the processes, and the
level of understanding on the side of the police are all insufficient, and I
do not support how this played out, but I think it is easy enough to provide
at least some pseudo-justification at each step along the way.

~~~
treis
I'm becoming increasingly frustrated with the difficulty of accessing primary
source material. Why don't any of these outlets post the surveillance video
and let us decide for ourselves how much of a resemblance there is?

~~~
BEEdwards
Even if the guy was an exact facial match, that doesn't justify the complete
lack of basic police work to establish it was him.

~~~
czbond
Absolutely agree - and the consequences to a private citizen for the lack of
that basic police work can be negative and long-lasting.

------
mnw21cam
This is a classic example of the false positive rate fallacy.

Let's say that there are a million people, and the police have photos of
100,000 of them. A crime is committed, and they pull the surveillance footage,
and match against their database. They have a funky image matching system that
has a false positive rate of 1 in 100,000 people, which is _way_ more accurate
than I think facial recognition systems are right now, but let's just roll
with it. Of course, on average, this system will produce one positive hit per
search. So, the police roll up to that person's home and arrest them.

Then, in court, they get to argue that their system has a 1 in 100,000 false
positive rate, so there is a chance of 1 in 100,000 that this person is
innocent.

Wrong!

There are ten people in the population of 1 million that the software would
comfortably produce a positive hit for. They can't all be the culprit. The
chance isn't 1 in 100,000 that the person is innocent - it is in fact at least
9 out of 10 that they are innocent. This person just happens to be the one
person out of the ten that would match that had the bad luck to be stored in
the police database. Nothing more.
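
The arithmetic, as a quick sanity check in Python (using the made-up numbers
above):

```python
# The made-up numbers from above: 1M people in the city, 100k of them
# in the police database, and a matcher with a 1-in-100,000 false
# positive rate per comparison.
population = 1_000_000
database_size = 100_000
false_positive_rate = 1 / 100_000

# Expected false hits per search of the database:
print(database_size * false_positive_rate)  # 1.0 -- one hit per search

# Expected people in the whole city the matcher would hit:
city_wide_matches = population * false_positive_rate  # 10.0

# Even granting that the culprit is among those ten look-alikes, the
# chance that a given database hit is the culprit is at most:
print(1 / city_wide_matches)  # 0.1 -- i.e. at least 9-in-10 innocent
```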

~~~
Buttons840
There's a good book called "The Drunkard's Walk" that describes a woman who
was jailed after two of her children died from SIDS. They argued that the odds
of this happening were 1 in a million (or something like that), so the woman
was probably a baby killer. The prosecution had statisticians argue this. The
woman was found guilty.

She later won on appeal in part because the defense showed that the testimony
and argument of the original statisticians were wrong.

This stuff is so easy to get wrong. A little knowledge of statistics can be
dangerous.

~~~
Polylactic_acid
And even if the original stats were right, a 1-in-a-million event happens to
hundreds of people per day in the US.

~~~
PudgePacket
Sure... but the case being discussed has a maximum frequency of 1 in a million
every 18 months or so (two full-term pregnancies), further reduced by the
requirement of being a woman of reproductive age, fertile, etc.

This kind of "one in a million" event does not happen frequently.

------
ghostpepper
He wasn't arrested until the shop owner had also "identified" him. The cops
used a single frame of grainy video to pull his driver's license photo, and
then put that photo in a lineup and showed the store clerk.

The store clerk (who hadn't witnessed the crime and was going off the same
frame of video fed into the facial recognition software) said the driver's
license photo was a match.

There are several problems with the conduct of the police in this story but
IMHO the use of facial recognition is not the most egregious.

~~~
malwarebytess
The story is the same one that all anti-surveillance, anti-police-
militarization, pro-privacy, and anti-authoritarian people foretell. Good
technology will be used to enable, amplify, and justify civil rights abuses by
authority figures, from your local beat cop to a faceless corporation, a
milquetoast public servant, or the president of the United States.

Our institutions and systems (and maybe humans in general) are not robust
enough to cleanly handle these powers, and we are making the same mistake over
and over and over again.

~~~
coffeemaniac
Correct, and this has been the story with every piece of technology or tool
we've ever given to police. We give them body cameras and they're turned off
or used to create FPS-style snuff films of gunned down citizens. Give them
rubber bullets and they're aimed at protesters' eyeballs. Give them tasers and
they're used as an excuse to shoot someone when the suspect "resists." Give
them flashbangs and they'll throw them into an infant's crib. Give them mace
and it's used out of car windows to punish journalists for standing on the
sidewalks.

The mistake is to treat any police department as a good-faith participant in
the goal of reducing police violence. Any tool you give them will be used to
brutalize. The only solution is to give them less.

------
js2
> "I picked it up and held it to my face and told him, 'I hope you don't think
> all Black people look alike,' " Williams said.

I'm white. I grew up around a sea of white faces. Often when watching a movie
filled with a cast of non-white faces, I will have trouble distinguishing one
actor from another, especially if they are dressed similarly. This sometimes
happens in movies with faces similar to the kinds I grew up surrounded by, but
less so.

So unfortunately, yes, I probably do have more trouble distinguishing one
black face from another vs one white face from another.

This is known as the cross-race effect and it's only something I became aware
of in the last 5-10 years.

Add to that the fallibility of human memory, and I can't believe we still even
use line ups. Are there any studies about how often line ups identify the
wrong person?

https://en.wikipedia.org/wiki/Cross-race_effect

~~~
SauciestGNU
I lived in South Africa for a while and heard many times, with various degrees
of irony, "you white people all look the same" from black South Africans. So
yeah it's definitely a cross-racial recognition problem, and it's probably
also a problem with distinguishing between members of visible minorities using
traits beyond the most noticeable othering characteristic.

------
Anthony-G
There is just so much wrong with this story. For starters:

The shoplifting incident occurred in October 2018, but it wasn’t until March
2019 that the police uploaded the security camera images to the state image-
recognition system, and they then waited until the following January to arrest
Williams. Unless there was something special about that date in October, there
is no way for anyone to remember what they might have been doing on a
particular day 15 months previously. Though, as it turns out, the NPR report
states that the police did not even try to ascertain whether or not he had an
alibi.

Also, after 15 months, there is virtually no chance that any eye-witness (such
as the security guard who picked Williams out of a line-up) would be able to
recall what the suspect looked like with any degree of certainty or accuracy.

This WUSF article [1] includes a photo of the actual “Investigative Lead
Report”, and the original image is far too dark for anyone (human or
algorithm) to recognise the person. It’s possible that the original is better
quality and that more detail could be discerned by applying image-processing
filters – but it still looks like a very noisy source.

That same “Investigative Lead Report” also clearly states that “This document
is not a positive identification … and is _not_ probable cause to arrest.
Further investigation is needed to develop probable cause of arrest”.

The New York Times article [2] states that this facial recognition technology
that the Michigan tax-payer has paid millions of dollars for is known to be
biased and that the vendors do “not formally measure the systems’ accuracy or
bias”.

Finally, the original NPR article states that

> "Most of the time, people who are arrested using face recognition are not
> told face recognition was used to arrest them," said Jameson Spivack

[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michigan/

[2] https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

~~~
cwkoss
Really seems like most police departments in our country are incompetent,
negligent, ineffective and systemically racist.

Many of these cops are earning $200k plus annually! Our law enforcement system
is ridiculous and needs an overhaul.

~~~
octodog
It gets even crazier to think about when you realise that cities like Detroit
are overwhelmingly Black. The police there are just not providing good value
for the people who live there.

Brookings had a great post about this the other day:
[https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-
add...](https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-add-value-to-
black-communities-we-must-defund-the-police-and-prison-systems/)

------
jandrewrogers
It isn't just facial recognition, license plate readers can have the same
indefensibly Kafka-esque outcomes where no one is held accountable for
verifying computer-generated "evidence". Systems like in the article make it
so cheap for the government to make a mistake, since there are few
consequences, that they simply accept mistakes as a cost of doing business.

Someone I know received vehicular fines from San Francisco on an almost weekly
basis solely from license plate reader hits. The documentary evidence sent
with the fines clearly showed her car had been misidentified but no one ever
bothered to check. She was forced to fight each and every fine because they
come with a presumption of guilt, but as soon as she cleared one they would
send her a new one. The experience became extremely upsetting for her; the
entire bureaucracy simply didn't care.

It took threats of legal action against the city for them to set a flag that
apparently causes violations attributed to her car to be manually reviewed.
The city itself claimed the system was only 80-90% accurate, but they didn't
believe that to be a problem.

~~~
vmception
That suddenly reminded me why I feel so privileged to not own a car, a
distinct contrast from when I was a teenager and felt it was a rite of
passage!

I had forgotten about the routine of fighting traffic tickets multiple times a
year as a fact of life. Let alone fender benders. I had only been reveling in
the lack of a frustrating commute.

Last decade I did get a car for 3 months, and the insurance company was so
thrilled that I was "such a good driver" because of my "spotless record" for
many years. Little do they know I just don't drive and perhaps now have less
experience than others. Tangentially though, their risk matrix might actually
be correct: if I can afford to live in dense, desirable areas, then maybe it
is less likely that I would be going fast and getting into circumstances that
draw larger amounts from their insurance pool.

They probably thought "one of the largest companies in the world probably
chauffeurs him down the highway in a bus anyway"

~~~
nitwit005
This is a big reason I wish it was easier to not have a car in the US. There's
always the potential to get things like parking tickets, and you have to deal
with license, insurance, parking permit, etc.

The volume of tickets issued is quite staggering, and each one is a huge
annoyance for someone.

------
danso
Since the NPR piece is a 3-minute listen without a transcript, here's the
ACLU's text/image article:
https://www.aclu.org/news/privacy-technology/wrongfully-arrested-because-face-recognition-cant-tell-black-people-apart/

And here's a 1st-person account from the arrested man:
https://www.washingtonpost.com/opinions/2020/06/24/i-was-wrongfully-arrested-because-facial-recognition-why-are-police-allowed-use-this-technology/

~~~
TedDoesntTalk
As soon as I saw it was audio only, I left the site. Why do sites do this? How
many people actually stick to the page and listen to that?

~~~
danso
NPR does transcribe (many, most?) its audio stories, but usually there's a
delay of a day or so – the published timestamp for this story is 5:06AM (ET)
today.

edit: looks like there's a text version of the article. I'm assuming this is a
CMS issue: there's an audio story and a "print story", but the former hadn't
been linked to the latter:
[https://news.ycombinator.com/item?id=23628790](https://news.ycombinator.com/item?id=23628790)

~~~
dhosek
They transcribe all their stories. Back before the web was widespread, you
could call or write NPR and have them mail a transcript to you.

------
vermontdevil
From ACLU article:

 _Third, Robert’s arrest demonstrates why claims that face recognition isn’t
dangerous are far-removed from reality. Law enforcement has claimed that face
recognition technology is only used as an investigative lead and not as the
sole basis for arrest. But once the technology falsely identified Robert,
there was no real investigation._

I fear this is going to be the norm among police investigations.

------
vmception
> Federal studies have shown that facial-recognition systems misidentify Asian
> and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to
production if the engineering teams, product team, executive team and board
members represented the population. But even enough representation that there
is a countering voice would be better than nothing.

It would have just been: "this edge case is not an edge case at all, axe it."

Accurately addressing a market is the point of the corporation more than an
illusion of meritocracy amongst the employees.

~~~
JangoSteve
This is so incredibly common, it's embarrassing. I was on an expert panel
about "AI and Machine Learning in Healthcare and Life Sciences" back in
January, and I made it a point throughout my discussions to keep emphasizing
the amount of bias inherent in our current systems, which ends up getting
amplified and codified in machine learning systems. Worse yet, it ends up
justifying the bias based on the false pretense that the systems built are
objective and the data doesn't lie.

Afterward, a couple people asked me to put together a list of the examples I
cited in my talk. I'll be adding this to my list of examples:

* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. [https://www.nature.com/articles/d41586-019-03228-6](https://www.nature.com/articles/d41586-019-03228-6)

* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males. [https://www.nejm.org/doi/full/10.1056/NEJMsa1507092](https://www.nejm.org/doi/full/10.1056/NEJMsa1507092)

* The dangers of ML in diagnosing Melanoma exacerbating healthcare disparities for darker skinned people. [https://jamanetwork.com/journals/jamadermatology/article-abs...](https://jamanetwork.com/journals/jamadermatology/article-abstract/2688587)

And some other relevant, but not healthcare examples as well:

* When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English". [https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...](https://fortune.com/2019/08/16/google-jigsaw-perspective-racial-bias/)

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. [https://www.reuters.com/article/us-amazon-com-jobs-automatio...](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G)

* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first time offender than for an older white repeat felon. [https://www.propublica.org/article/machine-bias-risk-assessm...](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)

And here's some good news though:

* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). [https://www.healthcareitnews.com/news/flagler-hospital-uses-...](https://www.healthcareitnews.com/news/flagler-hospital-uses-ai-create-clinical-pathways-enhance-care-and-slash-costs)

~~~
mtgp1000
>When AI criminal risk prediction software used by judges in deciding the
severity of punishment for those convicted predicts a higher chance of future
offence for a young, black first time offender than for an older white repeat
felon.

>When Amazon's AI recruiting tool inadvertently filtered out resumes from
women

>When Google's hate speech detecting AI inadvertently censored anyone who used
vernacular referred to in this article as being "African American English"

There's simply no indication that these aren't statistically valid priors. And
we have mountains of scientific evidence to the contrary, but if I dared to post
anything (cited, published literature) I'd be banned. This is all based on the
unfounded conflation between equality of outcome and equality of opportunity,
and the erasure of evidence of genes and culture playing a role in behavior
and life outcomes.

This is bad science.

~~~
JangoSteve
> There's simply no indication that these aren't statistically valid priors.
> And we have mountains of scientific evidence to the contrary, but if I
> dared to post anything (cited, published literature) I'd be banned.

I'd consider reading the sources I posted in my comment before responding with
ill-conceived notions. Literally every single example I posted linked to the
peer-reviewed scientific evidence (cited, published literature) indicating the
points I summarized.

The only link I posted without peer-reviewed literature was the last one with
the positive outcome, and that's the one I commented had suspect analysis.

~~~
mtgp1000
Let's just consider an example. To avoid sending travelers through unsafe
areas, where do you draw the line in the following list?

1. Google's routing algorithm is conditioned on demographics

2. Google's routing algorithm is conditioned on income/wealth

3. Google's routing algorithm is conditioned on crime density

4. Google's routing algorithm cannot condition on anything that would
disproportionately route users away from minority neighborhoods

I think the rational choice, to avoid forcing other people to take risks that
they may object to, is somewhere between 2 and 3. But the current social
zeitgeist seems only to allow for option four, since an optimally sampled
dataset will have very strong correlations between 1-3, to the point that in
most parts of the US they would all result in the same routing bias.

~~~
JangoSteve
This is exactly why I suggested actually reading the sources I posted before
responding. The Google example has nothing to do with routing travelers. It
was an algorithm designed to detect sentiment in online comments and to auto-
delete any comments that were classified as hate-speech. The problem was that
it mis-classified entire dialects of English (meaning it completely failed at
determining sentiment for certain people), deleting all comments from the
people of certain cultures (unfairly, disproportionately censoring a group of
people). That's the dictionary definition of bias.

~~~
mtgp1000
You're completely missing my point, and the purpose of my hypothetical. So let
me try it with your example:

>The problem was that it mis-classified entire dialects of English (meaning it
completely failed at determining sentiment for certain people), deleting all
comments from the people of certain cultures

What happens in the case that a particular culture _is_ more hateful? Do we
just disregard any data that indicates socially unacceptable bias?

What, only Nazis are capable of hate speech?

~~~
JangoSteve
> What happens in the case that a particular culture is more hateful? Do we
> just disregard any data that indicates socially unacceptable bias?

That's not what was happening. If you read the link, you'll see the problem is
that the AI/ML system was mis-classifying non-hateful speech as hateful, just
because of the dialect being used.

If it were the case that the culture _was_ more hateful, then it wouldn't have
been considered "mis-classification."

> You're completely missing my point.

I'm not missing your point; it's just not a well-reasoned or substantiated
point. Here were your points:

> There's simply no indication that these aren't statistically valid priors.

We do have every indication that this wasn't what was happening in literally
every single example I posted. You just have to read them.

> And we have mountains of scientific evidence to the contrary, but if I
> dared to post anything (cited, published literature) I'd be banned.

You say that, and yet you keep posting your point without any evidence
whatsoever. Meanwhile, every single example I posted did cite peer-reviewed,
published scientific evidence.

> This is all based on the unfounded conflation between equality of outcome
> and equality of opportunity, and the erasure of evidence of genes and
> culture playing a role in behavior and life outcomes.

Again, peer-reviewed published literature disagrees. Reading it explains why
the point that it's all unfounded conflation is incorrect.

------
gentleman11
The discussion about this tech revolves around accuracy and racism, but the
real threat is global unlimited surveillance. China is installing 200 million
facial recognition cameras right now to keep the population under control. It
might be the death of human freedom as this technology spreads.

Edit: one source says it is 400 million new cameras:
https://www.cbc.ca/passionateeye/m_features/in-xinjiang-china-surveillance-technology-is-used-to-help-the-state-control

------
seebetter
Reminds me of this-

Facial recognition technology flagged 26 California lawmakers as criminals.
(August 2019)

https://www.mercurynews.com/2019/08/14/facial-recognition-technology-flagged-26-california-lawmakers-as-criminals-this-bill-to-ban-the-tech-is-headed-to-the-senate/

------
sneak
Another reason that it's absolutely insane that the state demands to know
where you sleep at night in a free society. These clowns were able to just
show up at his house and kidnap him.

The practice of disclosing one's residence address to the state (for sale to
data brokers[1] and accessible by stalkers and the like) when these kinds of
abuses are happening is something that needs to stop. There's absolutely no
reason that an ID should be gated on the state knowing your residence. It's
none of their business. (It's not on a passport. Why is it on a driver's
license?)

[1]: https://www.newsweek.com/dmv-drivers-license-data-database-integrity-department-motor-vehicles-1458141

------
w_t_payne
Perhaps we, as technologists, are going about this the wrong way. Maybe,
instead of trying to reduce the false alarm rate to an arbitrarily low number,
we instead develop CFAR (Constant false alarm rate) systems, so that users of
the system _know_ that they will get some false alarms, and develop procedures
for responding appropriately. In that way, we could get the benefit of the
technology whilst also ensuring that the system as a whole (man and machine
together) is designed to be robust and to have appropriate checks and
balances.
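
A minimal sketch of what CFAR-style calibration could look like, assuming we
have match scores from a held-out set of known non-matching face pairs (the
impostor scores below are synthetic stand-ins):

```python
import numpy as np

# Synthetic stand-in for real impostor (known non-match) scores.
rng = np.random.default_rng(0)
impostor_scores = rng.beta(2, 5, size=100_000)

# Design decision: accept exactly this false alarm rate.
target_far = 1e-3

# Pick the threshold as the (1 - FAR) quantile of impostor scores, so
# roughly target_far of non-matching comparisons still score above it.
threshold = np.quantile(impostor_scores, 1 - target_far)
print(f"threshold for FAR={target_far}: {threshold:.3f}")
```

Operators would then know, by construction, that about 1 in 1,000 comparisons
is a false alarm, and could build their verification procedures around that
constant, known rate.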

~~~
gwd
If you follow the link, you'll see that the computer report had this message
right at the top in massive letters:

THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN _INVESTIGATIVE LEAD
ONLY_ AND IS _NOT_ PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED
TO DEVELOP PROBABLE CAUSE TO ARREST.

I mean, what else could the technologists have done?

~~~
Mangalor
I wonder if the problem here is just the way traditional policing works, not
even the technology.

~~~
noisy_boy
It is indeed the way traditional policing works. The average police officer
thinks facial recognition is the visual equivalent of Google and assumes they
can rely on its results the way they rely on search results: they either have
no idea about racially skewed false positive rates or, worse, are probably too
lazy to dig deeper.

Though the suffering of the victims of such wrong matches is real, one
consolation is that more such cases will hopefully bring about much-needed
scepticism about the results, so that some old-fashioned
validation/investigation is done.

------
hpoe
I don't think using facial recognition to help identify probable suspects is
necessarily wrong, but arresting someone based on a facial match algorithm is
definitely going too far.

Of course, for part of this mess I really blame the AI/ML hucksters, who have
sold us the idea of machines replacing rather than augmenting human decision
making.

~~~
dx87
Yeah, facial recognition can be useful in law enforcement, as long as it's
used responsibly. There was a man who shot people at a newspaper where I
lived, and when apprehended, he refused to identify himself, and apparently
their fingerprint machine wasn't working, so they used facial recognition to
identify him.

[https://en.wikipedia.org/wiki/Capital_Gazette_shooting](https://en.wikipedia.org/wiki/Capital_Gazette_shooting)

~~~
YPCrumble
> as long as it’s used responsibly

At what point can we decide that people in positions of power are not and will
not ever be responsible enough to handle this technology?

Surely as a society we shouldn’t continue to naively assume that police are
“responsible” like we’ve assumed in the past?

~~~
dx87
Agreed, I'm not saying we can currently assume they are responsible, but in
some hypothetical future where reforms have been made and they can be trusted,
I think it would be fine to use. I don't think we should use current bad
actors to decide that a technology is completely off limits in the future.

------
czbond
A few things I just don't have the stomach for as an engineer: writing
software that

- impacts someone's health

- impacts someone's finances

- impacts someone's freedoms

Call me weak, but I think about the "what ifs" a bit too much in those cases.
What if my bug keeps them from selling their stock and they lose their
savings? What if the wrong person is arrested, etc?

~~~
Barrin92
Why would anyone call you weak? That's principled, and it's the correct
attitude. It's the people who don't think about the consequences of the
products they help build that are the problem.

~~~
czbond
Thank you for that really helpful reply. I re-framed my thoughts based on it.

------
at_a_remove
I think that your prints, DNA, and so forth must be, in the interests of
fairness, utterly erased from all systems in the case of false arrest. With
some kind of enormous, ruinous financial penalty in place for the
organizations for non-compliance, as well as automatic jail times for involved
personnel. These things need teeth to happen.

------
rusty__
Any defence lawyer with more than 3 brain cells would have an absolute field
day deconstructing a case brought solely on the basis of a facial recognition
match. What happened to the idea that police need to gather a variety of
evidence confirming their suspicions before making an arrest? Even a state
prosecutor wouldn't authorize a warrant based on such flimsy methods.

~~~
ARandomerDude
True but the defendant is still financially, and in many cases professionally,
ruined.

------
jackklika
The company that developed this software is DataWorks Plus, according to the
article. Name and shame.

------
FpUser
And then in some states employers are allowed to ask whether you have ever
been arrested (never mind convicted of any crime) on an employment
application. Sure, keep putting people down. One day it might catch up with
China's social scoring policies.

------
MikusR
Is that different from somebody getting arrested based on a mistaken
eyewitness?

~~~
Enginerrrd
The difference is that eyewitness misidentification is a known problem,
whereas with ML, a large fraction of the population thinks it's infallible.
Worse, its reported confidence for an individual face may be grossly
overstated, since that confidence is based on all the data it was trained on
rather than the particular subset you may be dealing with.
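
A toy illustration of how an aggregate error rate can hide a subgroup problem,
using synthetic impostor scores (the distributions are invented for the sake
of the example):

```python
import numpy as np

# Synthetic non-match ("impostor") scores for two hypothetical groups;
# group B's distribution sits higher, i.e. its non-matches look more
# like matches to the model.
rng = np.random.default_rng(1)
scores_a = rng.beta(2, 8, size=50_000)
scores_b = rng.beta(3, 6, size=50_000)

threshold = 0.6
far_overall = np.mean(np.concatenate([scores_a, scores_b]) > threshold)
far_a = np.mean(scores_a > threshold)
far_b = np.mean(scores_b > threshold)

print(f"overall false match rate: {far_overall:.4f}")  # the headline number
print(f"group A false match rate: {far_a:.4f}")        # well below it
print(f"group B false match rate: {far_b:.4f}")        # several times higher
```

The single headline number can look fine while one group bears most of the
false matches.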

~~~
raxxorrax
A large fraction of the population and ML marketing both believe that.

I still think it's insane. We have falling crime rates and we still arm
ourselves as fast as we can. Humanity could live without face recognition and
we wouldn't even suffer any penalties. But no, people need to sell their
evidently shitty ML work.

~~~
treis
(1) We still have extreme levels of crime compared to other first world
countries even if it is in decline

(2) Your argument strikes me as somewhat similar to "I feel fine, why should I
keep taking my medicine?". It's not exactly the same, since medicine is
scientifically proven to cure disease while it's impossible to measure the
impact of police on crime. But "things are getting better, so we should change
what we're doing" is not a particularly sound logical argument.

~~~
raxxorrax
Crime rates dropped even faster in countries with more rehabilitative
approaches and long before some countries began to upgrade their police forces
because of unrelated fears. It was more about giving people a second chance in
all that.

Criminologists aren't certain about surveillance having a positive or negative
effect on crime. We have more than 40 studies with mixed results. What is
certain is that this kind of surveillance isn't responsible for the falling
crime rates described. Most data is from the UK. Currently I don't think
countries without surveillance fare worse on crime. Maybe quite the contrary.

"what we're doing" is not equivalent to increasing video surveillance or
generally increasing armament in civil spaces. It may be sound logic if you
extend the benefit of the doubt but it may also just be a false statement.

Since surveillance is actually constitutionally forbidden in many countries,
one could argue that deployment would "increase crime".

By the same sort of logic, it might just be a self-reinforcing private prison
industry with an economic interest in keeping a steady supply of criminals.
That would also be completely sound.

But all these discussions are quite dishonest, don't you think? I just don't
want your fucking camera in my face.

------
cpeterso
What is a unique use case for facial recognition that cannot be abused and has
no other alternative solution?

Even the "good" use cases like unlocking your phone have security problems
because malicious people can use photos or videos of your face and you can't
change your face like you would a breached username and password.

------
renewiltord
I've got to be honest: I'm getting the picture the police here aren't very
competent. I know I know, POSIWID and maybe they're very competently aiming at
the current outcome. But don't they just look like a bunch of idiots?

------
crazygringo
In this _particular_ case, computerized facial recognition is _not_ the
problem.

Facial recognition produces _potential_ matches. It's still up to humans to
look at footage themselves and _use their judgment_ as to whether it's
actually the same person or not, as well as to judge whether other elements
fit the suspect or not.

The problem here is 100% on the cop(s) who made that call for themselves, or
intentionally ignored obvious differences. (Of course, without us seeing the
actual images in question, it's hard to judge.)

There are plenty of dangers with facial recognition (like using it at scale,
or to track people without accountability), but this one doesn't seem to be
it.

~~~
ncallaway
> The problem here is 100% on the cop(s) who made that call for themselves

I disagree. There is plenty of blame on the cops who made that call for
themselves, true.

But there doesn't have to be a single party who is at fault. The facial
recognition software is _badly flawed_ in this dimension. It's well
established that the current technologies are racially biased. So there's at
least some fault in the developer of that technology, and the purchasing
officer at the police department, and a criminal justice system that allows it
to be used that way.

Reducing a complex problem to a single at-fault person produces an analysis
that will often let other issues continue to fester. Consider if the FAA
always stopped the analysis of air-crashes at: "the pilot made an error, so we
won't take any corrective actions other than punishing the pilot". Air travel
wouldn't be nearly as safe as it is today.

While we should hold these officers responsible for their mistake (abolish QI
so that these officers could be sued civilly for the wrongful arrest!), we
should also fix the other parts of the system that are obviously broken.

~~~
dfxm12
_The facial recognition software is badly flawed in this dimension. It's well
established that the current technologies are racially biased._

Who decided to use this software for this purpose, _despite these bad flaws
and well established bias_? The buck stops with the cops.

~~~
moron4hire
There's also the company that built the software and marketed it to law
enforcement.

Even disregarding the moral hazard of selecting an appropriate training set,
the problem is that ML-based techniques are inherently biased. That's the
entire point, to boil down a corpus of data into a smaller model that can
generate guesses at results. ML is not useful without the bias.

The problem is that bias is OK in some contexts (guessing at letters that a
user has drawn on a digitizer) and absolutely wrong in others (needlessly
subjecting an innocent person to the judicial system and all of its current
flaws). The difference lies in four areas: how easily one can correct for
false positives/negatives, how easy it is to recognize false output, how the
data and results relate to objective reality, and how destructive bad results
may be.

When Amazon product suggestions start dumping weird products on me because
they think viewing pages is the same as showing interest in the product (vs.
guffawing at weird product listings that a Twitter personality has found), the
damage is limited. It's just a suggestion that I'm free to ignore. In
particularly egregious scenarios, I've had to explain why weird NSFW results
were showing up on my screen, but thankfully the person I'm married to trusts
me.

When a voice dictation system gets the wrong words for what I am saying,
fixing the problem is not hard. I can try again, or I can restart with a
different modality.

In both of the previous cases, the ease of detection of false positives is
simplified by the fact that I know what the end result _should_ be. These
technologies are assistive, not generative. We don't use speech recognition
technology to determine _what_ we are attempting to say, we use it to speed up
getting to a predetermined outcome.

The product suggestion and dictation issues are annoying when encountering
them because they are tied to an objective reality: finding products I want to
buy, communicating with another person. They're only "annoying" because the
mitigation is simple. Alternatively, you can just dispense with the real world
entirely. When a NN "dreams" up pictures of dogs melting into a landscape,
that is completely disconnected from any real thing. You can't take the
hallucinated dog pictures for anything other than generative art. The purpose
of the pictures is to look at the weird results and just say, "ah, that was
interesting".

But facial recognition and "depixelization" fail on the first three counts,
because they are attempts to reconnect the ML-generated results to a thing
that exists in the real world, we don't know what the end results should be,
and we (as potential users of the system) don't have any means of adjusting
the output or escaping to a different system entirely. And when combined with
the purpose of law enforcement, it fails on the fourth aspect, in that the
modern judicial system in America is singularly optimized for prosecuting
people, not determining innocence or guilt, but getting plea bargain deals out
of people. Only 10% of criminal cases go to trial. 99% of civil suits end in a
settlement rather than a judgement (with 90% of the cases settling before ever
going to trial). Even in just this case of the original article, this person
and his family have been traumatized, and he has lost at least a full day of
productivity, if not much, much more from the associated fallout.

When a company builds and markets a product that harms people, they should be
held liable. Due to the very nature of how machine vision and learning
techniques work, they'll never be able to address these problems. And the
combination of failure in all four categories makes them particularly
destructive.

~~~
dfxm12
_When a company builds and markets a product that harms people, they should be
held liable._

They should be; however, a company building and marketing a harmful product is
a separate issue from cops using specious evidence to arrest a man.

Cops (QI aside), are responsible for the actions they take. They shouldn't be
able to hide behind "the tools we use are bad", especially when (as a parent
poster said), the tool is known to be bad in the first place and the cops
still used it.

~~~
ncallaway
> Cops (QI aside), are responsible for the actions they take. They shouldn't
> be able to hide behind "the tools we use are bad", especially when (as a
> parent poster said), the tool is known to be bad in the first place and the
> cops still used it.

But literally no one in this thread is arguing to _not_ hold them responsible.

Everyone agrees that _yes, the cops and PD are responsible_. It's just that
some people are arguing that there are other parties that _also_ bear
responsibility.

No one thinks the cops should be able to hide behind the fact that the tool is
bad. I think these cops should be fired and sued for wrongful arrest. I think
QI should be abolished so the wronged party can go after the house of the
officer who made the arrest in civil court. I think the department should be
on the hook for a large settlement payment.

But I _also_ think the criminal justice system should enjoin future
departments from using this known bad technology. I think we should _also_ be
mad at the technology vendors that created this bad tool.

------
TedDoesntTalk
> Even if this technology does become accurate (at the expense of people like
> me), I don’t want my daughters’ faces to be part of some government
> database.

Stop using Amazon Ring and similar doorbell products.

------
aritraghosh007
The pandemic has accelerated the use of no-touch surfaces, especially at
places like airports, which are now more inclined to use face recognition
security kiosks. What's not clear is the vetting process for these (albeit
controversial) technologies. What if Google thinks person A is an offender but
Amazon thinks otherwise? Can they be used as counter-evidence? What is the
gold standard for surveillance?

------
zro
NPR article about the same, if you prefer to read instead of listen:
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig

I'll be watching this case with great interest

------
ChrisMarshallNY
Sadly, there's plenty more where that came from.

------
loup-vaillant
And now the poor guy has an arrest record. That wouldn't be a problem in
reasonable jurisdictions, where it's nobody's business whether you've been
arrested or not, as long as you've not been _convicted_.

But in the US, I've heard that it can make it harder to get a job.

I believe I'm starting to get a feel for how the school-to-prison pipeline may
work.

------
linuxftw
Wait until you hear about how garbage and unscientific fingerprint
identification is.

~~~
_underfl0w_
Speaking of pseudoscience, didn't most police forces just start phasing out
polygraphs in the last decade?

~~~
linuxftw
Unlikely unless they were compelled by law or found something else to replace
it, and I think it's the latter. Something about machine learning and such.

------
aussieguy1234
In a lot of police departments around the world, the photo database used is
the driver's license database.

There is clothing available that can confuse facial recognition systems. What
would happen if, next time you go for your driver's license photo, you wore a
T-shirt designed to confuse facial recognition, for example like this one?
https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothing-by-Naamiko/24714049.1YYVU?u

------
MertsA
I would love to see police try to take a crack at this from the other side of
things. Instead of matching against a database, set up a StyleGAN, mask the
original photo or video to isolate just the face, and have the discriminator
try to match the face. At the end you can see the generated face with a decent
pose and, more importantly, look through the range of generated faces that
result in a reasonable match, which gives a somewhat decent idea of how
confident you should be about any identification.
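
A very rough sketch of the inversion idea in PyTorch. The `Generator` below is
a tiny stand-in so the snippet runs; a serious attempt would use a pretrained
StyleGAN and a perceptual loss rather than plain pixel MSE:

```python
import torch

# Stand-in generator: latent vector -> flattened grayscale image.
class Generator(torch.nn.Module):
    def __init__(self, latent_dim=64, img_dim=32 * 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, img_dim), torch.nn.Sigmoid())

    def forward(self, z):
        return self.net(z)

gen = Generator()
probe = torch.rand(32 * 32)                  # the grainy frame, flattened
mask = (torch.rand(32 * 32) > 0.5).float()   # 1 where the face is visible

# Optimize a batch of latent codes independently; the *spread* of the
# resulting faces is the interesting output, not any single best match.
z = torch.randn(16, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((gen(z) - probe) * mask).pow(2).mean()
    loss.backward()
    opt.step()

# Each row of gen(z) is now a face consistent with the masked probe; if
# the 16 faces look wildly different, the probe is weak evidence.
```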

------
hkai
While this case is bad enough, mistakes like this are not the biggest concern.
Mistakenly arrested people are (hopefully) eventually released, even though
they have to go through quite a bit of trouble.

The consequence that is much worse would be mass incarceration of certain
groups, because the AI is too good at catching people who actually did
something.

This second wave of mass incarceration will lead to even more single parent
families and poor households, and will reinforce the current situation.

------
whatshisface
How does computerized facial recognition compare in terms of racial bias and
accuracy to human-brain facial recognition? Police are not exactly perfect in
either regard.

~~~
suizi
Face recognition widens the scope of how many people can be harassed.

~~~
_underfl0w_
While also enabling finger-pointing: e.g., the police can say "We aren't
racist or at fault; the system is just faulty," while the engineers behind the
facial recognition tech can say they were just doing their job and the police
should've heeded their disclaimers, etc.

------
ineedasername
It's supposed to be a cornerstone of "innocent until proven guilty" legal
systems that it is better to have 10 guilty people go free than to deprive a
single innocent person of their freedom. It seems like the needle has been
moving in the wrong direction on that. I'm not sure if that's just my
impression, or if it's because there's more awareness of these issues with the
internet/social networking...

------
tantalor
No mention of whether a judge signed a warrant for the arrest. In what world
can cops just show up and arrest you on your front lawn based on their hunch?

------
redorb
If it's statistically proven to not work with black people then I think the
only options are

1) Make it avoid black people, i.e. they aren't stored in the database and
aren't processed when scanned.

2) Put a 5 year hiatus on commercial / public use.

Either of these things is more acceptable than too many false positives. #1
is really interesting to me as a thought experiment because it makes everyone
think twice.

~~~
fatso784
The issue is that it's hard to determine who is considered "black" and who is
not, since race is a made-up construct of white supremacists that is now so
ingrained in American society that we find it difficult, if not impossible, to
reflect on it. Are we using the Fitzpatrick skin color scale? Are we
continuing the hypodescent rule (an algorithm, btw)?

Maybe we just outlaw face recognition in criminal justice entirely.

~~~
redorb
It definitely shouldn't be allowed as the main evidence. Perhaps the answer to
skin tone is to use the data and see where the algorithm starts to get too
many false positives...

The larger, overarching idea is to get everyone to think twice by making the
majority think twice. If white people think "it's only for us!?", it'll make
them really study the effects. (I'm white.)

------
alex_young
This technology will never be ready to use like this.

Similarly we shouldn’t collect vast databases of fingerprints or DNA and
search them for every crime.

Why? Because error rates are unavoidable. There is always some uncertainty,
and at a large enough scale you will find false matches even with perfect DNA
matching.

We must keep our wits about us and use these technologies to help us, rather
than to hunt for the hidden bad guy.

------
atum47
Well, I'm going to get something off my chest. Every time I shared a project
here using machine learning, people gave me crap, saying my models were
simplistic, or that I did something wrong, or that the solution didn't work
100% of the time. Well, I studied ML back in college: the basics, the
algorithms that started it all, linear regression, perceptron, adaline, kNN,
k-means... and guess what? ML doesn't work 100% of the time. I always wanted
to see how people would react when a car driven by ML hits something, or when
they base an important decision on the classification of an NN. ML should be
used alongside human intelligence, not by itself. You don't blindly trust a
black box.

------
kwonkicker
Due process should not be abandoned in favour of automation. This was police
negligence as much as it was a software mismatch.

One more thing: the article was a bit too dramatic about the whole incident.

------
d--b
The worst part is they used facial recognition, which finds someone who looks
like the suspect, and then they put the guy in a lineup and had him identified
by the victim. Wtf?

~~~
jl2718
That’s a good point. The selection from a lineup then gives less information
than it appears to, because it isn't independent of the face-recognition
match. So the real issue is poorly understood priors.

I’m pretty sure that this could be used fairly with a rigorous Bayesian
treatment.
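
Toy numbers to show why the dependence matters (every figure here is invented
for illustration). Start from a 10% prior that a database hit is the culprit,
per the base-rate arguments elsewhere in the thread:

```python
# Prior that the face-recognition hit is actually the culprit.
p_guilty = 0.10

# Lineup pick probabilities, treating the lineup as independent evidence.
p_pick_given_guilty = 0.80
p_pick_given_innocent = 0.20

# Naive Bayesian update (assumes independence from the face match):
naive = (p_pick_given_guilty * p_guilty) / (
    p_pick_given_guilty * p_guilty
    + p_pick_given_innocent * (1 - p_guilty))
print(f"naive posterior: {naive:.0%}")            # ~31%

# But the witness saw the same grainy frame the algorithm matched on,
# so an innocent look-alike is also very likely to be picked: say 70%.
p_pick_given_innocent = 0.70
correlated = (p_pick_given_guilty * p_guilty) / (
    p_pick_given_guilty * p_guilty
    + p_pick_given_innocent * (1 - p_guilty))
print(f"correlated posterior: {correlated:.0%}")  # ~11% -- barely moved
```

When the lineup photo comes from the face-recognition hit, the pick adds
almost nothing.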

------
neonate
The prosecutor and the police chief should personally apologize to his
daughters, assuming that would be age appropriate.

------
paulorlando
I've been thinking this sort of event has become inevitable. Tech development
and business models support extending the environments in which we collect
images and analyze them. Confidence values lead to statistical guilt. I wrote
about it here if interested:
https://unintendedconsequenc.es/inevitable-surveillance/

------
mistercool
relevant:
[https://www.theregister.com/2020/06/24/face_criminal_ai/](https://www.theregister.com/2020/06/24/face_criminal_ai/)

------
nebulous1
> In Williams' case, police had asked the store security guard, who had not
> witnessed the robbery, to pick the suspect out of a photo lineup based on
> the footage, and the security guard selected Williams.

Great job police

------
bosswipe
Boston just banned facial recognition, as have San Francisco, Oakland and a
bunch of other cities.

You can join this movement by urging your local government officials to follow
suit.

------
throwawaysea
A human still confirmed the match, right? That makes this not a facial
recognition issue but something else.

~~~
aaronmdjones
A human who had only seen the same grainy security footage that the algorithm
saw.

------
blackrock
TOTAL FAIL.

------
VWWHFSfQ
Sounds like this guy is about to get a big payday.

~~~
ncallaway
That, and we might get some kind of judicial ruling that current incarnations
of facial recognition software are racially biased.

It would be a great result if a court declared that the use of racially biased
facial recognition software is a violation of the 14th Amendment, and enjoined
PDs from using such software unless it can be demonstrated to be free of
racial bias.

~~~
ChrisMarshallNY
Might have something to do with the...monochromatic...makeup of most software
company C-suites.

Fortunately, that is changing, but not that quickly.

