Why facial recognition has led to false arrests (nautil.us)
121 points by dnetesn on Aug 24, 2020 | 103 comments



>But the next steps, where investigators first confirm the match, then seek more evidence for an arrest, were poorly done and Williams was brought in. He had to spend 30 hours in jail and post a $1,000 bond before he was freed. //

Seems more like "why lack of rigour in policing has led to false arrests".

Facial recognition is evidence, not proof. They clearly didn't view the video and compare it to the person's photo: if they could immediately tell it was the wrong person during the post-arrest interview, the same comparison would have ruled him out before the arrest.

Do they also just arrest people based on other 'witness' reports without any effort towards corroboration?


This is what social psychologists call "automation bias," which has been empirically observed in all kinds of situations from all kinds of people: https://en.wikipedia.org/wiki/Automation_bias

When we are told that a prediction is from an automated system, most people are more likely to believe it and not subject it to the same skepticism and critical thinking that we apply to predictions from humans. Even if there is a human-in-the-loop system that requires a human to confirm it, that's often just laundering the machine's prediction through a human. When we are primed by being told this is what The System predicted, we're more likely to override our normal processes. This is especially the case when The System has worked 99% of the time, which still leaves a lot of false positives.

Although this can also be the case with priming from predictions or judgments by other humans. The Asch conformity experiments involved planting fake test subjects in a group activity where subjects had to give answers to a very obvious perception question, and the real test subject could hear everyone else's answers. The fake subjects went first and all gave the blatantly wrong answer. The percent of people who conform varies with how you run the experiment, but typically a majority of the subjects end up giving the obvious wrong answer that everyone else did: https://en.wikipedia.org/wiki/Asch_conformity_experiments


Part of the blame also falls on the companies that sell the software: they are the ones who should insist that users of their systems manually double-check every automated result - but "our software makes mistakes" doesn't make for a great sales pitch. So instead of training people to doubt the results and use them only as a suggestion, they greatly exaggerate the capabilities of their software and make officers lazy about fact-checking the finds.


So give your human validator a lineup - not 1 match but say the top 5 matches plus a few chosen randomly from lower down the confidence list, presented in a random order with no information on which was the algorithm's top choice.
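A minimal sketch of that idea, assuming a list of candidate IDs already sorted by the model's confidence (every name and parameter here is hypothetical):

    import random

    def build_lineup(ranked_candidates, top_n=5, decoys=3, seed=None):
        # Take the top-N matches plus a few decoys from further down the
        # confidence list, then shuffle so the reviewer cannot tell which
        # candidate the algorithm ranked first.
        rng = random.Random(seed)
        lineup = list(ranked_candidates[:top_n])
        lineup += rng.sample(ranked_candidates[top_n:], k=decoys)
        rng.shuffle(lineup)
        return lineup  # deliberately carries no confidence information

The key design choice is that the output order encodes nothing: if the validator can guess which entry was the top hit, the lineup degenerates back into a single suggestion.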


I can see it happening when checking takes a long time - but they're probably man-days into an arrest by the point they're sitting in an interview room post-arrest. Again: the LEO looking at their tablet screen prior to arresting the guy, seeing he looks different, and not arresting him?

Are the LEOs getting bonuses for raw number-of-arrests?


It depends on the department, but raw arrest counts are absolutely part of individual officer performance metrics, with low arrest count officers being "pressured to find something" at the end of the month.

https://www.nbcnewyork.com/news/local/nypd-officers-arrest-q...


Which is absolutely a big issue with our policing system. Most of the time they just arrest easy targets like sex workers or people who could be charged with 'public misconduct' or something like that.


> Are the LEOs getting bonuses for raw number-of-arrests?

There may be a subconscious bias here... if you spend your entire year looking at murder suspects, you probably see the guy on the other side as just another scumbag... and probably want to make sure the victim gets justice (even if it's wrong)


When LEOs manage to get a conviction, they now have irrefutable proof (a conviction in a court of law) that they were right in the first place to make sure that scumbag was convicted. It’s a self-perpetuating cycle of increasing prejudice.


It's real interesting going through forensics courses and working with computer forensics professionals and seeing the opposite.

Since you're not briefed on the crime and only given a list of tasks to perform (ie, locate pictures of a vehicle), there might be less of a bias here.


> Are the LEOs getting bonuses for raw number-of-arrests?

A good number are nazis, so if the software enables them to enjoy life better by harassing a few more black youth, that's enough of a reward.


No, that’s way too easy an answer. It’s also othering the problem away; essentially saying “non-nazis would never behave this way”. But they would, have, and do.

Since this is a programmer-rich forum, let’s talk about bug fixing for a moment. When you are looking for a bug, it’s easy to convince yourself a number of things, like: “It’s a random glitch, just restart it and it won’t happen again.”, or “It’s this library which is buggy, it’ll be fixed when we upgrade it to the latest patch.”. Now, of course, we programmers are then punished mightily for our arrogance when the bug keeps happening. But what would happen to bug fixing if this never happened? If we could just declare that the bug was this or that, and move on?


Obviously the police deal with crime and disorder every day; this would attract people who are more on the right of the political spectrum.

This is exacerbated by deep-seated structural factors in the US: policing by consent is unknown, there is high income inequality and segregation, and the level of training is pitiful. It's no surprise that Amber Guyger's instagram feed looked the way it did. Of course those types would use any tool they are handed to have fun at the people's expense. The system selects for them.


This is still just othering police, and thereby avoiding solving any problem. I.e. if the problem is caused by some inherent factor in the people who choose to become police officers, then we can simply blame them and move on; easy hate clicks galore!

But I think police are molded by their experiences, and made by various vicious cycles to become the stereotypical bad police. I described one such cycle above; here is another: http://tailsteak.com/archive.php?num=545 If this is correct, then anybody, even the most well-meaning angelic people, would, over time, become bad police, but anyone could still merely claim that all police are bad because they are “right-wing” (as if that automatically is a bad thing) or racist, and nothing could ever be solved.


Police are also moulded by politics, they answer to someone, which usually isn't the locals. Consider Louisville, KY, where there have been demonstrations for 3 months now against the Breonna Taylor shooting, with no visible results. Such an environment attracts a certain kind of person, with a predictable outcome.


I would assume that the cycle of police recruitment and retirement is too slow to have this big of an effect. To affect police behavior, the iteration cycle must be fast (which I assume it isn’t), or the pressure must be huge (which we should have seen evidence of, I would think, since the large disparity between boss and subordinate would create some leaks).


Happens all the time. I want a status of "Developer in denial" in JIRA.


>Do they also just arrest people based on other 'witness' reports without any effort towards corroboration?

Pretty much. They will investigate for alibis and if there are fingerprints/DNA they will do that test. But absent other physical evidence a single eyewitness is good enough. Unsurprisingly they often are wrong. Netflix's Innocence Project has an episode where this happened. A woman was raped and thought she saw her assailant some time later. He was subsequently convicted and eventually exonerated due to DNA evidence.

Practically speaking there is going to be at least something else pointing to an accused. Otherwise, they wouldn't be showing that picture to a witness. But once you're suspected the standard is the witness picking the correct photo out of a lineup of 6. That's enough to be arrested and often enough to be convicted.


The police get all of the grief for bullshit like this. Prosecutors, despite their high minded idealism when they move on from that role, are a bigger issue.

Witnesses are easy to address at trial, and if you're lucky enough to have a competent lawyer, it's often possible to neutralize them even if they are telling the truth.

Systems are harder to cross-examine. Once successfully litigated, prosecutors can use precedent to avoid any scrutiny of the methods, operational processes, or other aspects.

I use a jury I served on as an example -- the police found a person who was almost certainly guilty of running over a person in a park late at night. The key piece of information was when it happened due to surrounding events. The key witness was out of play because he was arrested for perjury -- evidence came out immediately before trial indicating that he was driving a car after stumbling out of a bar, and he had testified that he was a passenger to the grand jury. So they had to scramble to use traffic light cameras (which are synced to GPS/cellular time) to establish a window when the defendant was in a particular area. Because of the events, the defense was able to cross-examine someone from the camera vendor, who was ripped up on the stand -- evidence thrown out and case settled for a reckless driving misdemeanor.

That event, where the vendor was questioned (and in this case failed to satisfy the court) is pretty rare. It's often assumed that systems work perfectly, while they often do not. For example, DNA matching is sound science, but forensic labs sometimes have unsound practices and controls.

If police are under pressure to close cases, and prosecutors are incentivized towards collecting scalps, the natural outcome with automated systems is going to be people going to jail because of bullshit tech.


AIUI, and AFAIR, DNA matching has a false positive rate of about 1 in a million, which makes for a lot of false matches at the scale of a country's population. That statistic is probably a few years old, but I'm not sure they've changed the tests used?


People read “99.97% match” and take it as “basically 100%”, when in actuality it means “could be any one of these ~100k Americans out of a 330 mil population”.


I wonder if actually making that explicit would reduce automation bias.

Something like, "99% match, statistically this person is one of 10,000 out of a million people in our city who would match"
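Spelling that arithmetic out, with illustrative numbers only (a hypothetical city of one million):

    # A "99% match" still implicates many people at population scale.
    population = 1_000_000
    match_specificity = 0.99  # read "99% match" as a 1% false positive rate

    expected_matches = population * (1 - match_specificity)
    print(f"About {expected_matches:,.0f} residents would match")  # ~10,000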


Or better yet, actually teaching proper statistics to cops, so that they can figure out p values of data as they collate evidence.


Heh. It's seemingly impossible to teach cops to care about basic things like not to kneel on people's necks or how to respect cries for mercy, and you think they're going to care about statistics? All the training and education in the world can't fix bad incentives. What is needed is automatic compensation for everyone falsely arrested by the police, and for criminal laws (eg "no murder") to be enforced for police officers.


> Seems more like "why lack of rigour in policing has led to false arrests".

The worst part of this is that police are often paid very well, often more than software engineers, and this is the quality of work they put out.

The median salary for cops is more than $105,000 a year[1] and some cops can make over $250,000 a year[2] with overtime. They can retire with full benefits and a pension after only 20 years of working.

This level of laziness and ineptitude shouldn't be accepted from what are often the highest paid government workers on the books.

[1] https://www.nj.com/news/2017/05/how_much_is_the_median_cop_s...

[2] https://www.nj.com/somerset/2019/11/4-cops-in-this-nj-town-e...


> The median salary for cops is more than $105,000 a year[1]...

From what I can tell 105k a year is the highest median police salary for any state in the country. Why bring up New Jersey specifically and act like it's representative? Either use the number for the whole country or the number for Michigan, which is where the events in the article took place (and where the median salary is less than half the NJ amount).


> Why bring up New Jersey specifically and act like it's representative? Either use the number for the whole country or the number for Michigan

No, I'll use New Jersey because it's the state that I'm most familiar with. New York City, which is across a river from NJ and is the largest city in the US and has the largest police department, has similar rates for their police force.

You will find that police officers are some of the highest paid government employees in towns, counties and states, and are the highest paid government employees in some cases.

In places where the median salary for police officers are lower, median salaries for software engineers are much lower, too.

My point is that police officers often make more than a lot of software engineers on this site do, and they're rewarded with benefits that are better than most engineers', along with a full pension after working for only 20 years. Yet we expect higher quality of work from junior engineers working on dating apps than we do the cops whose lazy investigative work costs people their freedom.


> No, I'll use New Jersey because it's the state that I'm most familiar with.

That's fine; you should say something like "In New Jersey, the median salary for cops is...". Stating only that "The median salary for cops is...", and then giving the number for your home state, which just happens to be the largest number in the country, in a conversation about Michigan specifically and the US broadly, is simply misleading. Yes, even though your sources make it clear it's NJ data. Your sources are supposed to back up your statements, not clarify their incompleteness.

I don't disagree with the point you're trying to make. I just saw a surprising statement, checked the source, and realized that the statement was missing some critical context. And not even context that would make sense, like the Michigan number vs. the US number. This is context that no reader could possibly infer from the conversation or your wording. Making arguments that way weakens positions, it doesn't strengthen them.


The answer, unfortunately, is likely yes. An entirely uncorroborated witness report isn't probable cause, but the standard for corroboration required is very low. There was a case Draper v. United States where an informant said "the guy's coming back from Chicago wearing suchandsuch clothes on September 9", and the police arrested him based on matching that description, with no corroboration of the underlying accusation that he was in Chicago to buy drugs.


> Facial recognition is evidence, not proof

The cops don't need proof for an arrest. That's why the standard is called probable cause. The cops just need a reason, not proof. [0]

(I don't think a facial recognition match should be enough for probable cause either)

[0] https://www.law.cornell.edu/wex/probable_cause


> Seems more like "why lack of rigour in policing has led to false arrests".

I agree with this.

> Facial recognition is evidence, not proof.

I don't think facial recognition is even evidence. At most, it's filtration/highlighting.


If policing was rigorous, we wouldn't need judges or juries.

Police should be assumed to not be rigorous in pursuit of their duties. They should be assumed to look for positive pieces of evidence, not exculpatory ones.

Given those two facts about the world, giving them more tools that are only capable of producing positive pieces[1] of evidence (within a confidence level below 100%) will result in more false arrests.

[1] These facial recognition systems point them at suspects - they never exonerate a pre-supposed suspect.


This certainly fits with the police folks that pulled over an SUV and arrested the driver due to a supposed plate match—with a plate belonging to a stolen motorcycle.


In my language, evidence and proof translate to the same word: "bewijs".


If the people in question are black, then yes, arrests or worse happen quite regularly without due diligence.


Do you have support for shortcutting of police processes based on skin colour? Like, the arresting officer fills out the 'corroborating evidence' part of the paperwork and says 'none, suspect is black' and that's fine?

Or, you just suppose that to be true?

AIUI certainty of match is much higher for light-skinned people: do officers get data on match certainty?

Forgive my suspicion, but flagrant assumption of racism fuels societal division and strong evidence can avoid unnecessary animosities.


There is a preponderance of YouTube videos of encounters like this:

> In one especially notorious encounter, police in 2009 arrested a distinguished black Harvard professor, Henry Louis Gates Jr., at his own home in Cambridge, Mass., after a white woman called the police to report a possible break-in.

https://www.washingtonpost.com/national/public-outrage-legis...

Breonna Taylor was not only shot to death in her sleep, but was also not given critical aid for twenty minutes after they discovered she was injured:

> Mr. Walker told investigators that Ms. Taylor coughed and struggled to breathe for at least five minutes after she was shot, according to The Louisville Courier Journal. She received no medical attention for more than 20 minutes after she was struck, The Courier Journal reported, citing dispatch logs.

https://www.nytimes.com/article/breonna-taylor-police.html

That, I would say, is a "shortcutting" of police procedure. "Avoiding unnecessary animosity" is less important than facing the problem American society has with racism headlong. Given the evidence that we have, where is the data that it is not happening?


Before listing those two anecdotes, did you check that there's not also a large amount of similar incidents victimizing whites? And then compared the amounts? Or did you simply see it on the news, and assumed it was representative of a larger trend, just like someone watching Fox would assume blacks are more likely to be criminal, because (I'm assuming) the crimes Fox chooses to report on feature black perpetrators?

Wouldn't it be better to rely on statistics, rather than cherry-picked anecdotes? For example, the number killed by police, normalized to some difficult to fudge measure of crime incidence, such as the homicide rate?

How do those two examples measure up against the 406 whites and 259 blacks killed by police in 2019 [1]?

[1] https://mappingpoliceviolence.org/nationaltrends


Well, black people constitute ~13% of the population versus white people's 73%. So if police killings were proportional to population, there should be almost 6x as many white people killed by police as black people, or ~1,500.

Police brutality is a leading cause of deaths among young men, but especially so for young black men:

> Among men of all races, ages 25 to 29, police killings are the sixth-leading cause of death, according to a study led by Frank Edwards of Rutgers University, with a total annual mortality risk of 1.8 deaths per 100,000 people. Accidental death, a category that includes automotive accidents and drug overdoses, was the biggest cause at 76.6 deaths per 100,000, and followed by suicide (26.7), other homicides (22.0), heart disease (7.0), and cancer (6.3).

> [...]

> For a black man, the risk of being killed by a police officer is about 2.5 times higher than that of a white man. “Our models predict that about 1 in 1,000 black men and boys will be killed by police over the life course,” the authors write.

https://www.washingtonpost.com/business/2019/08/08/police-sh...

There are plenty of statistics if you look for them. Which is why, in my mind, the onus is on skeptics to prove that police brutality does not especially affect the black population.


> black people constitute ~13% of the population versus white people's 73%

https://mappingpoliceviolence.org/nationaltrends looks at whites and Hispanics separately. Non-hispanic whites are only 60% of US population: https://en.wikipedia.org/wiki/Demographics_of_the_United_Sta...

You also failed to normalize to the crime rate. That kind of analysis would lead you to conclude that the police are shockingly biased against males, to a far, far greater degree than against blacks.


"These contain some criminal mugshots, but the bulk of the images comes from non-criminal sources such as passports and state driver’s license compilations; that is, the databases mostly expose ordinary, generally innocent citizens to criminal investigation."

Worth noting that a mugshot does not imply criminality. As the lead off in the article shows, lots of innocent people get arrested and thus, have mugshots on file.


Fairly banal crimes can also lead to mugshots. I had to have one taken over a parking violation.


I mean, if your algorithm is accurate 99.9% of the time, and you're using it 24/7 in a country with millions of inhabitants, you should expect tens of thousands of false positives every day. This is why some wet blankets have been strangely skeptical of delegating all responsibility to The Algorithm.


Having spent time in the industry with one of the supposed “top companies”, the accuracy was only good when the model was over-fit on good training data, real world performance wasn’t close nor was it evaluated much.


If only it were as good as 99.9; it's probably amazing if it's 50.


I love how you just handwave away the possibility of destroying someone's life with a false positive. This is an area like automated driving where the error rate needs to be 0 if it's going to be relied upon as truth.


I'm actually agreeing with you, perhaps the sarcastic tone didn't go through.


Most people are incompetent. The tax lawyer down the street, the pharmacist, the dental assistant. It's just that most people aren't in a position to hurt others because there are checks:

- the recipient checks the pharmacist's work

- the dentist checks his assistant's work

- the state allows the tax lawyer some leeway

16% of Americans have an IQ less than 85. 50% have an IQ less than 100. If you build a tool that requires a person to have average IQ, it will fail with half of all users.
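Those figures follow directly from the usual IQ convention (a normal distribution with mean 100 and standard deviation 15); a quick check with Python's standard library:

    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)
    print(f"Below 85:  {iq.cdf(85):.1%}")   # ~15.9%, i.e. about 16%
    print(f"Below 100: {iq.cdf(100):.1%}")  # 50.0%, by symmetry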

That doesn't mean it's a bad idea: after all, you only need to convince the buyer, not the user.


What's the incentive to reduce false arrests? Are the arrestors' jobs at stake? Is their wallet at stake?

Why put in the extra effort if there are no consequences for being wrong?

Most of the time it seems the worst-case scenario for the perpetrator of the false arrest is being relegated to desk-duty or a paid suspension. If there's a settlement it's the tax-payers that foot the bill. Until that changes, there's basically no reason for them to make the effort to reduce false arrests. This isn't a problem that will be solved with recognition algorithms getting a few percentage points more accurate every year. It will require actual reform.


This article appears in my HN feed with an article about throwing away React. The React article explains that having code that prevents engineers from doing whatever they want to is essential. If we can't trust engineers coding, why do we trust police arresting? Restraints are needed.


The best algorithm that NIST tested had a 0.01% false positive rate and it was uniform across races.

Consider an airport like Heathrow, which uses facial recognition widely. 220,000 people travel through heathrow per day. With a 0.01% false positive rate, you are falsely flagging 22 people per day.
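That back-of-the-envelope arithmetic, using only the numbers quoted above (not measurements of any deployed system):

    # Expected false flags per day = daily comparisons x false positive rate.
    false_positive_rate = 0.0001   # 0.01%, the NIST figure cited above
    passengers_per_day = 220_000   # rough Heathrow daily throughput

    print(passengers_per_day * false_positive_rate)  # -> 22.0 false flags/day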

You cannot rely on facial recognition to jail people.


Facial recognition seems a bit like shoe print id. A valuable clue, but not enough to ruin somebody's life over. That is so obvious that anecdotes about false arrest seem like obvious human incompetence.


I don't think it is valuable. Especially with falling crime rates, developing such systems is basically a bullshit job in a bullshit industry.

Exaggerated, but I haven't really seen either need for or success of any facreg system.

Sure, security agencies might be able to supply some cases when it helped to catch a criminal, but the side effects suggest it isn't even slightly worth it.


Falling crime rates? For some categories of crime, sure, but e.g. murder rates this year have risen sharply in major US cities.

https://nyti.ms/3gyoVqd - "At least 336 people have been murdered in Chicago through July 2, according to the Chicago Police Department; because murders typically increase in the summer, the city is on track to match the 778 deaths in 2016, its deadliest year since the mid-1990s. <...> Chicago is not alone. Before the coronavirus hit, homicides were escalating nationwide in early 2020, and although the lockdown brought a pause, they began rising again as the stay-at-home measures were lifted. A national study showed that homicide rates fell in 39 of 64 major cities during April and began creeping up in May."

https://www.nytimes.com/2020/08/04/nyregion/nyc-shootings-co... - "On Sunday, another 19 people were shot in New York City, one fatally. Through the first seven months of this year, shootings were up 72 percent over the same period last year and murders rose 30 percent, even as reports of other violent crimes like rape, assault and robbery fell."


I think these problems might have to do with a current conflict within the US society that needs to be talked about and surveillance or the lack of it is neither the reason nor the solution.


Indeed. Historically, reducing contradictions and tensions in society is a much more durable solution than ramping up enforcement.


I don't understand why you think it's not a solution.

Witnesses refusing to talk to cops is a problem. Given the "current conflict", witnesses may be even less willing to talk than usual. And camera surveillance with a good facial recognition system (i.e. one that has been trained and certified to recognize faces of all ages, ethnicities, and phenotypes, not just white college students) would certainly seem like a solution that could help track down murderers.


The UK has loads of camera surveillance; it didn't really help with the crime rates. Seems better to keep people from getting into crime in the first place. Meaning: reduce the difference between super rich and very poor, ensure people have the ability to have a good life (basically: afford to start a family), etc.

Anyway, that's what helped in NL.


And it doesn't matter if the facial recognition is done by a human or a machine. They both produce false results on a regular basis.


The problem here is that for the arrest they claim to be using not facial recognition but witness identification. In theory that sounds good.

But when you identify a face similar to the suspect's and then show it to a witness alongside some random faces, you are actually giving the witness a multiple-choice question that is biased toward the match you have found.


I'm not trying to be snarky here, but can anybody point to an example of machine learning used in law enforcement that hasn't led to inequitable outcomes?


Equally non snarky, what is something in law enforcement that has led to equitable outcomes?


The more I look into the history and tactics of modern policing the more this becomes apparent.

Police are willing to use broken machine learning technology because it “works for them” - they don't necessarily care about accuracy; they just want someone locked up.


The police are fundamentally a low paid blue collar labor force with union protections. They are not your college professor or the protagonist of a murder mystery, but more like a plumber or construction worker.

The moment you understand that, it's not a mystery that they sort of blindly accept tools that seem to work without deep introspection. They are not equipped to be a data analytics shop, they are equipped to fill out paperwork and sometimes get into tussles.


> The police are fundamentally a low paid blue collar labor force with union protections

Please keep in mind that this is not true worldwide. In various countries you're only admitted after 1.5-2 years of police-specific education, and you might still not become a police officer if they deem you unfit.

Also, you don't seem to value plumbers or construction workers. A colleague used to be a construction worker. It's very interesting to hear the knowledge he has on that. Plus pretty damn handy for understanding how to improve your house. Just because he did a lot of work with his hands doesn't mean he isn't damn bright.


At no point did I indicate I don't value blue collar workers such as plumbers or construction workers, quite the opposite. At most I said the police were low paid blue collar workers - and in general I do believe most blue collar workers are underpaid. For what it's worth, I do also value our police forces, even with their flaws. But a blue collar profession handles knowledge differently than a white collar one - it's not that one is better or worse, but they are intrinsically different ways of knowing.

> Please keep in mind that this is not true worldwide

You're correct and my comment is US centric. What I find interesting is that some US cities have seemingly moved farther from a more educated police force, removing requirements around having college degrees and reducing trainings.


Enforcement of the laws is generally an equitable outcome. I'm not saying the law is perfect, but the burden of crime falls disproportionately on the least privileged. In a city with a lot of burglars, the rich buy insurance and private security and shotguns while the poor just don't get to feel safe in their homes.


Ah, but the laws themselves are unjust. Stealing $5 from your work till is an imprisonable offense, stealing $100 from your employee's paycheck may result in you having to pay it back.


The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.

   - Anatole France


Again, I agree the law is unjust in some ways! Many forms of wage theft ought to be crimes.

What doesn't make sense to me is the implication that decriminalizing petty theft would make the law more just. If people go around stealing $5 all the time, Walmart can absorb that loss - the guy who's flat broke until his paycheck this Friday can't.


> Stealing $5 from your work till is an imprisonable offense

Unless you pointed a gun or knife at your coworker, no it's not.

> but the laws themselves are unjust

Then address the laws and move to change them.


> Unless you pointed a gun or knife at your coworker, no it's not.

Technically, yes, it is in most states if the judge wanted to throw the book at you (for example, if you've ever committed a crime before) - https://www.criminaldefenselawyer.com/crime-penalties/federa... - and not only that but three strikes laws can result in imprisonment even if it wasn't.


>Equally non snarky, what is something in law enforcement that has led to equitable outcomes?

DNA.

I can't think of anything that was developed by law enforcement for law enforcement though.



Could you just pick the clearest cut one, and type a few sentences explaining it?

edit: not trying to get you to do my work for me, but for an existence proof people don't commonly list a bunch of things by name without explaining how they apply. They list one thing (or one class of things, or how to construct a thing), and exactly how it applies.


I sampled a few of those links, and without exception they all touted the true positive matches without any mention of false positives, nor of what consequences were visited upon the innocents who were pipelined in to the criminal system by them.


Anyone who understands testing knows that you cannot eliminate both false positives and false negatives, and that you have to pick which one to minimize.

That's why these systems always have to be combined with good policework. What's being asked for is a test that cannot be passed.

This pretty much applies across the board with evidence testing.
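A minimal sketch of that tradeoff with made-up match scores: genuine matches tend to score high and non-matches low, but the distributions overlap, so moving the decision threshold only trades false negatives for false positives (every number here is invented for illustration):

    import random

    random.seed(0)
    genuine   = [random.gauss(0.80, 0.10) for _ in range(1000)]
    impostors = [random.gauss(0.55, 0.10) for _ in range(1000)]

    for threshold in (0.60, 0.70, 0.80):
        fn = sum(s < threshold for s in genuine) / len(genuine)
        fp = sum(s >= threshold for s in impostors) / len(impostors)
        print(f"threshold {threshold:.2f}: false neg {fn:.1%}, false pos {fp:.1%}")

Raising the threshold drives false positives toward zero while false negatives climb, and vice versa; no setting eliminates both.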


No, what's being asked for is evidence that the system being developed does not result in an increase in inequity in the real world. That's testable, and achievable. The list of promos/booster articles posted is not any form of demonstration of that.


There may be some winners in here but every NY Post article I've ever read about policing appears to be dictated directly by NYPD and posted without challenge.


Every method of finding criminals results in innocent people being arrested from time to time. That's the nature of arrests.

Maybe there is something uniquely bad about facial recognition, but to show that, compare it to other techniques, not to a perfection standard.


There are several issues, though.

1. As discussed in the article, due to training sets or methodology, these systems are poor at distinguishing non-Caucasian faces. Or at least poor relative to a human looking at the image and the person together.

2. Police (though not just them) have an unfounded trust in the machines.

3. Police take this unfounded trust and push through arrests before they realistically should (like in the opening story of the article).

4. People arrested for felonies are fucked in the US. It can take years to get your record expunged. Until then your name shows up in lots of searches, you can lose licenses to conduct some kinds of business (or be unable to obtain them). (Expunging can also be very expensive.)

You may not mind innocent people being caught up in the dragnet, but as a society we probably should care a lot more than we do.


Yeah, those issues may be big problems. Or they may not.

My point is that the way to know is to compare this tool with others and weigh pros and cons.


The open question here is who this affects. If the result is 1000 wrong arrests out of 10mil, that might be acceptable. But if every one of those arrests is that 1 poor guy who gets arrested every time he steps out of his house because his face is a universal match, that's a very different result. Ditto if this leads to large numbers of incorrect arrests in certain groups (race, gender, age etc).

This is the problem with anything involving people: distributions matter a lot and distributions are hard...


> “Williams’ case is a signal to stop the ad hoc adoption of facial recognition before an injustice occurs that cannot be undone.”

Arguably, Williams' case is that injustice. It should not have happened in the first place (although anyone in the field could have told you it would be inevitable).

Clearly the machine-human interface is insufficient. A big honking warning is still insufficient to overcome automation bias (see other top level comment).

Perhaps the "lineup" functionality needs to be baked in. The system won't divulge details until an operator (or operators) selects hits from a pool.


I worked in a Law Enforcement office that used Facial Recognition. Every single time we would give officers a potential match, it came with a disclaimer that it was a POTENTIAL match. It was up to the detectives and officers to actually do their job and find other pillars of information.

This isn't a failure of facial recognition, it's a failure of the stupidest people I went to high school with thinking it's some sort of perfect system.


If the software vendor knows the end user will not use their product as intended, at what point is the vendor complicit in the impacts of that misuse?


You can't paint a proper moral picture of a situation by imagining your responsibility is compartmentalized and that you operate in an environment where others will or at least could in theory operate ideally.

Such an approximation must, as part of the moral calculus, be incrementally replaced with a more accurate picture that takes into account how your fellow humans actually behave.

If, given how people actually behave, evil or injustice is done, that is your fault. Not partially your fault: your fault. Guilt doesn't divide, it just multiplies. You are, I think, bound to do not the least you can do but rather the most you can do. The best idea I've heard in this thread is to always show multiple results with no indication of which was higher priority. Force a human into the loop.


Given the well-documented penchant of police to use “fits the description” as a pretext to harass the public, especially people of color, I would say it is borderline negligent to allow the use of algorithmically generated probable cause justifications.


The real question is: overall, did face recognition lead to a higher proportion of false arrests, and even more importantly, false convictions?

Clearly, facial recognition can make mistakes that lead to false arrests. Of course it is important to minimize that number, but that alone doesn't tell us if facial recognition is justified.

The thing is:

- An arrest is not a conviction; it is important to know (if possible) how many of these facial recognition false arrests led to false convictions. I suspect not a lot, because judges and attorneys are still human.

- How many false arrests did facial recognition prevent? It can happen if the system correctly identifies the culprit before an innocent is arrested. It can also be used to cross-check false/unreliable testimony.

Note: I am not talking about the privacy implication of facial recognition, only its effect on criminal investigations.


Those police should have been required to make sure he was a match. Feels like it was mostly due to racism, and that's fckin horrible. Then they blame it on the computer. Like, yeah, but it's also their fault for not double-checking.


The issue highlighted in this article is that it seems that police deemed facial recognition as reliable as DNA so that if the computer says it's that person then they are already guilty.

This is a simple issue that could be easily solved by making sure that facial recognition should be treated as a very good lead but not as a smoking gun, so that the police approaches "suspects" more neutrally before deciding whether to arrest them.


So how do you get police to behave that way?


As if humans have never misidentified people or arrested the wrong person?

I find these articles very silly.

Maybe the photo on the driver's license was not good enough, so only in the real world could the police see that it was not the same person.

In any case, why not scan the database automatically (the evil, evil facial recognition), and then double check by humans?

Even when humans police, I think it is always just about probabilities. Then they follow up (ideally) and try to drive the probability of being correct higher.

Obviously nobody should be tried automatically without any recourse.


From the article:

> “The cops looked at each other. I heard one say that ‘the computer must have gotten it wrong.’” Williams learned that in investigating a theft from the store, a facial recognition system had tagged his driver’s license photo as matching the surveillance image. But the next steps, where investigators first confirm the match, then seek more evidence for an arrest, were poorly done and Williams was brought in. He had to spend 30 hours in jail and post a $1,000 bond before he was freed.

This article just explained that someone who was clearly not the right person was still arrested by human police officers and charged even though those very same human police realized it was the wrong person.

The police wouldn't have even shown up at this random person's house if it weren't for the technology. They were the failsafe, and they failed too.


That's not the fault of the face recognition, though. And it would have happened in the same way if a human had misclassified the image.

Do they propose nobody should ever look at images of suspects and try to identify them?


> why not scan the database automatically (the evil, evil facial recognition), and then double check by humans?

Because people tend to think "The computer said this was right, therefore it is unequivocally the correct answer." This is obviously incorrect, but is how many laypeople view it, even if instructed otherwise.


Then people have to relearn, but it is not the fault of the algorithm.

False identifications by humans happen a lot, too. I think it used to be an especially big issue for black people.

Like if a black person commits a crime, and the police do a lineup for a witness to identify the culprit, they would just point to the one black person in the lineup and think they did it. Surely computers can do at least as well as that, probably better.


In fact that's exactly what happened. The police realized it was the wrong guy and still brought him in; he still lost 30 hours and had to post bail.

I can't even imagine if two morons showed up to my house, saw that I was the wrong person, and then arrested me in front of my family anyway.


But then it can't possibly be the "people just believe computers" effect, because the article explicitly states the police realized the computer was wrong.


Then in this case it's the same as phishing emails. People know it's wrong, and yet still follow along anyways.

This is not necessarily a foregone conclusion, but it occurs too often to dismiss.


Nothing indicates that it is specific to algorithmic facial recognition. Actual humans make mistakes identifying suspects all the time. At the very least, the article should address that and compare the reactions, to prove people are more likely to act despite knowing better when the ID was made by a computer.


A reasonable point.


On Hacker News, can we please submit https links instead of http? An S goes a long way in this hostile internet environment nowadays.





