Facial Recognition Leads To False Arrest Of Black Man In Detroit (npr.org)
661 points by vermontdevil 12 days ago | 279 comments





Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

Most of us here can be excited about facial recognition technology but still know that it's not something to be deployed in the field. It's by no means ready. We might even weigh the ethics before building it as a toy.

But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

[1]: https://news.ycombinator.com/item?id=21339530


52% is little better than a coin flip. If you have a million individuals in your city, your confidence should be in the ballpark of 99.9999% (1 individual in 1 million). That has really been my concern with this: the software will report any facial match above 75% confidence. Apart from that being appalling confidence, no cop will pay attention to the percentage; they will immediately arrest or kill the individual.

Software can kill. This software can kill 50% of black people.


Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Even if it was correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but to put all of the blame on software strikes me as an incomplete assessment. Technically the software isn't killing anyone, irresponsible users of it are.


> Technically the software isn't killing anyone, irresponsible users of it are.

It's beyond irresponsibility - it's actively malevolent. Unfortunately there are police officers, as demonstrated by recent high-profile killings by police, who will use the thinnest of pretexts, like suspicion of paying with counterfeit bills, to justify the use of brutal and lethal force.

If such people are empowered by a facial recognition match, what's to stop them from similarly using that as a pretext for applying disproportionate brutality?

Even worse, an arrest triggered by a false-positive match may be more likely to escalate to violence, because the person being apprehended would be rightfully upset at being targeted and could appear to be resisting arrest.


A former employer recently got a fraudulent restraining order against me. I’m homeless and encounter the police all the time. I consider it a probable contributing factor to my death, which they are almost certainly pleased about. Nobody in any way has ever seen me as violent, but now I am in a national “workplace violence” protection order database, aka. violent and/or unstable. I am homeless and would rather continue my career than fight it. It seems like it could make people with less to lose turn violent. I feel anger and disappointment like never before. (OpenTable is the company, their engineering leadership are the drivers of this).

My point was that this technology should not be used as evidence, and should not be grounds to take any forceful action against someone. If a cop abuses this, it is the cop's fault and we should hold them accountable. If the cop acted ignorantly because they were lied to by marketers, their boss, or a software company, those parties should be held accountable as well.

If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.

A weatherman saying there is a 90% chance of rain is not evidence that it rained. And I understand the fear that a prediction can be abused, and we need to make sure it isn't abused. But abolishing the weatherman isn't the way to do it.


> If your strategy is to get rid of all pretexts for police action, I don't think that is the right one.

Not at all.

> Instead we need to set a high standard of conduct and make sure it is upheld

Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.


> Technically the software isn't killing anyone, irresponsible users of it are.

Sure, but at this point we know how irresponsible users often are; we know this to be an absolute fact. If the fact of users' irresponsibility isn't the centerpiece of our conversations, then we're being incredibly irresponsible ourselves.

The material manifestations of how these tools will be used have to remain at the center if researchers place any value whatsoever on our ethical responsibilities.


Yep, there are so many psychology studies that show groupthink, people using statements of an authority as a way to remove individual responsibility, and people overriding their own perceptions to agree with an authority.

"I guess the computer got it wrong" is a terrifying thing for a police officer to say.


Software flies rockets, planes, ships, cars, factories and just about everything else. Yet somehow LE shouldn't be using it because... they are dumb? Everyone else is smart tho.

If you fly a plane, drive a car or operate a factory, your livelihood and often your life depends on your constantly paying attention to the output of the software and making constant course-correcting adjustments if necessary. And the software itself often has the ability to avoid fatal errors built in. You rely on it in a narrow domain because it is highly reliable within that domain. For example, your vehicle's cruise control will generally not suddenly brake and swerve off the road so you can relax your levels of concentration to some extent. If it were only 52% likely to be maintaining your velocity and heading from moment to moment, you wouldn't trust it for a second.

Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have been historically minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust in the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause, the subject's life is thereby forfeit. From the perspective of the officer, seems a no-brainer.


Were you asleep for all coverage of the 737 MAX MCAS, or the technical failures that contributed to multiple warships casually driving into other ships?

https://features.propublica.org/navy-accidents/uss-fitzgeral...

https://features.propublica.org/navy-uss-mccain-crash/navy-i...

Software allows us to work very efficiently because it can speed work up. It can speed us up when fucking things up just as well.


Airbus has been fly-by-wire for more than three decades. They did have some issues, but they were solved. So will the 737's be.

Did you respond to the wrong comment? I don’t believe I implied anything close to what you just said.

You articulated very well what scares me about the next 15 years.

I have written great software, yet it sometimes had bugs or unintended consequences. I cannot imagine how I'd feel if it were to accidentally alter someone's life negatively like this.


The major problem with any solution we have to contend with is the fact that the ratio of appropriate to inappropriate police interactions is unlikely to change regardless of the system or official procedure, so any system that increases the number of police interactions must therefore increase the number of inappropriate police interactions.

Consider that not everyone understands how machine learning, and specifically classifier algorithms, work. When a police officer is told the confidence level is above 75%, he's going to think that's a low chance of being wrong. He does not have the background in math to realize that, given a large enough real population size being classified via facial recognition, a 75% confidence level is utterly useless.

The reported 75% confidence level is only valid when scanning a population size that is at most as large as the training data set's. However, we have no way of decreasing that confidence level to be accurate when comparing against the real world population size of an area without simply making the entire real population the training set. And none of that takes circumstances like low light level or lens distortion into account. The real confidence of a match after accounting for those factors would put nearly all real world use cases below 10%.

Now imagine that the same cop you have to explain this to has already been sold this system by people who work in sales and marketing. Any expectation that ALL police officers will correctly assess the systems results and behave accordingly fails to recognize that cops are human, and above all, cops are not mathematicians or data scientists. Perhaps there are processes to give police officers actionable information and training that would normally avoid problems, but all it takes is one cop getting emotional about one possible match for any carefully designed system to fail.

Again, the frequency of cops getting emotional, or simply deciding that even a 10% possibility that someone they are about to question might be dangerous is too high a risk, is unlikely to change. So providing them with a system that increases their number of actionable leads, and therefore interactions with the public, can only increase the number of incidents where police end up brutalizing or even killing someone innocent.
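
To put rough, hypothetical numbers on the population problem (a sketch that treats the match threshold as a fixed per-person false-positive rate and assumes independent comparisons, which is a simplification):

    # Hypothetical sketch: expected spurious hits when a whole city is scanned.
    def expected_false_matches(population, per_person_false_positive_rate):
        # Every innocent face compared is another chance for a spurious hit.
        return population * per_person_false_positive_rate

    # Even a strict-sounding 1-in-10,000 false-positive rate yields roughly
    # 100 innocent "matches" in a city of one million people.
    print(expected_false_matches(1_000_000, 1e-4))  # 100.0

Which is why a reported score like 75% against a city-sized gallery tells an officer almost nothing on its own.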


> But we shouldn't do that.

The average human sucks at understanding probabilities.

Until we can prove that most people handling this system are capable of smart decision making, which the latest police scandals do not lead us to believe right now, those systems should not be used.


> Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Do you work in a commercial software firm? Have you ever seen your salespeople talk with their customer contacts?

The salespeople and marketing departments at the firms that make this technology and target law enforcement markets are, 100%, full stop, absolutely making claims that you can trust the software to have full control over the situation, and you, the customer, should not worry about whether the software should or should not have that control.

Being able to use something "irresponsibly" and disclaim responsibility because AI made the decision is. a. selling. point. Prospective customers want. to. give. up. that. authority. and. that. responsibility.

Making the sort of decisions we ask this shit to make is hard, if you're a human, because it's emotionally weighty and fraught with doubt, and it should be, because the consequences of making the wrong decision are horrific. But if you're a machine, it's not so hard, because we didn't teach the machines to care about anything other than succeeding at clearly-defined tasks.

It's very easy to make the argument that the machines can't do much more, because that argument is correct given what tech we have currently. But that's not how the tech is sold--it becomes a miracle worker, a magician, because that's what it looks like to laypeople who don't understand that it's just a bunch of linear algebra cobbled together into something that can decide a well-defined question. Nobody's buying a lump of linear algebra, but many people are quite willing to buy a magical, infallible oracle that removes stressful, difficult decisions from their work, especially in the name of doing good.

tl;dr capitalism is a fuck. we can pontificate about the ethical use of the Satan's toys as much as we like; all that banter doesn't matter much when they're successfully sold as God's righteous sword.


>Technically the software isn't killing anyone, irresponsible users of it are.

Irresponsible users, yes, but they are users who are using the software exactly as it was marketed to be used.


Any software developer will tell you how the marketing and sales departments will say or spin ANYTHING they can get away with to sell the product.

Maybe software is more like laws; judges generally aren't guilty of issuing death sentences.

> 99.9999%

Then the justice system would implode. Judicial policy is "software" too, and nobody holds the judiciary or police to that absurd level of excellence, even if we're talking about the death penalty.


> even if we're talking about the death penalty.

And that's also the core argument for why some countries abolished the death penalty.


The justice system would implode if half the innocent people strong-armed into taking plea deals (with threats of much harsher sentences if they go to court) chose not to take them. That “software” is already buggy AF and needs some fundamental fixes. Setting a high standard for some crazy new AI stuff is a smaller change than fixing what’s already broken.

I would hope that the software/ML engineers who wrote it know about probability theory, and why the prior probability should be set at 0.0001% or so.

So that if we print 52% on the screen, it means we've already gathered about 20 bits of evidence (20 coin flips all coming up heads), at which point the suspicion would be real.
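
A back-of-envelope version of that calculation, assuming the roughly one-in-a-million prior suggested above:

    import math

    prior = 1e-6                     # ~1-in-a-million prior that this is our person
    posterior = 0.52                 # what the screen reports
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    likelihood_ratio = posterior_odds / prior_odds
    print(math.log2(likelihood_ratio))  # ~20 bits of evidence

So a displayed 52% would already imply a very strong match signal; the worry elsewhere in the thread is that the systems being sold don't fold in any such prior.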


What a great comment. This encapsulates my concerns about the topic eloquently. The technology is not ready for use.

Just to be clear, parent is describing fictional software, not the system in the article. You seem to be conflating the two.

Amusing.

At this point facial recognition is fictional software.


It would have to be even higher than that level of accuracy, because every person is going to be 'tested' multiple times.... if everyone's face is scanned 100 times a day, the number of false positives is going to be even higher.

We shouldn’t assume those tests and errors are independent, they probably aren’t, but you are right that the overall error rate would be inflated.
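
As a rough illustration of how repeated scanning inflates the error, optimistically assuming independent scans (which, as noted, they probably aren't):

    p_false_match_per_scan = 1e-5   # hypothetical per-scan false-positive rate
    scans_per_day = 100
    p_at_least_one = 1 - (1 - p_false_match_per_scan) ** scans_per_day
    print(p_at_least_one)           # ~0.001 per person per day, roughly 30% over a year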

It’s not at all obvious to me that the accuracy threshold should scale with city size. Some small town shouldn’t use a system that is 1000x less accurate.

This shit right here. This is why I don't stop for the inventory control alarms at department store doorways if they go off. I know I've paid, and little sirens are just a nuisance at this point.

This is why I've never stopped for receipt checks, because it's my receipt, and I've paid. The security theatre is just bad comedy.

Just because the machine says I've done a no no, doesn't mean I can't come back and win a lawsuit later. It doesn't absolve cops of doing their jobs. I have a winning complexion, so I'll never enjoy a false positive, but if I do, I'll make sure it bankrupts whatever startup incubator garbage decided to shill a replacement for real law enforcement.


Is this an attitude that is safe for people of all races, though?

Also, can everyone afford to pursue lawsuits?


> Is this an attitude that is safe for people of all races, though?

Yes, it is. Security cannot stop you for bypassing alarms and receipt checks. They have to have definitive proof that you stole something before they can lay a hand on you. Even in membership stores like Costco, the most they can do is cancel your membership. If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.


Theory & Law != What Actually Happens

> If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.

This falls squarely into the genre of "yes, you are technically right, but you may have spent a week in jail and thousands to tens of thousands of dollars of time and money to prove it, for which you will not be fully compensated."


My point is that anyone can walk by and nothing will likely happen, but if it does, there is some recourse, and the unlikely event is just as unlikely regardless of race.

That’s just false. Brown people get shot in malls for much less.

> Brown people get shot in malls for much less

Not by security or police, so my point still stands.



It's not exclusive to brown people, nor does it happen more to brown people.


That doesn't separate out the non-justified killings that we're talking about here. Also, there's no indication that race is the primary cause of the killings.

How could you? Nobody separates out the non justified killings. That’s half the problem. Your world seems much safer and more just than mine.

I am pretty sure in most places they can't touch you even if you do steal something.

So out of curiosity, you just roll out of Costco with a cart full of food and gear while the receipt checker tries to stop you?

I believe in a regular store you can just roll by the security without letting them check your stuff. They can make a citizens arrest or call the cops if they think you stole something, but at great risk of a lawsuit if they are wrong. However, Costco is a private club. You agree to their terms and conditions as a member of that club, and you must abide by the receipt check or they can ask you to leave. That was my understanding of the situation a decade ago, things may have changed.

At most Costco can cancel his membership. In other stores, if they find regular abusers, they can get restraining orders. Fry's electronics used to do that against a few customers.

Worse - an AI decision puts an obligation on the user to follow it. What do I mean? Well - imagine you are a cop; you get an auto flag to arrest someone and use your discretion to override it. The person goes on to do something completely different; like they are flagged as a murderer but then go and kill someone DUI. You will be flayed, pilloried. So basically safety first, just do the arrest. The secret is that these systems should not be making calls in this kind of context because they just aren't going to be good enough. It's like cancer diagnosis - the oncologist should have the first say, the machine should be a safety net.

You see it everywhere with AI and other tools. We overly trust them. Even when doctors have high confidence in their diagnosis, they accept a wrong AI-recommended conclusion that contradicts it.

https://www.nature.com/articles/s41591-020-0942-0

Bit like with self driving cars - if it's not perfect we don't know how to integrate it with people


That's interesting. I can imagine in cases like this it's not necessarily that the doctor doubts their own diagnosis, but rather the AI is essentially offering to relieve them of responsibility for it either way.

It's like in human hierarchies - it's often not the person who is more likely to make the best decision who gets to decide, it's the one who is going to bear the consequences of being wrong.


> The trouble is not that the AI can be wrong

Exactly what I thought when I read about this. It's not like humans are great at matching faces either. In fact machines have been better at facial recognition for over a decade now. I bet there are hundreds of people (of all races) in prison right now who are there simply because they were mis-identified by a human. Human memory, even in the absence of bias and prejudice, is pretty fallible.

There is a notion of "mixture of experts" in machine learning. It's when you have two or more models that are not, by themselves, sufficiently good to make a robust prediction, but that make different kinds of mistakes, and you use the consensus estimate. The resulting estimate will be better than any model in isolation. The same should be done here - AI should be merely a signal, it is not a replacement for detective work, and what's described in the article is just bad policing. AI has very little to do with that.
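
A minimal sketch of that consensus idea (hypothetical scores, not any real vendor's API; a full mixture of experts would also learn how to weight and gate the models):

    # Combine independent, imperfect signals instead of acting on one alone.
    def consensus_score(scores):
        return sum(scores) / len(scores)

    face_match_score = 0.75     # facial-recognition similarity (hypothetical)
    other_signal_score = 0.10   # e.g. a separate, independent signal (hypothetical)
    combined = consensus_score([face_match_score, other_signal_score])
    print(combined, combined >= 0.99)  # 0.425 False -> nowhere near enough to act on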


Can you add any details about how an automated system was used to fire you? I'm not familiar with systems like that.

We had a whole discussion about it here a couple years ago: https://news.ycombinator.com/item?id=17350645

Thanks, that's insane. Sometimes I look at the HR group where I work and I'm astonished at how much (relatively easy to automate) work is still done manually or semi-manually. For example, today I had to send an email to a specific individual for what should be a simple form in the modern & best-of-breed HR system we use.

After reading your story, I am very glad that we probably have in aggregate 2 or 3 full-time employees doing things that might be automated away. It's not like that prevents mindless bureaucracy of all sorts, but something like your situation would certainly never happen.


Seconded! Fascinating situation.

The problem is not with the technology, but with how it's used. A medical test is also not 100% error-proof which is why a professional needs to interpret the results, sometimes conducting other tests or disregarding it completely.

A cop stopping someone that has a resemblance to a criminal for questioning seems like a good thing to me, as long as the cop knows that there's a reasonable chance it's the wrong guy.


>But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

None of those selling points logically lead to the conclusion that it is the ultimate decision maker.


This story is really alarming because as described, the police ran a face recognition tool based on a frame of grainy security footage and got a positive hit. Does this tool give any indication of a confidence value? Does it return a list (sorted by confidence) of possible suspects, or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?


I think the NYT article has a little more detail: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well), and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.

Everyone involved will probably point fingers at each other, because the provider, for example, put a large heading on their communication saying "this is not probable cause for an arrest, this is only an investigative lead, etc.", while the detectives will say well, we got a hit from a line-up, and blame the witness, and the witness would probably say well, the detectives showed me a line-up and he seemed like the right guy (or maybe, as is often the case with line-ups, the detectives can exert a huge amount of bias/influence over witnesses).

EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.


That's interesting. In many ways, it's similar to the "traditional" process I went through when reporting a robbery to the NYPD 5+ years ago: they had software where they could search for mugshots of all previously convicted felons living in an x-mile radius of the crime scene, filtered by the physical characteristics I described. Whether or not the actual suspect's face was found by the software, it was ultimately too slow and clunky to paginate through hundreds of results.

Presumably, the facial recognition software would provide an additional filter/sort. But at least in my situation, I could actually see how big the total pool of potential matches was and thus have a sense of the uncertainty about false positives, even if I were completely ignorant about the impact of false negatives (i.e. what if my suspect didn't live within x miles of the scene, or wasn't a known/convicted felon?)

So the caution re: face recognition software is how it may non-transparently add confidence to this already very imperfect filtering process.

(in my case, the suspect was eventually found because he had committed a number of robberies, including being clearly caught on camera, and in an area/pattern that was easy to narrow down where he operated)


> and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.

This is absurdly dangerous. The AI will find people who look like the suspect; that's how the technology works. A lineup as evidence will almost guarantee a bad outcome, because of course the man looks like the suspect!


The worst part is that the employee wasn't a witness to anything. He was making the "ID" from the same video the police had.

I'm also half guessing that the "lineup" was 5 White people and a photo of the victim.

> Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well)

This is the lead provided:

https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...

Note that it says in red and bold emphasis:

THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.


Dear god the input image they used to generate that is TERRIBLE! It could be damn near any black male.

The real negligence here is whoever tuned the software to spit out a result for that quality of image rather than a "not enough data, too many matches, please submit a better image" error.


I'm not even sure that's definitely a black man, rather than just any person with some kind of visor or mask. There does seem to be a face in the noise, but human brains are primed to see face shapes.

The deeper reform that needs to happen here is that every person falsely arrested and/or prosecuted needs to be automatically compensated for their time wasted and other harm suffered. Only then will police departments have some incentive for restraint. Currently we have a perverse reverse lottery where if you're unlucky you just lose a day/month/year of your life. With the state of what we're actually protesting I'm not holding my breath (eg the privileged criminals who committed the first degree murder of Breonna Taylor still have yet to be charged), but it's still worth calling out the smaller injustices that criminal "justice" system inflicts.


>The deeper reform that needs to happen here is that every person falsely arrested and/or prosecuted needs to be automatically compensated for their time wasted and other harm suffered.

I agree here, but doing that may lead to the prosecutors trying extra hard to find something to charge a person with after they are arrested, even if it was something trivial that would often go un-prosecuted.

Getting the details right seems tough, but doable.


>Currently we have a perverse reverse lottery where if you're unlucky you just lose a day/month/year of your life

that's what happens if you're lucky


You're also looking at a scan of a small print out with poor contrast and brightness. There's probably a lot more detail there at full resolution, brightened up to show the face, and then enhanced contrast that the computer is seeing.

This is why you should be scared of this tech. Computer assisted patsy finder. No need to find the right guy when the ai will happily cough up 20 people nearby who kinda sorta look like the perp enough to stuff them into a lineup in front of a confused and highly fallible witness.

Yep, the potential for abuse here is insane.

I'm becoming increasingly frustrated with the difficulty in accessing primary source material. Why don't any of these outlets post the surveillance video and let us decide for ourselves how much of a resemblance there is?

Even if the guy was an exact facial match, that doesn't justify the complete lack of basic police work to establish it was him.

Absolutely agree - and the consequences to a private citizen from the lack of that basic police work can be long-lasting and negative.

Do they have it? Police haven't always been forthcoming in publishing their evidence.

If they don't, how are they describing the quality of the video and the clear lack of resemblance?

I don't know what passage you're describing, but this one is implied to be part of a narrative that is told from the perspective of Mr. Williams, i.e. he's the one who remembers "The photo was blurry, but it was clearly not Mr. Williams"

> The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

> “Is this you?” asked the detective.

> The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

All the preceding grafs are told in the context of "this what Mr. Williams said happened", most explicitly this one:

> “When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection.

According to the ACLU complaint, the DPD and prosecutor have refused FOIA requests regarding the case:

https://www.aclu.org/letter/aclu-michigan-complaint-re-use-f...

> Yet DPD has failed entirely to respond to Mr. Williams’ FOIA request. The Wayne County Prosecutor also has not provided documents.


Maybe it's just me, but "we just took his word for it" doesn't strike me as particularly good journalism if that's what happened. If they really wrote these articles without that level of basic corroboration then that's pretty bad.

It's a common technique in journalism to describe and attribute someone's recollection of events in a series of narrative paragraphs. It does not imply "we just took his word for it", though it does imply that the reporter finds his account to be credible enough to be given some prominent space.

This arrest happened 6 months ago. Who else besides the suspect and the police do you believe reporters should ask for "basic corroboration" of events that took place inside a police station? Or do you think this story shouldn't be reported on at all until the police agree to give additional info?


It should at least be very clear at the paragraph level what is established fact and what is speculation/opinion.

Well, it was “according to someone familiar with the matter”

>> I don't know what passage you're describing,

The 4th sentence says: "Detectives zoomed in on the grainy footage..."


Because they're not in the business of providing information, transparency or journalism.

They are in the business of exposing you to as many paid ads as possible. And they believe providing outgoing links reduces their ability to do that.


>They are in the business of exposing you to as many paid ads as possible.

NPR is a non-profit that is mostly funded by donations. They only have minimal paid ads on their website to pay for running costs - they could easily optimize the news pages to increase ad revenue but they don't because it would get in the way of their goals.


I can see why you'd only get 6 guys together for a physical "6 pack" line-up.

But for a photo lineup I can't imagine why you wouldn't have at least 25 photos to pick from.


Excellent point. In fact, the entire process of showing the witness the photos should be recorded, and double-blind, i.e. the officer showing the photos should not know anything about the lineup.

> the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man

This is not correct. The "6-pack" was shown to a security firm's employee, who had viewed the store camera's tape.

"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him." [1]

[1] ibid.


Just a tip in case it happens to anyone - Never, ever agree to be in a lineup.

It wasn't just that the employee picked the man out of 6 pack; the employee they interviewed wasn't even a witness to the crime in the first place.

>into a "6 pack" photo line-up

How did the people in the 6 pack photo line-up match up against the facial recognition? Were they likely matches?


No clue about the likelihood of police using similar facial recognition matches for the rest, but normally the alternates need to be around the same height, build, and complexion as the subject. I would think including multiple potential matches would be a huge no-no simply because your alternates need to be people who you know are not a match. If you just grab the 6 most similar faces and ask the victim to choose, what do you do when they pick the third closest match?

Well, you may know some people are not a match because you know where they were; for example, the pictures could be of people who were incarcerated at the time of the crime.

Even worse, the employee who was asked to pick him out of a line up hadn't even witnessed the crime in the first place.

> Does this tool give any indication of a confidence value?

Yes.

> Does it return a list (sorted by confidence) of possible suspects,

Yes.

> ... or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

Yes it does. It also states in large print heading “THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION IT IS AN INVESTIGATIVE LEAD AND IS NOT PROBABLE CAUSE TO ARREST”.

You can see a picture of this in the ACLU article.

The police bungled this badly by setting up a fake photo lineup with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).

However, tools that are ripe for misuse do not get a pass because they include a bold disclaimer. If the tool/process cannot prevent misuse, the tool/process is broken and possibly dangerous.

That said, we have little data on how often the tool results in catching dangerous criminals versus how often it misidentifies innocent people. We have little data on whether those innocent people tend to skew toward a particular demographic.

But I have a fair suspicion that dragnet techniques like this unfortunately can be both effective and also problematic.


I think the software would be potentially less problematic if the victim/witness were given access, and (ostensibly) could see the pool of matches and how much/little the top likely match differed from the less confident matches.

> The police bungled this badly by setting up a fake photo lineup...

FWIW, this process is similar to traditional police lineups. The witness is shown 4-6 people – one who is the actual suspect, and several that vaguely match a description of the suspect. When I was asked to identify a suspect in my robbery, the lineup included an assistant attorney who would later end up prosecuting the case. The police had to go out and find tall, light-skinned men to round out the lineup.

> ... with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).

Yeah, I would hope that this is not standard process. The lineup process is already imperfect and flawed as it is even with a witness who at least saw the crime first-hand.


Interesting and related: a team made a neat "face depixelizer" that takes a pixelated image and uses machine learning to generate a face that should match the pixelated image.

What's hilarious is that it makes faces that look nothing like the original high-resolution images.

https://twitter.com/Chicken3gg/status/1274314622447820801


Interesting... Neat... Hilarious... In light of the submission and the comment you're responding to, these are not the words I would choose.

I think there's genuine cause for concern here, especially if technologies like these are candidates for inclusion in any real law enforcement decision-making.


That should be called a face generator, not a depixelizer.

Basically. The faces look plausible but less useful than the original blurred image.

What's sad is that a tech entrepreneur will definitely add that feature and sell it to law enforcement agencies that believe in CSI magic: https://www.youtube.com/watch?v=Vxq9yj2pVWk

And another entrepreneur can add a feature to generate 10 different faces which match the same pixelation, and sell it to the defence.

A better strategy might be to pixelate a photo of each member of the jury, then de-pixelate it through the same service, and distribute the before and after. Maybe include the judge and prosecutor.

Doubt that many people can afford to hire an expert witness, or hire someone to develop bespoke software for their trial.

Ironically, if the police had used and followed the face depixelizer then we may not have had the false arrest of a black man - not because of accuracy but because it doesn't produce many black faces

I wonder if this is trained on the same, or similar, datasets.

One of the underlying models, PULSE, was trained on CelebAHQ, which is likely why the results are mostly white-looking. StyleGAN, which was trained on the much more diverse (but sparse) FFHQ dataset, does come up with a much more diverse set of faces[1]...but PULSE couldn't get them to converge very closely on the pixelated subjects...so they went with CelebA [2].

[1] https://github.com/NVlabs/stylegan [2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)


People are not good at understanding uncertainty and its implications, even if you put it front and center. I used to work in renewable energy consulting and I was shocked by how aggressively uncertainty estimates are ignored by those whose goals they threaten.

In this case, it's incumbent on the software vendors to ensure that less-than-certain results aren't even shown to the user. American police can't generally be trusted to understand nuance and/or do the right thing.


I blame TV shows like CSI and all the other crap out there that make pixelated images look like something you could "zoom" into or something the computer can still understand even if the eye does not. Because of this, non-tech people do not really understand that pixelated images have LOST information. Add that to the racial situation in the U.S. and the inaccuracy of the tool for black people. Wow, this can lead to some really troublesome results.

I lose hours every day just yelling "enhance" at my computer screen. Hasn't worked yet, but any day now...

Blade Runner

> But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?

Honest question: does race predict legal recourse when decoupled from socioeconomic status, or is this an assumption?


Race and socioeconomic status are deeply intertwined. Or to be more blunt - US society has kept black people poorer. To treat them as independent variables is to ignore the whole history of race in the US.

> To treat them as independent variables is to ignore the whole history of race in the US.

Presumably the coupling of the variables is not binary (dependent or independent) but variable (degrees of coupling). Presumably these variables were more tightly coupled in the past than in the present. Presumably it's useful to understand precisely how coupled these variables are today because it would drive our approach to addressing these disparities. E.g., if the variables are loosely coupled then bias-reducing programs would have a marginal impact on the disparities and the better investment would be social welfare programs (and the inverse is true if the variables are tightly coupled).


>Honest question: does race predict legal recourse when decoupled from socioeconomic status, or is this an assumption?

I think the issue is that regardless of the answer, it isn't decoupled in real world scenarios.

I think the solution isn't dependent upon race either. It is to ensure everyone has access to legal recourse regardless of socioeconomic status. This would have the side effect of benefiting races correlated with lower socioeconomic status more.


> I think the issue is that regardless of the answer, it isn't decoupled in real world scenarios.

Did you think I was asking about non-real-world scenarios? And how do we know that it's coupled (or rather, the degree to which it's coupled) in real world scenarios?

> I think the solution isn't dependent upon race either. It is to ensure everyone have access to legal recourse regardless of socioeconomic status. This would have the side effect of benefiting races correlated with lower socioeconomic status more.

This makes sense to me, although I don't know what this looks like in practice.


Like social democracy

Middle class black people often get harassed by police, and there is a long history of far steeper sentences for convictions for drugs used more by the black population (crack) than that used more by the white population (cocaine).

So unequal treatment based on race has quite literally been a feature of the US justice system, independent of socioeconomic status.


I’m aware, but that doesn’t answer my question about access to legal recourse.

Once you are convicted, and are subject to one of the disproportionate sentences often given to black people, nothing short of a major change to how sentencing law works can provide legal recourse. See: https://www.sentencingproject.org/issues/racial-disparity/

If you survive violence at the hands of law enforcement and are not convicted of a crime, or if you don't and your family wants to hold law enforcement accountable, then the first option is to ask the local public prosecutor to pursue criminal charges against your attackers.

Depending on where you live, that could be a challenge, given the amount of institutional racial bias in the justice system, and how closely prosecutors tend to work with police departments. After all, if prosecutors were going after police brutality cases aggressively, there likely wouldn't be as much of a problem as there is.

If that's fruitless, you would need to seek the help of a civil rights attorney to push your case in the legal system and/or the media. This is where a lot of higher profile cases like this end up - and often only because they were recorded on video.


> The issue of face recognition algorithms performing worse on dark faces is a major problem.

This needs to be coupled with the truth that people (police) without diverse racial exposure are terrible at identifying people outside of their ethnicity. In the photo/text article they show the top of the "Investigative Lead Report" as an image. You mean to say that every cop who saw the two images side by side did not stop and say "hey, these are not the same person!"? They did not, and that's because their own brains could not see the difference.

This is a major reason police forces need to be ethnically diverse. Just that enables those members of the force who never grew up or spent time outside their ethnicity to learn to tell a diverse range of similar but different people outside their ethnicity apart.


It wouldn't make it into the newspapers, so it doesn't matter.

This is a classic example of the false positive rate fallacy.

Let's say that there are a million people, and the police have photos of 100,000 of them. A crime is committed, and they pull the surveillance of it, and match against their database. They have a funky image matching system that has a false positive rate of 1 in 100,000 people, which is way more accurate than I think facial recognition systems are right now, but let's just roll with it. Of course, on average, this system will produce one positive hit per search. So, the police roll up to that person's home and arrest them.

Then, in court, they get to argue that their system has a 1 in 100,000 false positive rate, so there is a chance of 1 in 100,000 that this person is innocent.

Wrong!

There are ten people in the population of 1 million that the software would comfortably produce a positive hit for. They can't all be the culprit. The chance isn't 1 in 100,000 that the person is innocent - it is in fact at least 9 out of 10 that they are innocent. This person just happens to be the one person out of the ten that would match that had the bad luck to be stored in the police database. Nothing more.
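
Worked through in code with the numbers above (a sketch assuming the culprit is equally likely to be anyone in the city and that a true match is never missed):

    population = 1_000_000
    database_size = 100_000
    false_positive_rate = 1 / 100_000

    p_culprit_in_database = database_size / population          # 0.1
    expected_false_hits = database_size * false_positive_rate   # ~1.0 per search
    expected_true_hits = p_culprit_in_database                  # culprit matches only if present
    p_hit_is_culprit = expected_true_hits / (expected_true_hits + expected_false_hits)
    print(p_hit_is_culprit)  # ~0.09: roughly 9 in 10 hits point at an innocent person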


See also: Privileging the hypothesis.

If I'm searching for a murderer in a town of 1000, it takes about 10 independent bits of evidence to get the right one. And when I charge someone, I must already have the vast majority of that evidence. To say "oh well we don't know that it wasn't Mr. or Mrs. Doe, let's bring them in" is itself a breach of the Does' rights. I'm ignoring 9 of the 10 bits of evidence!

Using a low-accuracy facial recognition system and a low-accountability lineup procedure to elevate some random man who did nothing wrong from presumed-innocent to 1-in-6 to prime suspect, without having the necessary amount of evidence, is committing the exact same error and is nearly as egregious as pulling a random civilian out of a hat and charging them.
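
The "10 bits" figure is just the information needed to single out one person among a thousand:

    import math
    print(math.log2(1000))  # ~9.97, i.e. about ten independent yes/no pieces of evidence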


There's a good book called "The Drunkard's Walk" that describes a woman who was jailed after having 2 children die from SIDS. They argued that the odds of this happening are 1 in a million (or something like that), so the woman was probably a baby killer. The prosecution had statisticians argue this. The woman was found guilty.

She later won on appeal in part because the defense showed that the testimony and argument of the original statisticians were wrong.

This stuff is so easy to get wrong. A little knowledge of statistics can be dangerous.


And even if the original stats were right. A 1 in a million event happens to about 100 people per day in the US.

Sure.. But the case being discussed has a maximum frequency of 1 in a million every 18 (2 terms of childbirth) months, further reduced by needing to be a woman of reproductive age, fertile, etc etc.

This case of "one in a million" does not happen frequently.


> A 1 in a million event happens to about 100 people per day in the US.

This is a meaningless statement, you could choose literally any number for this statement, because you are missing the denominator.


In the legal world it apparently goes by the moniker "Prosecutor's Fallacy":

https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy


See also: the paper "Why Most Published Research Findings Are False". Previously on HN: https://news.ycombinator.com/item?id=1825007

Definitely they should have everyone's 3d image in the system. DNA too.

He wasn't arrested until the shop owner had also "identified" him. The cops used a single frame of grainy video to pull his driver's license photo, and then put that photo in a lineup and showed the store clerk.

The store clerk (who hadn't witnessed the crime and was going off the same frame of video fed into the facial recognition software) said the driver's license photo was a match.

There are several problems with the conduct of the police in this story but IMHO the use of facial recognition is not the most egregious.


The story is the same one that all anti-surveillance, anti-police-militarization, pro-privacy, and anti-authoritarian people foretell. Good technology will be used to enable, amplify, and justify civil rights abuses by authority figures, from your local beat cop, to a faceless corporation, a milquetoast public servant, or the president of the United States.

Our institutions and systems (and maybe humans in general) are not robust enough to cleanly handle these powers, and we are making the same mistake over and over and over again.


Correct, and this has been the story with every piece of technology or tool we've ever given to police. We give them body cameras and they're turned off or used to create FPS-style snuff films of gunned down citizens. Give them rubber bullets and they're aimed at protesters eyeballs. Give them tasers and they're used as an excuse to shoot someone when the suspect "resists." Give them flashbangs and they'll throw them into an infant's crib. Give them mace and it's used out of car windows to punish journalists for standing on the sidewalks.

The mistake is to treat any police department as a good-faith participant in the goal of reducing police violence. Any tool you give them will be used to brutalize. The only solution is to give them less.


It is not clear to me that the person who identified him was the shop owner or a clerk. From the NYT article: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

"The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police"

"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)"


I think you're correct that the person was not an owner or clerk. IMHO the salient point is that the person was not any sort of eyewitness but merely comparing the same grainy photo as the algorithm.

More importantly, the person wasn't an eyewitness, and the 6-pack photo array was window dressing to make the outside technician appear to be an eyewitness.

Yes, this is a story of police misconduct. The regulation of facial recognition that is required is regulation against police/authority stupidity. The FR system aids in throwing away misses, leaving investigative leads. But if a criminal is not in the FR database to begin with, any results of the FR are wastes of time.

> "I picked it up and held it to my face and told him, 'I hope you don't think all Black people look alike,' " Williams said.

I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.

So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.

This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.

Add to that the fallibility of human memory, and I can't believe we still even use line ups. Are there any studies about how often line ups identify the wrong person?

https://en.wikipedia.org/wiki/Cross-race_effect


I lived in South Africa for a while and heard many times, with various degrees of irony, "you white people all look the same" from black South Africans. So yeah, it's definitely a cross-racial recognition problem, and it's probably also a problem with distinguishing between members of visible minorities using traits beyond the most noticeable othering characteristic.

There is just so much wrong with this story. For starters:

The shoplifting incident occurred in October 2018, but it wasn't until March 2019 that the police uploaded the security camera images to the state image-recognition system, and even then the police waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months previously. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.

Also, after 15 months, there is virtually no chance that any eye-witness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.

This WUSF article [1] includes a photo of the actual "Investigative Lead Report" and the original image is far too dark for anyone (human or algorithm) to recognise the person. It's possible that the original is better quality and better detail can be discerned by applying image-processing filters – but it still looks like a very noisy source.

That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.

The New York Times article [2] states that this facial recognition technology that the Michigan tax-payer has paid millions of dollars for is known to be biased and that the vendors do “not formally measure the systems’ accuracy or bias”.

Finally, the original NPR article states that

> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack

[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...

[2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...


Really seems like most police departments in our country are incompetent, negligent, ineffective and systemically racist.

Many of these cops are earning $200k plus annually! Our law enforcement system is ridiculous and needs an overhaul.


It gets even crazier to think about when you realise that cities like Detroit are overwhelmingly Black. The police there are just not providing good value for the people who live there.

Brookings had a great post about this the other day: https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-add...


It isn't just facial recognition, license plate readers can have the same indefensibly Kafka-esque outcomes where no one is held accountable for verifying computer-generated "evidence". Systems like in the article make it so cheap for the government to make a mistake, since there are few consequences, that they simply accept mistakes as a cost of doing business.

Someone I know received vehicular fines from San Francisco on an almost weekly basis solely from license plate reader hits. The documentary evidence sent with the fines clearly showed her car had been misidentified but no one ever bothered to check. She was forced to fight each and every fine because they come with a presumption of guilt, but as soon as she cleared one they would send her a new one. The experience became extremely upsetting for her, the entire bureaucracy simply didn't care.

It took threats of legal action against the city for them to set a flag that apparently causes violations attributed to her car to be manually reviewed. The city itself claimed the system was only 80-90% accurate, but they didn't believe that to be a problem.


I agree that's bad, and license plate readers come with their own set of problems.

But being biased by the skin color of the driver is (AFAIK) not one of them. Which is exactly the problem with vision systems applied to humans, at least the ones we've seen deployed so far.

If a system discriminates against a specific population, that's very different from (indiscriminately) being unreliable.


That suddenly reminded me why I feel so privileged to not own a car, a distinct contrast from when I was a teenager and felt it was a rite of passage!

I had forgotten about the routine of fighting traffic tickets multiple times a year as a fact of life. Let alone fender benders. I had only been reveling in the lack of a frustrating commute.

Last decade I did get a car for 3 months, and the insurance company was so thrilled that I was "such a good driver" because of my "spotless record" for many years. Little do they know I just don't drive and perhaps now have less experience than others. Though tangentially, their risk matrix actually might be correct: if I can afford to live in dense, desirable areas, then maybe it is less likely that I would be going fast and getting into circumstances that pull larger amounts from their insurance pool.

They probably thought "one of the largest companies in the world probably chauffeurs him down the highway in a bus anyway"


This is a big reason I wish it was easier to not have a car in the US. There's always the potential to get things like parking tickets, and you have to deal with license, insurance, parking permit, etc.

The volume of tickets issued is quite staggering, and each one is a huge annoyance for someone.


Since the NPR is a 3 minute listen without a transcript, here's the ACLU's text/image article: https://www.aclu.org/news/privacy-technology/wrongfully-arre...

And here's a 1st-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...


The mods can change this link to https://www.npr.org/2020/06/24/882683463/the-computer-got-it...

The linked story is audio only and is associated with the Morning Edition broadcast, but the full story appears under our Special Series section.

(I work for NPR)



As soon as I saw it was audio only, I left the site. Why do sites do this? How many people actually stick to the page and listen to that?

> How many people actually stick to the page and listen to that?

I just did. 3 minutes wasn't that bad and I wasn't somewhere where it would be a problem.

> Why do sites do this?

NPR is a radio network. I have seen that often they do transcribe their clips. I am not sure what the process they have for that looks like, but it seems this particular clip didn't get transcribed.

Edit: looks like they do have a transcription mentioned elsewhere in the thread. So seems like some kind of UI fail.


NPR does transcribe (many, most?) its audio stories, but usually there's a delay of a day or so – the published timestamp for this story is 5:06AM (ET) today.

edit: looks like there's a text version of the article. I'm assuming this is a CMS issue: there's an audio story and a "print story", but the former hadn't been linked to the latter: https://news.ycombinator.com/item?id=23628790


They transcribe all their stories. Back before the web was widespread, you could call or write NPR and have them mail a transcript to you.

Well, if anyone were going to do it, you'd think no one would be surprised about it being the "National Public Radio"

Accessibility still matters, or should still matter even if you’re a radio station, but probably especially if you’re a news radio station.

NPR is fantastic when it comes to accessibility by providing transcripts. I linked the page thinking the transcript would come later, as it usually does. But it turns out it was the wrong link. See elsewhere for the correct link.

How many TV shows have audio descriptions of non verbal parts of what you see on screen?

More than zero. It's called closed captioning, isn't it? I've quite often seen closed captioning that puts brief written descriptions of non-verbal content in brackets, though it's not entirely common either.

https://www.automaticsync.com/captionsync/what-qualifies-as-... (see section: "High Quality Captioning")


Closed captioning is for people who can’t hear.

I am not aware of many TV shows that offer audio commentary for the visually impaired.

Here is an example of one that does.

https://www.npr.org/2015/04/18/400590705/after-fan-pressure-...


Sorry, I thought that since we were originally talking about transcriptions of radio news broadcasts and accessibility for the hard of hearing that closed-captioning would be appropriate and relevant. But your point is well met.

Most people are going to hear the story on the radio or in a podcast app / RSS feed. It’s useful to have the story indexed on a shareable web link where it can be played on different platforms without any setup. If I wanted to share a podcast episode with friends in a group chat, a link like this would be a good way to do it. Since this is more of a long-form text discussion forum I’d probably look for a text format before posting here.

Why do radio sites post audio?

NPR's text-only article served to me:

https://text.npr.org/s.php?sId=882683463


From ACLU article:

Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation.

I fear this is going to be the norm among police investigations.


> Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team, and board members represented the population. But even enough representation that there is a countering voice would be better than none.

It would have just been: "this edge case is not an edge case at all, axe it."

Accurately addressing a market is the point of the corporation more than an illusion of meritocracy amongst the employees.


This is so incredibly common, it's embarrassing. I was on an expert panel about "AI and Machine Learning in Healthcare and Life Sciences" back in January, and I made it a point throughout my discussions to keep emphasizing the amount of bias inherent in our current systems, which ends up getting amplified and codified in machine learning systems. Worse yet, it ends up justifying the bias based on the false pretense that the systems built are objective and the data doesn't lie.

Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:

* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6

* Misdiagnosing people of African descent because genomic variants were misclassified as pathogenic, due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092

* The dangers of ML in diagnosing Melanoma exacerbating healthcare disparities for darker skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...

And some other relevant, but not healthcare examples as well:

* When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...

* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...

And here's some good news though:

* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...


I agree 100% about how common it is. The industry also pays lip service to doing something about it. My last job was at a research institution and we had a data ethics czar, who's a very smart (stats PhD) guy and someone I consider a friend. A lot of his job was to go around the org and conferences talking about things like this.

While there's a lot of head nodding, nothing is ever actually addressed in day to day operations. Data scientists barely know what's going on when they throw things through TensorFlow. What matters is the outcome and the confusion matrix at the end.

I say this as someone who works in data and implements AI/ML platforms. Mr. Williams needs to find the biggest ambulance-chasing lawyer and file civil suits against not only the law enforcement agencies involved, but, top down, everyone at DataWorks from the president to the data scientist to the lowly engineer who put this in production.

These people have the power to ruin lives. They need to be made an example of and held accountable for the quality of their work.


Sounds like a license for developing software is inevitable then.

>When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first time offender than for an older white repeat felon.

>When Amazon's AI recruiting tool inadvertently filtered out resumes from women

>When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English"

There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned. This is all based on the unfounded conflation between equality of outcome and equality of opportunity, and the erasure of evidence of genes and culture playing a role in behavior and life outcomes.

This is bad science.


Please post your sources. Your comments about

> the erasure of evidence of genes and culture playing a role in behavior and life outcomes

are concerning.


> There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned.

I'd consider reading the sources I posted in my comment before responding with ill-conceived notions. Literally every single example I posted linked to the peer-reviewed scientific evidence (cited, published literature) indicating the points I summarized.

The only link I posted without peer-reviewed literature was the last one with the positive outcome, and that's the one I commented had suspect analysis.


Let's just consider an example; where do you draw the line in the following list? To avoid sending travelers through unsafe areas:

1. Google's routing algorithm is conditioned on demographics

2. Google's routing algorithm is conditioned on income/wealth

3. Google's routing algorithm is conditioned on crime density

4. Google's routing algorithm cannot condition on anything that would disproportionately route users away from minority neighborhoods

I think the rational choice, to avoid forcing other people to take risks that they may object to, is somewhere between 2 and 3. But the current social zeitgeist seems only to allow for option 4, since an optimally sampled dataset will have very strong correlations between 1-3, to the point that in most parts of the US they would all result in the same routing bias.


This is exactly why I suggested actually reading the sources I posted before responding. The Google example has nothing to do with routing travelers. It was an algorithm designed to detect sentiment in online comments and to auto-delete any comments that were classified as hate-speech. The problem was that it mis-classified entire dialects of English (meaning it completely failed at determining sentiment for certain people), deleting all comments from the people of certain cultures (unfairly, disproportionately censoring a group of people). That's the dictionary definition of bias.

You're completely missing my point. And the purpose of my hypothetical. So let me try it with your example:

>The problem was that it mis-classified entire dialects of English (meaning it completely failed at determining sentiment for certain people), deleting all comments from the people of certain cultures

What happens in the case that a particular culture is more hateful? Do we just disregard any data that indicates socially unacceptable bias?

What, only Nazis are capable of hate speech?


> What happens in the case that a particular culture is more hateful? Do we just disregard any data that indicates socially unacceptable bias?

That's not what was happening. If you read the link, you'll see the problem is that the AI/ML system was mis-classifying non-hateful speech as hateful, just because of the dialect being used.

If it were the case that the culture was more hateful, then it wouldn't have been considered "mis-classification."

> You're completely missing my point.

I'm not missing your point; it's just not a well-reasoned or substantiated point. Here were your points:

> There's simply no indication that these aren't statistically valid priors.

We do have every indication that this wasn't what was happening in literally every single example I posted. You just have to read them.

> And we have mountains of scientific evidence to the contrary, but if dared post anything (cited, published literature) I'd be banned.

You say that, and yet you keep posting your point without any evidence whatsoever. Meanwhile, every single example I posted did cite peer-reviewed, published scientific evidence.

> This is all based on the unfounded conflation between equality of outcome and equality of opportunity, and the erasure of evidence of genes and culture playing a role in behavior and life outcomes.

Again, peer-reviewed published literature disagrees. Reading it explains why the point that it's all unfounded conflation is incorrect.


The discussion about this tech revolves around accuracy and racism, but the real threat is in global, unlimited surveillance. China is installing 200 million facial recognition cameras right now to keep the population under control. It might be the death of human freedom as this technology spreads.

Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...


Reminds me of this-

Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)

https://www.mercurynews.com/2019/08/14/facial-recognition-te...


Another reason that it's absolutely insane that the state demands to know where you sleep at night in a free society. These clowns were able to just show up at his house and kidnap him.

The practice of disclosing one's residence address to the state (for sale to data brokers[1] and accessible by stalkers and the like) when these kinds of abuses are happening is something that needs to stop. There's absolutely no reason that an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport. Why is it on a driver's license?)

[1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...


Perhaps we, as technologists, are going about this the wrong way. Maybe, instead of trying to reduce the false alarm rate to an arbitrarily low number, we should develop CFAR (constant false alarm rate) systems, so that users of the system know that they will get some false alarms and develop procedures for responding appropriately. In that way, we could get the benefit of the technology while also ensuring that the system as a whole (man and machine together) is designed to be robust and has appropriate checks and balances.
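
To make that concrete, here's a minimal sketch (hypothetical numbers and function names, not any vendor's actual API) of how a CFAR-style threshold could be calibrated so operators always know roughly what fraction of hits will be false alarms:

    import numpy as np

    def cfar_threshold(impostor_scores, target_far):
        # Pick the match threshold that yields a fixed false-alarm rate.
        # impostor_scores: similarity scores for known non-matching pairs.
        # target_far: desired false-alarm rate, e.g. 0.01 for 1 in 100.
        # The threshold is the (1 - target_far) quantile of impostor scores,
        # so roughly target_far of non-matches will still score above it.
        return np.quantile(impostor_scores, 1.0 - target_far)

    # Hypothetical calibration data: 100k scored non-matching pairs.
    rng = np.random.default_rng(0)
    impostors = rng.normal(0.3, 0.1, 100_000)
    threshold = cfar_threshold(impostors, target_far=0.01)
    # Any queue of "hits" above this threshold is known to contain ~1% false
    # alarms, so investigators must treat every hit as a lead, never as an ID.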

If you follow the link, you'll see that the computer report had this message right at the top in massive letters:

THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS _NOT_ PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.

I mean, what else could the technologists have done?


Well... it looks like it isn't enough, and additional human-factors-oriented requirements need to be documented, like deliberately engineering the system to produce a certain number of false alarms so the police always have to do additional work to discard them, as opposed to only sometimes.

I wonder if the problem here is just the way traditional policing works, not even the technology.

It is indeed the way traditional policing works. The average police officer thinks facial recognition is the visual equivalent of a Google search and that they can similarly rely on its results; they either have no idea about false-positive biases based on race or, worse, are probably too lazy to dig deeper.

Though the suffering of the victims of such wrong matches is real, one consolation is that more such cases will hopefully bring about the much-needed scepticism about the results, so that some old-fashioned validation/investigation is done.


We, as technologists, probably largely agree with this argument, but those in charge are all laymen: officers, department leaders, politicians, even lawyers can all be ignorant of that nuance, and will probably remain so unless walked through it in detail, and then continuously badgered about it.

I don't think using the facial recognition is necessarily wrong to help identify probable suspects, but arresting someone based on a facial match algorithm is definitely going too far.

Of course really I blame the AI/ML hucksters for part of this mess who have sold us the idea of machines replacing rather than augmenting human decision making.


In a world where some police forces don't use polygraph lie detectors because they are deemed too inaccurate, it baffles me that people would make an arrest based on a facial recognition hit from poor quality data.

But no, it's AI, it's magical, and it must be right.


This seems similar to self-driving cars where people hold the computer to much higher standards than humans. I don't have solid proof, but I suspect that using facial recognition with a reasonable confidence threshold and reasonable source images is more accurate than eyewitness ID. If for no other reason than the threshold for a positive eyewitness ID is laughably bad.

The current best practice is to have a witness pick out the suspect from 6 photos. It should be immediately obvious that right off the bat there's a 17% chance of the witness randomly picking the "right" person. It's a terrible way to do things and it's no surprise that people are wrongly convicted again and again on eyewitness testimony.


You need a much higher standard of accuracy for facial recognition because it is applied indiscriminately to a large population. If it has 99.9% accuracy and you apply it to a population of 10,000 people, you will get on average 10 false positives.
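
As a back-of-the-envelope sketch of that base-rate arithmetic (hypothetical numbers, assuming the one real perpetrator is in the database and is always matched):

    # Screening an entire population with a hypothetical 0.1% false-match rate.
    population = 10_000
    false_match_rate = 0.001        # the "99.9% accuracy" above
    perpetrators = 1                # one actual perpetrator in the database

    expected_false_hits = (population - perpetrators) * false_match_rate
    chance_hit_is_correct = 1 / (1 + expected_false_hits)
    print(expected_false_hits)      # ~10 innocent people flagged
    print(chance_hit_is_correct)    # ~9%: most hits are the wrong person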

I think it is very wrong. Faces are anything but unique. Having a particular face should not result in you being a suspect. Only once actual policing results in you becoming a suspect then this might be a low quality extra signal.

> Having a particular face should not result in you being a suspect. Only once actual policing results in you becoming a suspect then this might be a low quality extra signal.

Having a picture or just a description of the face is one of the most important pieces of information the police have in order to do actual policing. You can be arrested for just broadly matching the description if you happen to be in the vicinity.

Had the guy been convicted of anything just based on that evidence, this would be a scandal. As it is, a suspect is just a suspect and this kind of thing happens all the time, because humans are just as fallible. It's just not news when there's no AI involved.


A face of which there is only a description is not going to work if there aren't any special identifying marks unless you get an artist involved or one of those identikit sets to reconstruct the face. An AI is just going to spit out some generic representation of what it was trained on rather than the specifics of the face of an actual suspect.

Faces generated by AI means should not count as 'probable cause' to go and arrest people. They should count as fantasy.


> Faces generated by AI means should not count as 'probable cause' to go and arrest people.

They don't:

https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...

There was further work involved, there was a witness who identified the man on a photo lineup, and so on. The AI did not identify anyone, it gave a "best effort" match. All the actual mistakes were made by humans.


Those hucksters should be worried about the Supreme Court swatting away their business model, because that's where I see this headed.

I don't think they'll worry about that. Even if that did happen there are foreign markets who would still invest in this.

Yeah, facial recognition can be useful in law enforcement, as long as it's used responsibly. There was a man who shot people at a newspaper where I lived, and when apprehended, he refused to identify himself, and apparently their fingerprint machine wasn't working, so they used facial recognition to identify him.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting


From the wiki article and the linked news articles, the police picked him up at the scene of the crime. He also had smoke grenades (used in the attack) when they found him.

> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...

> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.

https://www.washingtonpost.com/local/public-safety/heavy-pol...

This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.


> as long as it’s used responsibly

At what point can we decide that people in positions of power are not and will not ever be responsible enough to handle this technology?

Surely as a society we shouldn’t continue to naively assume that police are “responsible” like we’ve assumed in the past?


Agreed, I'm not saying we can currently assume they are responsible, but in some hypothetical future where reforms have been made and they can be trusted, I think it would be fine to use. I don't think we should use current bad actors to decide that a technology is completely off limits in the future.

> Surely as a society we shouldn’t continue to naively assume that police are “responsible” like we’ve assumed in the past?

Of course we shouldn't assume it, but we absolutely should require it.

Uncertainty is a core part of policing which can't be removed.


I don't think there is such a thing as responsible use of facial recognition technology by law enforcement.

The technology is certainly not robust enough to be trusted to work correctly at that level yet. Even if it was improved I think there is a huge moral issue with the police having the power to use it indiscriminately on the street.


A few things I just don't have the stomach for as an engineer: writing software that
- impacts someone's health
- impacts someone's finances
- impacts someone's freedoms

Call me weak, but I think about the "what ifs" a bit too much in those cases. What if my bug keeps them from selling their stock and they lose their savings? What if the wrong person is arrested, etc?


Why would anyone call you weak, that's principled and it's the correct attitude. It's the people who don't think about the consequences of the products they help build that are the problem.

Thank you for that really helpful reply. I re-framed my thoughts based on it.

I think that your prints, DNA, and so forth must be, in the interests of fairness, utterly erased from all systems in the case of false arrest. With some kind of enormous, ruinous financial penalty in place for the organizations for non-compliance, as well as automatic jail times for involved personnel. These things need teeth to happen.

Any defence lawyer with more than 3 brain cells would have an absolute field day deconstructing a case brought solely on the basis of a facial recognition match. What happened to the idea that police need to gather a variety of evidence confirming their suspicions before making an arrest? Even a state prosecutor wouldn't authorize a warrant based on such flimsy methods.

True, but the defendant is still financially, and in many cases professionally, ruined.

Getting a lawyer who would advise anything beyond pleading guilty for a reduced sentence is not the default option.

The company that developed this software is DataWorks Plus, according to the article. Name and shame.

And then in some states employers are allowed to ask whether you have ever been arrested (never mind convicted of any crime) on employment applications. Sure, keep putting people down. One day it might catch up with China's social scoring policies.

Is that any different from somebody getting arrested based on a mistaken eyewitness?

The difference is that is a known problem, but with ML, a large fraction of the population thinks it's infallible. Worse, its reported confidence for an individual face may be grossly overstated, since that is based on all the data it was trained on, rather than the particular subset you may be dealing with.
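
As a toy illustration of that point (entirely hypothetical numbers, only meant to show how an aggregate figure can hide a subgroup problem):

    # group: (share of the test set, false-match rate measured within that group)
    groups = {
        "majority": (0.90, 0.001),
        "minority": (0.10, 0.010),  # 10x worse here; federal studies report up to 100x
    }

    aggregate_rate = sum(share * rate for share, rate in groups.values())
    print(aggregate_rate)   # 0.0019, advertised as "better than 99.8% accurate"
    # ...yet a minority individual actually faces a 1-in-100 false-match rate.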

A large fraction of the population and ML marketing both believe that.

I still think it's insane. We have falling crime rates and we still arm ourselves as fast as we can. Humanity could live without face recognition and we wouldn't even suffer any penalties. Nope, people need to sell their evidently shitty ML work.


(1) We still have extreme levels of crime compared to other first world countries even if it is in decline

(2) Your argument strikes me as somewhat similar to "I feel fine, why should I keep taking my medicine?". It's not exactly the same, since the medicine is scientifically proven to cure disease while it's impossible to measure the impact of police on crime. But "things are getting better so we should change what we're doing" is not a particularly sound logical argument.


Crimes rates dropped even faster in countries with more rehabilitative approaches and long before some countries began to upgrade their police forces because of unrelated fears. It was more about giving people a second chance in all that.

Criminologists aren't certain whether surveillance has a positive or negative effect on crime. We have more than 40 studies with mixed results. What is certain is that this kind of surveillance isn't responsible for the falling crime rates described. Most data is from the UK. Currently I don't think countries without surveillance fare worse on crime. Maybe quite the contrary.

"what we're doing" is not equivalent to increasing video surveillance or generally increasing armament in civil spaces. It may be sound logic if you extend the benefit of the doubt but it may also just be a false statement.

Since surveillance is actually constitutionally forbidden in many countries, one could argue that deployment would "increase crime".

In some other sound logic it might just be a self reinforcing private prison industry with economic interests to keep a steady supply of criminals. Would also be completely sound.

But all these discussions are quite dishonest, don't you think? I just don't want your fucking camera in my face.


> The difference is that is a known problem, but with ML, a large fraction of the population thinks it's infallible.

I don't think anybody actually believes that.

I'm pretty sure the exact opposite is true: People expect AI to fail, because they see it fail all the time in their daily use of computers, for example in voice recognition.

> Worse, its reported confidence for an individual face may be grossly overstated, since that is based on all the data it was trained on, rather than the particular subset you may be dealing with.

At the end of the day, this is still human error. A human compared the faces and decided they looked alike enough to go ahead. The whole thing could've happened without AI, it's just that without AI, processing large volumes of data is infeasible.


I think the human error was made possible because of AI: the AI can search millions of records. The police / detective cannot and will only search a very small set, limiting the search by other means.

The probability of finding an innocent person with a similar enough face that the witness can be fooled is much higher with AI.


The suspect said the picture looked nothing like him. When he was shown the picture, he picked up the picture, put it in front of his face, and said "I hope you don't think all black people look alike".

I see this all the time when working with execs. I have to continually remind even very smart people with STEM undergrad and even graduate degrees that a computer vision system cannot magically see things that are invisible to the human eye.

"the computer said so" is way stronger than you would think.


Not even close to the same thing. People aren't very reliable witnesses either, but they are pretty good at identifying people they actually know.

It's also poor practice to search a database using a photo or even DNA to go fishing for a suspect. A closest match will generally be found even if the actual perpetrator isn't in the database. I think on some level the authorities know this, which is why they don't seed the databases with their own photos and DNA.


Yes.

A computer can make a mistake across literally any person who has a publicly available photo (which is almost everyone).

Also, the facial recognition technologies are provably extremely racially biased.


It's like asking "is mass surveillance that different from targeted surveillance"?

Yes, of course it is. Orders of magnitude more people could be negatively and undeservedly affected by this, for no other reason than the fact that it's now cheap enough and easy enough for the authorities to use.

Just to give one example I came up with right now, in the future the police could stop you, take your picture and automatically have it go through its facial recognition database. Kind of like "stop and scan".

Or if the street cameras get powerful enough (and they will), they could take your picture automatically while driving and then stop you.

Think of it like a "TSA system for the roads". A lot more people will be "randomly picked" by these systems from the roads.


Nope. But It's certainly far more accurate than eyewitnesses. And will reduce the frequency of false positives. Compare this to "suspect is a 6' male approx 200lbs with a white shirt and blue jeans" and then having police frantically pick up everyone in the area that meets this description.

This is the story that gets attention, though, despite it representing an improvement in likely every potential metric you can measure.

The response is what is interesting to me. It triggers a 1984 reflex, resulting in people attempting to reject a dramatic enhancement in law enforcement, ostensibly because it is not perfect, or because they believe it a threat to privacy. I think people who are rejecting it should dig deep into their assumptions and reasoning to examine why they are really opposed to technology like this.


> I think people who are rejecting it should dig deep into their assumptions and reasoning to examine why they are really opposed to technology like this.

Because a false positive ruins lives? Is that not sufficient? This man’s arrest record is public and won’t disappear. Many employers won’t hire if you have an arrest record (regardless of conviction). His reputation is also permanently smeared. These records are permanently public and in fact some counties publish weekly arrest records on their websites and in newspapers (not that newspapers matter much anymore)

Someday this technology may be better and work more reliably. We’re not there yet. Right now it’s like the early days of voice recognition from the ‘90s.


This will ruin lives far less frequently than the existing (worse) procedures.

But as the founders of this country wisely understood, human error is preferable to systematic error. That is the principle under which juries, wildly fallible, exist.

Human error is preferable, even if it is more frequent than the alternative, when it comes to justice. The more human the better.

Humans can be held accountable.


What is a unique use case for facial recognition that cannot be abused and has no other alternative solution?

Even the "good" use cases like unlocking your phone have security problems because malicious people can use photos or videos of your face and you can't change your face like you would a breached username and password.


I've got to be honest: I'm getting the picture the police here aren't very competent. I know I know, POSIWID and maybe they're very competently aiming at the current outcome. But don't they just look like a bunch of idiots?

In this particular case, computerized facial recognition is not the problem.

Facial recognition produces potential matches. It's still up to humans to look at footage themselves and use their judgment as to whether it's actually the same person or not, as well as to judge whether other elements fit the suspect or not.

The problem here is 100% on the cop(s) who made that call for themselves, or intentionally ignored obvious differences. (Of course, without us seeing the actual images in question, it's hard to judge.)

There are plenty of dangers with facial recognition (like using it at scale, or to track people without accountability), but this one doesn't seem to be it.


> The problem here is 100% on the cop(s) who made that call for themselves

I disagree. There is plenty of blame on the cops who made that call for themselves, true.

But there doesn't have to be a single party who is at fault. The facial recognition software is badly flawed in this dimension. It's well established that the current technologies are racially biased. So there's at least some fault in the developer of that technology, and the purchasing officer at the police department, and a criminal justice system that allows it to be used that way.

Reducing a complex problem to a single at-fault person produces an analysis that will often let other issues continue to fester. Consider if the FAA always stopped the analysis of air crashes at "the pilot made an error, so we won't take any other corrective actions other than punishing the pilot". Air travel wouldn't be nearly as safe as it is today.

While we should hold these officers responsible for their mistake (abolish QI so that these officers could be sued civilly for the wrongful arrest!), we should also fix the other parts of the system that are obviously broken.


The facial recognition software is badly flawed in this dimension. It's well established that the current technologies are racially biased.

Who decided to use this software for this purpose, despite these bad flaws and well established bias? The buck stops with the cops.


I guess the argument would be that some companies are pushing, actively selling, the technology to PDs. In my experience listening to the sales pitch by our sales team for tech I helped develop, they would not only ignore the caveats attached to the products by engineering but outright sell features that were not done, not even on the roadmap, or simply physically impossible to implement as sold. With that in mind, I can see how the companies selling these solutions are responsible as well.

Sure, and that was one of the parties I listed as being at fault:

> purchasing officer at the police department

However, if the criminal justice system decides that this is an acceptable use of software, then the criminal justice system itself also bears responsibility.

The developer of the software also bears the responsibility for developing, marketing, and selling the software for the police department.

I agree that the PD bears the majority of the culpability here, but I disagree that it bears every ounce of fault that could exist in this scenario.


The cops, the politicians who fund them, the voters who elect the politicians (and possible some of the higher up police ranks), the marketers who sold it to the politician and cops, the management that directed marketing to sell to law enforcement, the developers who let management sell a faulty product, the developers who produced a faulty product.

Plenty of blame to go around.


There's also the company that built the software and marketed it to law enforcement.

Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point, to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.

The problem is that bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference is in four areas, how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results may be.

When Amazon product suggestions start dumping weird products on me because they think viewing pages is the same as showing interest in the product (vs. guffawing at weird product listings that a Twitter personality has found), the damage is limited. It's just a suggestion that I'm free to ignore. In particularly egregious scenarios, I've had to explain why weird NSFW results were showing up on my screen, but thankfully the person I'm married to trusts me.

When a voice dictation system gets the wrong words for what I am saying, fixing the problem is not hard. I can try again, or I can restart with a different modality.

In both of the previous cases, the ease of detection of false positives is simplified by the fact that I know what the end result should be. These technologies are assistive, not generative. We don't use speech recognition technology to determine what we are attempting to say, we use it to speed up getting to a predetermined outcome.

The product suggestion and dictation issues are annoying when encountering them because they are tied to an objective reality: finding products I want to buy, communicating with another person. They're only "annoying" because the mitigation is simple. Alternatively, you can just dispense with the real world entirely. When a NN "dreams" up pictures of dogs melting into a landscape, that is completely disconnected from any real thing. You can't take the hallucinated dog pictures for anything other than generative art. The purpose of the pictures is to look at the weird results and just say, "ah, that was interesting".

But facial recognition and "depixelization" fail on the first three counts, because they are attempts to reconnect the ML-generated results to a thing that exists in the real world, we don't know what the end results should be, and we (as potential users of the system) don't have any means of adjusting the output or escaping to a different system entirely. And when combined with the purpose of law enforcement, it fails on the fourth aspect, in that the modern judicial system in America is singularly optimized for prosecuting people, not determining innocence or guilt, but getting plea bargain deals out of people. Only 10% of criminal cases go to trial. 99% of civil suits end in a settlement rather than a judgement (with 90% of the cases settling before ever going to trial). Even in just this case of the original article, this person and his family have been traumatized, and he has lost at least a full day of productivity, if not much, much more from the associated fallout.

When a company builds and markets a product that harms people, they should be held liable. Due to the very nature of how machine vision and learning techniques work, they'll never be able to address these problems. And the combination of failure in all four categories makes them particularly destructive.


When a company builds and markets a product that harms people, they should be held liable.

They should be, however a company building and marketing a harmful product is a separate issue from cops using specious evidence to arrest a man.

Cops (QI aside), are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said), the tool is known to be bad in the first place and the cops still used it.


> Cops (QI aside), are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said), the tool is known to be bad in the first place and the cops still used it.

But literally no one in this thread is arguing to not hold them responsible.

Everyone agrees that yes, the cops and PD are responsible. It's just that some people are arguing that there are other parties that also bear responsibility.

No one thinks the cops should be able to hide behind the fact that the tool is bad. I think these cops should be fired, sued for a wrongful arrest. I think QI should be abolished so wronged party can go after the house of the officer that made the arrest in a civil court. I think the department should be on the hook for a large settlement payment.

But I also think the criminal justice system should enjoin future departments from using this known bad technology. I think we should also be mad at the technology vendors that created this bad tool.


This is why I wrote "also", not "instead".

You are being downvoted but you are 100% right.

The justification for depriving someone of their liberty lies solely with the arresting officer. They can base that on whatever they want, as long as they can later justify it to a court.

For example, you might have a trusted informant who could tell you who committed a local burglary, just this on its own could be legitimate grounds to make an arrest. The same informant might walk into a police station and tell the same information to someone else, for that officer, it might not be sufficient to justify an arrest.


> Even if this technology does become accurate (at the expense of people like me), I don’t want my daughters’ faces to be part of some government database.

Stop using Amazon Ring and similar doorbell products.


The pandemic has accelerated the use of no-touch surfaces, especially at places like airports, which are now more inclined to use face recognition security kiosks. What's not clear is the vetting process for these (albeit controversial) technologies. What if Google thinks person A is an offender but Amazon thinks otherwise? Can they be used as counter-evidence? What is the gold standard for surveillance?

And now the poor guy has an arrest record. Which wouldn't be a problem in reasonable jurisdictions, where it's nobody's business whether you've been arrested or not, as long as you've not been convicted.

But in the US, I've heard that it can make it harder to get a job.

I believe I'm starting to get a feel for how the school to prison pipeline may work.


NPR article about the same, if you prefer to read instead of listen: https://www.npr.org/2020/06/24/882683463/the-computer-got-it...

I'll be watching this case with great interest


Sadly, there's plenty more where that came from.

Wait until you hear about how garbage and unscientific fingerprint identification is.

Speaking of pseudoscience, didn't most police forces just start phasing out polygraphs in the last decade?

Unlikely unless they were compelled by law or found something else to replace it, and I think it's the latter. Something about machine learning and such.

In a lot of police departments around the world, the photo database used is the driver's license database.

There is clothing available that can confuse facial recognition systems. What would happen if, next time you go for your driver's license photo, you wore a T-shirt designed to confuse facial recognition, for example like this one? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothi...


I would love to see police take a crack at this from the other side of things. Instead of matching against a database, set up a StyleGAN, compute a mask of the original photo or video to isolate just the face, and have the discriminator try to match the face. At the end you can see the generated face with a decent pose and, more importantly, look through the range of generated faces that result in a reasonable match, which gives a somewhat decent idea of how confident you should be about any identification.
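
A minimal sketch of what that latent-space search might look like (hypothetical throughout: the generator below is a placeholder standing in for a real pretrained StyleGAN, and the loss is plain masked pixel error rather than a proper face-identity loss):

    import torch

    # Stand-in for a frozen, pretrained generator mapping a latent vector to an image.
    generator = torch.nn.Sequential(torch.nn.Linear(128, 3 * 64 * 64), torch.nn.Tanh())
    for p in generator.parameters():
        p.requires_grad_(False)

    probe = torch.rand(3 * 64 * 64)                  # flattened probe image (placeholder)
    mask = (torch.rand(3 * 64 * 64) > 0.5).float()   # 1 where the face is visible

    z = torch.randn(128, requires_grad=True)         # latent code to optimise
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        recon = generator(z)
        # Only penalise mismatch on the visible face pixels.
        loss = ((recon - probe) * mask).pow(2).mean()
        loss.backward()
        opt.step()

    # Repeating this from many random starts and comparing the resulting faces
    # gives a rough sense of how constrained, or how ambiguous, the match is.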

While this case is bad enough, mistakes like this are not the biggest concern. Mistakenly arrested people are (hopefully) eventually released, even though they have to go through quite a bit of trouble.

The consequence that is much worse would be mass incarceration of certain groups, because the AI is too good at catching people who actually did something.

This second wave of mass incarceration will lead to even more single parent families and poor households, and will reinforce the current situation.


It's supposed to be a cornerstone of "innocent until proven guilty" legal systems that it is better to have 10 guilty people go free than to deprive a single innocent person of their freedom. It seems like the needle has been moving in the wrong direction on that. I'm not sure if that's just my impression of things, or if it's because there's more awareness of these issues with the internet/social networking...

How does computerized facial recognition compare in terms of racial bias and accuracy to human-brain facial recognition? Police are not exactly perfect in either regard.

Face recognition widens the scope of how many people can be harassed.

While also enabling finger-pointing, e.g. the police can say "We aren't racist or aren't at fault. The system is just faulty." while the engineers behind the facial recognition tech can say that they, "Were just doing their job. The police should've heeded their disclaimers, etc."
