
I'm not defending Rite-Aid at all, but that's not the problem here.

It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person.

Facial recognition software should always be implemented under the assumption that it produces plenty of false positives and that human review is always necessary.

(What you're talking about is 1,280 false positives a day, or less than 1 per store per day.)
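
(Back-of-envelope, with an assumed store count -- roughly 2,000 stores is a guess for illustration, not a figure from the article:)

    # Hypothetical check of the per-store rate; the store count is assumed.
    false_positives_per_day = 1280
    stores = 2000
    print(false_positives_per_day / stores)  # 0.64 -- under 1 per store per day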

Again -- not defending Rite-Aid here. Just saying that there's nothing wrong with the statistics of it.




"It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person."

I would contest that. Matching a person to a low-quality photo is not necessarily something a human can do reliably, and that's especially true if the human in question A: basically doesn't care and is likely to screw it up every way it is possible to screw it up, and B: is influenced by whether or not they're willing to directly confront someone who is nominally a criminal while being paid the de facto local minimum wage.

This plan is all sorts of infeasible and frankly stupid, and the purely human concerns are sufficient for that determination even before we add the tech in.


You are correct. In fact, there is research showing that humans are not particularly good at matching people to photos. This 2014 study found a 15% error rate (with photos from identification documents -- high-quality, standardized photos, not low-quality, random-angle ones!), and the real kicker is that passport officers, who do this as part of their job all day and presumably care about doing it well, surprisingly performed no better than random undergrads:

https://qz.com/251791/passport-officers-are-no-better-at-per...

(Someone is now going to suggest that this is exactly why we should have computers do it instead, bringing the circle back around again... but just no.)


Which is precisely why you combine the two systems.

If humans get it wrong 15% of the time, that means they're reducing false positives by 85%.

That's the whole point of combining facial recognition with human review.

You still have to decide what to do with that final determination, given that it's still not perfect.

But you should never be relying on facial recognition by itself. Even if humans are imperfect, they still improve the accuracy, and can make the final call "nope I'm just not sure, I'm not going to take action".
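
A toy sketch of that claim (numbers invented, and note that it assumes human errors are independent of the machine's -- the very point disputed below):

    # Toy model: the machine flags shoppers, then a human reviews each flag.
    # ASSUMES the reviewer's errors are independent of the machine's.
    machine_false_alerts = 1000  # innocent people flagged (made-up number)
    human_error_rate = 0.15      # chance the reviewer wrongly confirms a flag

    remaining = machine_false_alerts * human_error_rate
    print(remaining)  # 150.0 -> an 85% reduction, but only if independent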


The error reduction from adding a different system only applies if the source of the error does not manifest in both systems: if the errors are uncorrelated, you can get the improvement implied by the simple application of statistics, but not if they are correlated.

As an example, you can combine human review with automatic facial recognition of identical twins and likely not see much (any?) reduction in error rate at all.

Two independent "85% accurate" humans are not 97.75% accurate on identical twins either.
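
A quick simulation of why the correlation matters (all numbers invented; "twin" cases stand in for any input that fools every reviewer):

    import random

    random.seed(0)
    TRIALS = 1_000_000
    TWIN_SHARE = 0.10  # assumed fraction of cases that fool everyone
    # Each reviewer is wrong 15% of the time overall: always wrong on
    # twin cases, wrong ~5.6% of the time on everything else.
    EASY_ERR = (0.15 - TWIN_SHARE) / (1 - TWIN_SHARE)

    def wrong(is_twin):
        return is_twin or random.random() < EASY_ERR

    both_wrong = 0
    for _ in range(TRIALS):
        twin = random.random() < TWIN_SHARE
        if wrong(twin) and wrong(twin):  # two reviewers, same case
            both_wrong += 1

    print(both_wrong / TRIALS)  # ~0.10 -- nowhere near the 0.15**2 = 0.0225
                                # that uncorrelated errors would predict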


The question is how correlated are human and machine errors. I would guess fairly strongly, in which case human review would add little additional accuracy.


> I would guess fairly strongly

No, evidence suggests they are not. We're all very aware of the horror stories of AI misidentifying species. Different AI systems are different, but in general AI seems to make quite different classes of mistakes from humans. Our brains work very differently from current statistical models.

So you can generally assume that human review adds accuracy. And it adds human-accountable judgment, which is just as important.


Ultimately it's an empirical question, and unless someone has published the research, you and I will not be able to know for sure. I think there will be a lot of cases where two people simply look a lot alike, and any observer would have difficulty distinguishing them.


>In fact, there is research showing that humans are not particularly good at matching people to photos.

>(Someone is now going to suggest that this is exactly why we should have computers do it instead, bringing the circle back around again... but just no.)

So where does this leave us for facial recognition? Should we ban both computer and human facial recognition, because they're both flawed? How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?

[1] https://www.youtube.com/watch?v=hGsJ3reXz-k


"So where does this leave us for facial recognition?"

Where it leaves us is that it doesn't work, and it can't work. I see no evidence that there is some big reservoir of facial recognition quality left to be extracted from the same basic data set. There are all sorts of reasons to believe that it is simply impossible to create a system that can be given a small percentage of the population as targets and pick them out of millions of samples correctly.

Of all the disciplines, those trained in computer science should be aware of the concept that problems can be fundamentally hard or unsolvable.

However, I've been careful to phrase what I think may be fundamentally unsolvable as being related to "the same basic data set". Expanding the data set provides other possibilities, and while I'm not ready to declare that adding that data will certainly solve the problem, I'm not ready to declare it fundamentally unsolvable either. Add portable-device tracking, gait analysis, speech analysis, anything else some clever clogs can think of, and probably drop the requirement that de facto minimum wagers be asked to confront nominal criminals (I would assert there is no solution to the mismatched incentives there), and the problem may well be solvable. It would, however, require Rite Aid and anyone else planning to use this sort of thing to radically upgrade their hardware.


>Where it leaves us is that it doesn't work, and it can't work.

You didn't answer the second part of my comment:

"How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?"

>and probably drop the requirement that de facto minimum wagers be asked to confront nominal criminals

Are you saying this on the basis that they're not qualified to make an identification, or that confrontation would put them at risk of violence? If it's the latter, it really doesn't have anything to do with facial recognition. It would still apply even if we replaced facial recognition with a 100% accurate oracle.


I was doing you the favor of ignoring the irrelevant hypothetical. I find "but what about something else entirely that you didn't say?" questions rather annoying. And I believe I was rather clear that the problems I am talking about extend beyond facial recognition, yes.


The hypothetical is very relevant, because your stance implies that we should ban human facial recognition as well. That might count as "something else entirely that you didn't say", but asking about the implications of something you propose is fair game. You can't write off follow-up questions with "well, I didn't say anything about that, and I find such questions rather annoying, so I'm not going to address them at all".


I would say it means that whatever procedures we build for taking pictures of "known criminals" and matching them to someone in your store need to be designed, implemented, and carried out by people who are aware at all stages that there is a good possibility they have the wrong person -- how would you want, say, your grandma to be treated if someone wrongly identified her from a criminal's picture but wasn't sure? Treat that person that way.

This is hard; we generally do the opposite, especially in racialized ways in the USA.

AI systems are often promoted as some kind of solution to this, one that somehow avoids human bias/mistakes. I think your comments even revealed that kind of thinking. I don't think they should be thought of that way.


Computers already do this for passport photos at self-check kiosks in immigration. The thinking is that the computers are better at it anyway, so no fidelity is lost versus a manned checkpoint. A false negative is easy enough to deal with, and a false positive could have happened at the manned checkpoint as well.

Banned lists are simply impossible to implement in general. Instead, police should be more active in shoplifting and organized-theft cases. Someone shoplifts? It's at least a felony: get them in the system and apply penalties to repeat offenders. Having stores sort this out themselves is just crazy.


Why is the photo in your imaginary scenario low quality?

Why do you think the plan is "infeasible and stupid"? You don't think the technology will catch up?


"These images, which were often poor quality, were captured from CCTV or employees’ mobile phone cameras." - the article

"You don't think the technology will catch up?"

First, technology cannot transcend GIGO (garbage in, garbage out). GIGO is fundamental.

Secondly:

"When a customer entered a store who supposedly matched an existing image on its database, employees would receive an automatic alert instructing them to take action — and the majority of the time this instruction was to “approach and identify,” meaning verifying the customer’s identity and asking them to leave." - the article

Face matching is not merely a hard problem for computers. Face matching is a hard problem, period. We humans seem to have dedicated neural hardware for it, and we are not generally a "dedicated hardware" sort of species when it comes to that kind of task -- which suggests just how hard it is. GIGO is fundamental for humans too.


>Face matching is not merely a hard problem for computers. Face matching is a hard problem, period.

If that were true, then federal law enforcement agencies wouldn't use social media as a database for facial recognition of people suspected of crimes.


As I stated in my other post, when you add more information into the scan, the situation changes. Social media adds social-network information, which carries a lot of additional information.

The task of picking a face out of hundreds, combined with the other cues social media can add, is radically different from the task of picking a face out of the entire human population. It is the latter that is infeasible.


> .. then they themselves can judge if they feel confident the shopper is actually the same person

Yes, but the "approach and identify" of false positives (innocent people) was the actual embarrassment and harassment the system got banned for; the human review (asking for ID, etc.) was itself the issue.


But it seems like they went straight to approaching and asking for ID.

What about the step where the system shows a human the photo from the original footage alongside the current footage, and they get to say, "Nope, I'm not confident"?

Again, not defending Rite-Aid here, just pointing out that facial recognition needs a human verification layer before taking any action at all. The fact they weren't doing that is just one part of why their actions were wrong.


What incentive is there to say "I'm not confident"? If someone is particularly bad at comparing faces in pictures and lets multiple shoplifters through, they're likely going to lose their job.


Wouldn’t it depend on how many accurate positives there were? If it’s one false positive against nine real positives, great. If it’s one false positive a day for a month before a real match occurs, nobody will pay any attention at all.
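
To put rough numbers on that (both scenarios hypothetical):

    # Precision = real matches / all alerts; it determines whether staff
    # keep paying attention to the system at all.
    def precision(true_alerts, false_alerts):
        return true_alerts / (true_alerts + false_alerts)

    print(precision(9, 1))   # 0.90 -- staff learn to trust the alerts
    print(precision(1, 30))  # ~0.03 -- one real match per thirty false ones;
                             # alerts get ignored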


Drugstores like Rite-Aid often encounter multiple shoplifters a day.

Assuming most of them are repeat offenders, there are going to be more true positives than false positives.

(Again, not defending Rite-Aid here, just pointing out that the statistics aren't the issue.)


> It's pretty easy for the system to show the source photo to an employee, and then they themselves can judge if they feel confident the shopper is actually the same person.

Yeah, that's not going to happen in any meaningful sense with ordinary employees. Asking them to make a judgment call like that will just result in employees erring on the side of "it could be them, so I confirm" rather than "I'm not sure it's them, so I don't confirm".


When “I confirm” isn’t just a button but a face-to-face accusation I’d expect people would choose to err in the other direction.


I'd expect employees to act in a way they perceive would be least harmful to themselves and their job. If they incorrectly say someone wasn't the person matched, and that person then proceeds to steal from the store, the employee may (probably correctly) perceive that they'll suffer consequences for the error. The safest direction for them to err in would be "yeah, it's probably them".


> Just saying that there's nothing wrong with the statistics of it.

I think an understanding of basic statistics still eludes a lot of people. Teaching it should be prioritized as much as teaching algebra. There are a lot of interesting real-life examples of how we get confused or tricked by statistics -- enough to hold the attention of even the average high-schooler in class.


Just a reminder that the "1 per store per day" is an innocent human being who probably would have been put through hell on earth by what you considered an acceptable rate of error.


You're missing the parts where I said twice that I wasn't defending the practice.


I've found that adding that disclaimer is entirely useless. Like holding beef jerky in front of a dog and saying "I'm not offering this to you, just holding it here".


You really couldn't have picked a better example to illustrate why the OP's argument was so ignorant. Pretending that reality doesn't exist doesn't make it stop existing.

No one is disagreeing that one per store per day is a small number. We're disagreeing with the OP's statement that there's nothing wrong with one false positive accusation of shoplifting per store per day, given the obvious consequences of such an accusation in the context of the US system of policing and the wholesale shifting of blame to the machine. In that context, essentially any false positive rate is too high.


So it's useless to say that A is wrong, but not because of B as claimed -- A is wrong for other reasons?

That sure does limit discussion, and human reason generally. I mean, what do you suggest instead?


I'm just saying I've tried to proffer a position while disclaiming any support for it, and maybe some people listen, but there will always be someone who is too tempted to attack the position instead of treating it as a specimen.


What is "hell on earth" to you?

Cops have probable cause and search you. Find no evidence. "Oh boy sorry for the mix up sir". Hell on earth!


Ah yes, those American police who are so well known for their thoughtful and calm approach to suspected shoplifters in low-income neighborhoods.


Do you have an alternative for handling lawbreakers aside from just letting them do it?


Your insistence on framing people falsely accused of shoplifting as lawbreakers is telling.


Suspected lawbreakers, then? Fucking semantics, people. Also telling.


This entire thread has been about false positives. Innocent people being falsely accused of shoplifting. The difference between committing a crime and not committing a crime is not simply semantics.


Police handle both. The "law breaking" aspect isn't proven until later. So the argument is moot anyway really.


And we return to hell on earth, where an innocent person is put through the grinder of the US justice system.


"Oh boy sorry for the mix up sir"

I've been racially profiled before, and never once has a cop apologized "for the mix up" after detaining me (rudely, usually cursing, fishing for a way to bust me).

"hell on earth" is living in a dystopian society where non-white citizens are detained and held for no reason and assumed to be guilty, because of a malfunctioning "AI" system.


Spotted the person who has never been detained/arrested and then searched in public before.

Trust me, man, it's especially degrading, not to mention time-consuming.


And in the US, just being arrested -- even if the arrest was a mistake and no charges are filed -- has serious adverse consequences for the arrested person.


>human review is always necessary

Then why have AI?



