Hacker News new | past | comments | ask | show | jobs | submit login

>In fact there is research that humans are not particularly good at matching people to photos...

>(Someone is now going to suggest, right, that's why we should have computers do it instead, bringing the circle back around again... but just no).

So where does this leave us for facial recognition? Should we ban both computer and human facial recognition, because they're both flawed? How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?

[1] https://www.youtube.com/watch?v=hGsJ3reXz-k




"So where does this leave us for facial recognition?"

Where it leaves us is that it doesn't work, and it can't work. I see no evidence that there is some big reservoir of facial recognition quality left to be extracted from the same basic data set. There are all sorts of reasons to believe it is simply impossible to build a system that, given a small percentage of the population as targets, can correctly pick them out from millions of samples.
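The base-rate problem behind this claim can be made concrete. A quick sketch (the numbers below are illustrative assumptions, not measured performance of any real system): when targets are rare, even a seemingly accurate matcher produces mostly false alarms.

```python
# Base-rate illustration: precision of a watchlist matcher via Bayes' rule.
# All numbers here are hypothetical assumptions for the sake of the argument.

def precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Fraction of flagged people who are actually on the watchlist."""
    true_pos = sensitivity * prevalence            # correctly flagged targets
    false_pos = (1.0 - specificity) * (1.0 - prevalence)  # wrongly flagged others
    return true_pos / (true_pos + false_pos)

# Suppose 1 in 10,000 visitors is on the watchlist, and the system catches
# 99% of them while wrongly flagging only 1% of everyone else.
p = precision(sensitivity=0.99, specificity=0.99, prevalence=1e-4)
print(f"{p:.1%} of matches are correct")  # prints "1.0% of matches are correct"
```

In other words, under these assumptions roughly 99 out of every 100 people the system flags are innocent, which is the sense in which "a small percentage of the population as targets" makes the problem fundamentally hard.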

Of all the disciplines, those trained in computer science should be aware of the concept that problems can be fundamentally hard or unsolvable.

However, I've been careful to phrase what I think may be fundamentally unsolvable as being related to "the same basic data set". Expanding the data set opens up other possibilities; while I'm not ready to declare that adding that data will certainly solve the problem, I'm not ready to declare it fundamentally unsolvable either. Add portable device tracking, gait analysis, speech analysis, anything else some clever clogs can think of, and probably drop the requirement that de facto minimum-wage workers be asked to confront nominal criminals (I would assert there is no solution to the mismatched incentives there), and the problem may well be solvable. It would, however, require Rite Aid and anyone else planning to use this sort of thing to radically upgrade their hardware.


>Where it leaves us is that it doesn't work, and it can't work.

You didn't answer the second part of my comment:

"How would that be enforced? If a store employee thinks they recognize someone from a few minutes ago, are they supposed to ignore that fact, and pretend that they're different people, on the off chance that the guy might be someone else[1]?"

>and probably drop the requirement that de facto minimum wagers be asked to confront nominal criminals

Are you saying this on the basis that they're not qualified to make an identification, or that confrontation would put them at risk of violence? If it's the latter, it really doesn't have anything to do with facial recognition. It would still apply even if we replaced facial recognition with a 100% accurate oracle.


I was doing you the favor of ignoring the irrelevant hypothetical. I find "but what if something else entirely that you didn't say?" questions rather annoying. And I believe I was rather clear that the problems I am talking about extend beyond facial recognition, yes.


The hypothetical is very relevant, because your stance implies that we should ban human facial recognition as well. That might count as "something else entirely that you didn't say", but asking about the implications of something you propose is fair game. You can't write off follow-up questions with "well, I didn't say anything about that, and I find those questions rather annoying, so I'm not going to address them at all".


I would say it means that whatever procedures we build for taking pictures of "known criminals" and applying recognition to someone in a store, they need to be designed, implemented, and carried out by people who are aware at all stages that there is a good possibility they have the wrong person. How would you want, say, your grandma to be treated if someone wrongly identified her from a criminal's picture but wasn't sure? Treat that person that way.

This is hard; we generally do the opposite, especially in racialized ways in the USA.

AI systems are often promoted as a solution to this, one that somehow avoids human bias and mistakes. I think your comments even revealed that kind of thinking. I don't think they should be thought of that way.



