Hacker News

There have been numerous cases in which corpora of people's faces were insufficiently stocked with a particular racial group. In such cases, some people's faces may not register as faces at all, shutting those people out of whatever system the facial recognition is supposed to support.

For example, with facial recognition at a high-priced building, non-white people suddenly can't get in because the scanner decides they did not show a real face. Sorry, it's a bug, not a feature.

The same problem would of course apply to anyone with facial deformities etc.

The problem essentially takes two forms: automatic exclusion from a group by facial feature, or automatic inclusion in a group by facial feature.

Think of the automatic inclusion as advanced phrenology or some other sort of woo. In this scenario, company X sells the idea that it can recognize 'criminal types', based on whatever research it can pull out suggesting that criminals often share an appearance. It then fills its corpus with criminals, which yields a nice self-selecting system: find the black or hispanic person and deny them the job, the loan, etc., because their predicted likelihood of crime exceeds whatever threshold was set.
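The feedback loop described above can be sketched in a few lines. Everything here is invented for illustration (group names, counts, and the 0.5 cutoff are all hypothetical assumptions, not real data or a real model):

```python
# Toy sketch of the self-selecting corpus described above.
# If the "criminal" corpus over-represents one group, even a naive
# frequency-based score flags that group more often, and each denial
# can feed another record back into the corpus, reinforcing the bias.

corpus = {"group_a": 900, "group_b": 100}  # biased training counts (invented)

def risk_score(group):
    # Fraction of the corpus drawn from this group.
    return corpus[group] / sum(corpus.values())

THRESHOLD = 0.5  # arbitrary cutoff for "likelihood of crime"

def decide(group):
    return "deny" if risk_score(group) > THRESHOLD else "approve"

print(decide("group_a"))  # deny    (score 0.9 > 0.5)
print(decide("group_b"))  # approve (score 0.1 <= 0.5)

# Each denial generates a new "record", skewing the corpus further:
corpus["group_a"] += 1
```

The point of the sketch is that the bias lives in the corpus, not in any explicit rule: no line of code mentions race, yet the outcome sorts by group.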

The reason this would happen is the same reason banks used to deny loans to people in certain zip codes: by doing so, they could racially profile and exclude while pretending not to.

>”corpora of peoples faces have been insufficiently stocked with a particular racial group, in such cases people's faces might not register as faces..”

I’m not sure this is a good argument against the technology, because there is a ready answer to the objection: they’ll improve the system and ensure every face type is recognized comparably well. The technology should be rejected on more basic principles.

It's not as simple as "just make sure it works on everyone". Even if it were physically possible to capture the faces of all 7 billion people on Earth, you'd run into false positives and false negatives, overfitting, and so on. Furthermore, it would essentially require every person to be scanned periodically from birth until death so the algorithm could keep recognizing faces as they change.
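The scale problem above can be made concrete with back-of-the-envelope arithmetic (the error rate below is an illustrative assumption, not a measured figure for any real system):

```python
# Even a tiny false-positive rate becomes a huge absolute number
# at population scale.
population = 7_000_000_000    # roughly everyone on Earth
false_positive_rate = 0.0001  # assumed 0.01% FPR (hypothetical)

false_positives = int(population * false_positive_rate)
print(false_positives)  # 700000 people misidentified per full sweep
```

So even a system that is "99.99% accurate" on this axis mislabels hundreds of thousands of people when applied to everyone.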

This is a different objection from the original one (overall accuracy vs. bias against a specific group).

With a pervasive system, unless people are hermits, they’ll get scanned periodically, and with other bits of information the changed face can be correlated with the same person (I go into a salon looking one way, come out different, go into a boxing match, come out different, but I’m following my routine and go to the same subway stop and convenience store and use the same payment method, etc.)

And then you just have a different implementation of China's social credit surveillance system...

There are of course examples of technologies that do not work for their stated purpose, but in general, I think technologies do work pretty much as stated when people use them correctly.

Generally, the argument against a technology is not that it fails to work for its purpose, but that the way humans will use it is problematic.

The problem with an "it's racist" strawman is that all the people selling this stuff have to do is frame it so NOT using it is racist. I don't think it would be hard to argue that a face scan is going to have less implicit bias than a poorly trained security guard or bored cop working overtime. So maybe it's racist to not use facial recognition, since that leaves room for human prejudice.

How does software not leave room for human prejudice? Who decides what data is used to feed the thing?

Who decides what face unlocks your iPhone? When you buy a million dollar condo, you decide whose faces you want to unlock it. That way it's not up to some security guard to decide whether to detain your kid in the lobby because he's wearing baggy clothes and didn't style his hair in a way that the guard likes. His face either scans or it doesn't.

My good buddy HAL 9000.

Many mortgage backing companies still routinely deny loans to black applicants because their models associate black homeowners with lower surrounding property values. When a mortgage backer also backs the loans on those same surrounding properties, the risk of losing money on its existing loans increases, leading it to deny the new loan and perpetuating racist policy through a so-called "objective", model-driven business decision.

Are there organizations that provide validation standards for this type of software?

I'm thinking something analogous to ASME for mechanical systems. Or is the technology still very much in the wild west phase?

