Defund Facial Recognition Before It's Too Late (theatlantic.com)
23 points by sneeze-slayer on July 6, 2020 | 32 comments



I am not sure what "defunding" would accomplish at this point. At this point, the research has been done, and the products are available for astonishingly low prices. We don't stop oil spills by defunding offshore oil wells that have already been dug, and attempting to do so is likely to result in more oil spills as the money for maintenance is cut. Similarly, I'd expect that if budget for facial recognition is slashed, the parts that will be cut are oversight and training, and so now instead of just dealing with questionable technology, you're dealing with questionable technology used by people who have no idea what they're doing.

Also, as long as public spaces are under constant video surveillance, stopping facial recognition now only solves the problem temporarily. I think at a bare minimum, we need standards for when this evidence should be admissible in court (at current tech levels, probably approximately never) and when it is acceptable to use it in searches. The technical ship has sailed, so any fix is going to have to be legislative at this point.


 Amazon,">
> Amazon, Microsoft, and Google have continued efforts to ensure federal regulation that offers a stable and profitable market in which facial-recognition technology is, in fact, used by law enforcement, in direct opposition to the movement the companies claim to support.

If these tech companies were serious about fighting inequality, it would be more effective to start banning racist AI/facial recognition instead of scrubbing technology language of words like "blacklist" and "master/slave".


...I will have to hide my elongated skull now...


> Rooted in discredited pseudoscience and racist eugenics theories that claim to use facial structure and head shape to assess mental capacity and character, automated facial-recognition software uses artificial intelligence, machine learning, and other forms of modern computing to capture the details of people’s faces and compare that information to existing photo databases with the goal of identifying, verifying, categorizing, and locating people.

I'm sorry, what? Does this person think that phrenology / physiognomy, two old pseudosciences that have been discredited for a hundred years or more, are actually at play within ML systems?

I'm totally willing to believe that ML facial recognition systems insufficiently trained on a wide enough set of faces will mistake one person for another. Sure. But to pretend that the system is based on eugenics betrays a critical lack of understanding of what these things actually do, and ascribes agency and racial animus to a computer program. It's pretty clear to me that the author doesn't really know how these things work.

The reason to not want facial recognition in public spaces is the same as the reason to not want mass surveillance: a reasonable expectation of privacy by citizens. Of course, if one also wants no police in these areas at the same time, one should not be surprised if they eventually go from peaceable public squares, to havens for petty crime, to fearful places people avoid.


> I'm sorry, what? Does this person think that phrenology / physiognomy, two old pseudosciences that have been discredited for a hundred years or more, are actually at play within ML systems?

https://callingbullshit.org/case_studies/case_study_criminal...

This shouldn't be a problem, because as you note phrenology is pseudoscience that's been discredited for over a century. And yet.

To the broader point, I'd argue that, in general, any attempt to predict criminality from facial structure or a face picture is phrenological in nature. And people who do know what's at play in ML systems agree with this take: https://twitter.com/ylecun/status/1276147230295166984


> And yet.

Well, what's happening there is not a study of phrenology at all (phrenology posited that specific regions of facial/skull structure were indicators). It's actually a very interesting thing to look at. There was a previous one that was reporting some degree of success in determining whether someone was homosexual via ML.

Here's the thing: if this turns out to have actual predictive power, then it's a subject worthy of scientific study, whether you like the outcomes and conclusions or not. Plenty of other worthy areas of endeavour (e.g. psychometric IQ research) have also revealed uncomfortable truths. If instead these things turn out to not have any legitimate research value (i.e. can't make predictions that can be experimentally verified), then we can stop looking at them, but as long as they continue to maintain a relatively consistent relationship to observable reality, they're as worthy a form of science as any other anthropological research is.

We have a choice: either face this head on, include it in policies, and build our sociology around the truth, or put our heads in the sand to make people feel better. I for one believe that truth is far more important than feelings, and that if we had continually given higher credence to feelings the Enlightenment and most scientific progress we've had would have been far slower if it had happened at all.


> one that was reporting some degree of success in determining whether someone was homosexual via ML

Yes, which was also phrenological in nature.

> Plenty of other worthy areas of endeavour (e.g. psychometric IQ research) have also revealed uncomfortable truths.

I've yet to see any "uncomfortable truth" from psychometric research that wasn't relatively easily explained as culturally tied.

> but as long as they continue to maintain a relatively consistent relationship to observable reality

The point is that neither this study nor the homosexuality study have a relatively consistent relationship to observable reality. Your priors on us being able to predict, independent of social conditioning, some arbitrary social attribute based on someone's face should be very, very low.

And "predicting" some arbitrary social attribute based on social conditioning is just encoding social bias into the model, which is bad.

> and that if we had continually given higher credence to feelings the Enlightenment and most scientific progress we've had would have been far slower if it had happened at all.

You mean like all the scientific progress that came out of phrenological research?


> Your priors on us being able to predict, independent of social conditioning, some arbitrary social attribute based on someone's face should be very, very low.

What does "independent of social conditioning" mean here? Can you give some examples of social attributes that arise independent of social conditioning?


It's tactical nihilism.

Guess what: most of us can make some pretty intelligent guesses about things just based on looking at them. The observation may not be perfect, it may not even be true, but if the goal of the observation is to inform a system for self-preservation, false positives might be totally acceptable if the cost of taking a preservative action is not too high. For example, avoiding a person your subconscious tells you is dangerous - you can ignore it and overrule it, or you can give in and take the safe action. Much of the time, the latter is the better choice, even if you didn't need to be scared, even if there was no danger, because you have no better way of knowing in advance what the situation truly is.

Should we use this kind of heuristic in law enforcement? No, this is not Minority Report or the Asimov world of Multivac; while it may be useful for an aggregate at scale we will not get good enough accuracy to know much about individuals. However, it's entirely possible it will tell us about the behaviours of groups of people, and that could still be useful for policy decisions.


> Guess what: most of us can make some pretty intelligent guesses about things just based on looking at them.

Surely not based on looking at the blurry, low-quality camera images that are the usual focus for mass facial recognition. Train a system on those and it will just focus on the most visible traits, the ones we all know about. And those same traits have problematic correlations with social disadvantage and marginalization that will taint any assessment you might want to make.


> What does "independent of social conditioning" mean here?

So we know, for example, that if you train a model to predict "criminality" based on face here in the US, it will find that race is a strong predictor of criminality.

The first problem with this specific example is that the data is biased: certain communities are overpoliced. We know, for example, that black and white people use marijuana at around the same rate, but that black people are more likely to be arrested for use. So they'll be more likely to be represented in a dataset of "criminals" even if they aren't actually more criminal. So that's one social factor. But let's pretend that we can construct a socially untainted dataset that represents the true underlying crime rate, we correlate it with face images, and the racial disparity still exists. I want to reiterate that we're well off into the world of fantasy here, but for demonstration purposes.
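
To make the over-policing point concrete, here is a minimal sketch (all numbers invented, purely for illustration) of how identical underlying offence rates plus unequal arrest rates produce a skewed "criminality" dataset before any model ever sees a face:

    import random

    random.seed(0)

    POPULATION = 100_000
    OFFENCE_RATE = 0.10                    # identical true rate for both groups
    ARREST_RATE = {"A": 0.10, "B": 0.30}   # group B is over-policed

    records = []                           # (group, labelled_as_criminal)
    for _ in range(POPULATION):
        group = random.choice(["A", "B"])
        offended = random.random() < OFFENCE_RATE
        arrested = offended and random.random() < ARREST_RATE[group]
        records.append((group, arrested))

    for g in ("A", "B"):
        labelled = sum(1 for grp, lab in records if grp == g and lab)
        total = sum(1 for grp, _ in records if grp == g)
        print(f"group {g}: labelled 'criminal' in {labelled / total:.2%} of records")

    # Roughly 1% vs 3%: identical behaviour, but group B looks three times as
    # "criminal" to any model trained on arrest-derived labels.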

There are generally 3 conclusions you can draw from a correlation like this: A directly causes B, B directly causes A, or something else more complex is at play. It's unlikely for facial structure changes to directly cause criminality, and unless you're Pinocchio, criminal behavior isn't going to directly cause changes in your face.

So what more complex thing is at play? Well one answer is genes. It could be that the genes that make someone darker also make them more naturally predisposed to violence. That is, some factor C directly causes both A and B. Or it could be even more complex, for example that socially, people who exhibit black skin are more likely to be placed in conditions that breed criminal behavior[0]. Since economic and social status are heritable, and so is skin color, this seems reasonable to conclude, and there's lots of other evidence that this is the case.

But if that's the case, then looking at a picture of a face doesn't actually have predictive power. At best it just recognizes that your average black person is likely to have been raised in a situation where they were more likely to commit a crime. It doesn't have any predictive power about a black person who wasn't raised in those conditions.

So by independent of social conditioning, what I mean is that such a model isn't useful unless you subscribe to the belief that the genes that cause facial structures are correlated with the genes that cause criminal behavior (which presupposes that those genes exist).

Otherwise you aren't actually looking at even a direct correlation and in fact it's very likely that if you divide your subpopulation up in smart ways, you'll find that there are groups against which you are unfairly biased.

Just because the social signifiers in the study we're looking at aren't as obvious to you or me as skin tone doesn't mean they aren't there, and again, that presumes the data is good, which we know it isn't.

> Can you give some examples of social attributes that arise independent of social conditioning?

I don't know that I fully explained this above, so let's create another fantasy world to explain this more concretely. Let's agree that murder is bad. This is solely a social agreement, but we decide on it based on ethical beliefs.

Imagine that in this fantasy world there are genes that cause one to occasionally enter a bloodlust that forces one to go on a relatively uncontrollable killing rampage. Or for a more direct fantasy example, turn into a werewolf that then goes into a relatively uncontrollable killing rampage. Or be an Orc which is "naturally" evil (this is actually a relatively common fantasy trope, huh).

This genetic marker would imply a genetic predisposition to criminal behavior, despite any social conditioning. Compare this to a relatively normal human child who is trained from a young age that they are "no one", a part of a greater movement that requires assassinating the wrong people, and that this assassination is sometimes necessary for the greater good.

Both are more likely than your average person to commit a crime, but one was conditioned to this socially while one was genetically predisposed. If this cult holds onto family lines, there will be similarities between cult members, but people who escape the cult or who were never a part of it might be unjustly thought to be criminal, solely because they resembled the cult.

To jump back to the original statement, this means that if you set up a confusion matrix that includes "family of cult members" as a category, your model will perform badly and discriminate negatively against them. You can see why this might cause huge issues, like, for example, causing the justice system to chase or harass people who are related to criminals.
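
A rough sketch of that confusion-matrix point (all counts invented, and "cult family" is just the hypothetical subgroup from above): a model that keys on features the cult and their relatives share can look accurate overall while dumping its false positives on innocent relatives:

    # (subgroup, truly_criminal, model_flags_criminal) -- counts are invented
    records = (
        [("general population", False, False)] * 9_500
        + [("general population", False, True)] * 200    # baseline false positives
        + [("general population", True,  True)] * 300
        + [("cult member",        True,  True)] * 90
        + [("cult member",        True,  False)] * 10
        + [("cult family",        False, True)] * 150    # flagged for resemblance alone
        + [("cult family",        False, False)] * 50
    )

    for subgroup in ("general population", "cult member", "cult family"):
        innocents = [flag for s, truth, flag in records if s == subgroup and not truth]
        if innocents:
            rate = sum(innocents) / len(innocents)
            print(f"{subgroup:20s} false positive rate: {rate:.1%}")

    # ~2% for the general population vs 75% for innocent relatives of cult
    # members: the aggregate accuracy hides who actually bears the error.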

Hopefully that explains. It's essentially a correlation vs. causation issue.

[0]: I should note here that this is true whether you subscribe to the belief that "black culture" encourages/celebrates criminality, or the belief that "white supremacist and social structures" push black people into situations where they can't avoid crime.


> There are generally 3 conclusions you can draw from a correlation like this: A directly causes B, B directly causes A, or something else more complex is at play. ... Hopefully that explains. It's essentially a correlation vs. causation issue.

Note that for a predictor, correlation vs. causation does not really matter; it matters for intervention. If you have a feature A that in the population always causes features B and C, and nothing else causes them, then the presence of B is a perfect predictor of C, but an intervention that changes B (and not A) does not affect C.

On the other hand, if feature D causes feature E in 90% of cases, and nothing else causes it, then D is a predictor of E with 10% false positives, but an intervention on D does affect E.

It is true that 'perfect causality' predictors, where the accounted-for factors are the only causes of the predicted feature, have the advantage that they work the same for any subset (or any change) of the population, while predictors that ignore some causal factors (like the predictor from the first example, which ignores the common cause) may have vastly different probabilities for a subset (or change) of the population when the distribution of the ignored factors changes. But in practice most real-world causal networks are super complex and many real-world tests ignore many causal factors. So it is kind of an isolated demand for rigor [1].
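
A tiny simulation of the first example (the structure is assumed, just to illustrate the distinction): A causes both B and C, so observing B predicts C perfectly, but forcing B does nothing to C:

    import random

    random.seed(1)

    def sample(force_b=None):
        a = random.random() < 0.3               # common cause A
        b = a if force_b is None else force_b   # B normally mirrors A
        c = a                                   # C depends only on A
        return b, c

    # Observational data: among cases where B is present, C is always present.
    obs = [sample() for _ in range(100_000)]
    b_cases = [c for b, c in obs if b]
    print("P(C | B observed) =", sum(b_cases) / len(b_cases))           # 1.0

    # Intervention: forcing B on leaves C at its base rate.
    do_b = [sample(force_b=True) for _ in range(100_000)]
    print("P(C | do(B=1))    =", sum(c for _, c in do_b) / len(do_b))   # ~0.3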

[1] https://www.lesswrong.com/posts/fzeoYhKoYPR3tDYFT/beware-iso...


> Note that for a predictor, correlation vs. causation does not really matter; it matters for intervention.

Yes, but you don't build a model without intending some intervention, so this distinction is irrelevant in the context of applied ML, although it is a true statistical fact.

> So it is kind of an isolated demand for rigor [1].

I don't see how. The isolated-demand-for-rigor "fallacy" that Scott Alexander talks about is when you demand rigor selectively and in bad faith. Let me rephrase my concern:

We have lots of evidence that real-world causal networks are super complex and many real-world tests ignore many causal factors. Similarly, we have no a-priori reason to believe that facial structure is correlated with {sexual orientation, innate level of intelligence, innate criminality}. And in fact we have strong reasons to believe that most of the way that those things present is due to cultural influence.

So if someone shows up with a groundbreaking study that shows that they can predict some innate attribute based on facial structure, it's likely that they're actually seeing cultural biases (or cultural correlations) and not innate factors (or correlations with genotypic things). In other words, our priors should be that any model in this space is simply discriminating based on stereotypes, not revealing some innate way to "predict" these attributes.

The model isn't a predictor but a recognizer, and while semantic, that's a very important distinction.


I understand, and generally agree with, the idea that teasing apart genetic and social causes of behavior is difficult and unlikely to yield useful results, and especially unlikely to yield useful results when the people attempting to do it don't know what they're doing and don't have any incentives for getting it right.

I think my true objection is to the idea that we can't make useful inferences, even in principle, based on appearance. I suppose now would be a good time to check whether you actually hold that opinion: I'm assuming, based on the following exchange, that you do.

>> There was a previous [study] that was reporting some degree of success in determining whether someone was homosexual via ML.

> Yes, which was also phrenological in nature.

> ...

> The point is that neither this study nor the homosexuality study have a relatively consistent relationship to observable reality. Your priors on us being able to predict, independent of social conditioning, some arbitrary social attribute based on someone's face should be very, very low.

If I've learned one thing from looking into machine learning and such over the past few years, it's that the computer is not looking at what you think it's looking at. Yes, it's _possible_ that the computer has learned which varieties of facial bone structure correlate with homosexuality. I find it equally plausible that the computer determined that there was a difference in personal grooming habits or facial expressions between groups. Regardless of what the differences were, though, I think it's a mistake to say that you shouldn't expect to be _able_ to predict many social attributes based on someone's face.

If you're just saying that we can't determine the underlying causes of social attributes based on machine learning, I agree, but I'm not sure that's really a useful statement. What you said could also plausibly be interpreted as "trying to predict real-world outcomes based on machine learning will result in bad outcomes"; I would agree with that interpretation too (there is some question of "yes, it will make mistakes, but it might still be better than the current system," but much like with self-driving cars vs. human drivers, I don't think we're particularly near that point yet).

Still, the comparison of that study to phrenology, and your comments above, suggest you were trying to make a stronger statement, and I'm still not sure what that stronger statement is or even if there is one.


> I find it equally plausible that the computer determined that there was a difference in personal grooming habits or facial expressions between groups.

Right, so then the question is if this is valuable.

Consider three people: a closeted, married-to-a-woman gay man who has hidden his sexuality; an openly gay man who looks "stereotypically gay" (this phrase should start triggering warning bells, btw); and a well-groomed heterosexual man who might be described as "stereotypically gay-looking".

Does your model identify men 2 and 3, the stereotypically "gay"-looking men? If so you've just built a model that detects cultural signifiers for homosexuality, which, congrats I guess. This isn't a useful tool, is it? It is a model that encodes stereotypes. But we know the stereotypes already, because we use them as intentional cultural signals. So nothing of value was added, and the model may serve to perpetuate those stereotypes in the future if people wish to change them.

Does your model identify men 1 and 2? Well now you've built something interesting, it's found a deeper relationship than just overt social signifiers, perhaps it's noticing some minute bone structure thing. Is it ethical? Hell no.

In essence the stronger statement is that you are either simply encoding cultural biases into a model, which will serve to perpetuate those biases (and is therefore harmful), or you are building something terrifying and dystopian and harmful.


Ah. Yeah, I'm basically saying "lean towards terrible and dystopian and harmful, not useless". If you tell people it's useless, dystopian, and harmful, and they find out it's actually useful, they may not take seriously the "dystopian and harmful" bits.

It sounded kind of like you thought it wouldn't be possible to get interesting results which tell you something you didn't already know from ML with much better than random chance.

>> I find it equally plausible that the computer determined that there was a difference in personal grooming habits or facial expressions between groups.

> Right, so then the question is if this is valuable.

That's not really my question, but I'd argue that the answer is "it depends on how you're trying to use it". If you're an advertiser of questionable ethics, and trying to promote your local gay bar, it probably is useful to you. Likewise if you're a computer science professor trying to get grant funding. If you mean "useful to society as a whole", probably not.

Which comes down to the difference between "this will accomplish the goals of the people who want to use it" and "letting those people use this tool is a good idea". The danger is that if you try to convince people not to use the tool because it doesn't work, but the tool does in fact work, trying to convince them later because the results are harmful is less likely to work.


> It sounded kind of like you thought it wouldn't be possible to get interesting results which tell you something you didn't already know from ML with much better than random chance.

In general I do think this is true, but it's not verifiable. Like I said, I think the prior that underlying facial structure is strongly correlated with sexual orientation should be very low. It's not impossible, just unlikely, so when you get "results", the most likely explanation is not that you've found something profound, but that you're encoding social biases. And so from the perspective of a researcher, you haven't actually accomplished your goal (of discovering something profound); you've instead committed a relatively elementary error that legitimizes certain forms of stereotyping. And again I note that my priors suggest that this is the most likely outcome.
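
To put rough numbers on the prior argument (all of them made up): even a result that is, say, ten times likelier under "real facial signal" than under "encoded social bias" barely moves a low prior:

    prior_real_signal = 0.02   # prior that facial structure genuinely carries the trait
    bayes_factor = 10          # how strongly the result favours the real-signal hypothesis

    posterior = (prior_real_signal * bayes_factor) / (
        prior_real_signal * bayes_factor + (1 - prior_real_signal)
    )
    print(f"posterior on a real facial signal: {posterior:.0%}")   # ~17%

    # Encoded social bias remains the better explanation of the "result".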


> But we know the stereotypes already, because we use them as intentional cultural signals

So call that tool a 'detector of cultural signals'. Plenty of people do not care about knowledge of cultural signals, but a tool that lets AI predictors read and analyze cultural signals could be interesting. Such a tool could do work similar to that of an expert in a given cultural field, but could be used by non-experts.


> criminal behavior isn't going to directly cause changes in your face

Meth.


> I've yet to see any "uncomfortable truth" from psychometric research that wasn't relatively easily explained as culturally tied.

So you're not aware of race and IQ studies, where differences have persisted relatively consistently over a hundred years of research despite decades of effort to reduce any possible influence of cultural components?

If you're not aware of this, it's literally because it's an uncomfortable truth that is deliberately obfuscated and underreported to avoid controversy. Nevertheless, it persists as a datapoint.

> The point is that neither this study nor the homosexuality study have a relatively consistent relationship to observable reality.

You assert this, but I don't know it to be the case. There's also no reason to immediately discount early research in this direction; you run the risk of being like Minsky, fighting against the utility of the Perceptron, which has become the basis of these shiny new ML systems after a long hiatus.

> And "predicting" some arbitrary social attribute based on social conditioning is just encoding social bias into the model, which is bad.

This is silly, because it means you want to pretend we live in a vacuum where society doesn't exist and where people don't have outward indications of internal traits (whether deliberate or not). Sure, such things are hard to predict totally accurately and are highly context-specific, but not out of the question for reasonable predictions given a big enough dataset. Again, I won't pretend these things have been trained on really solid, representative data yet... but that's only a matter of time, and when it does happen, it has to be data that represents reality _as it is_, not some dreamed-up reality where the social issues and complexities we have today aren't included.

You can look at a person and tell me a lot about them and you can be pretty confident; you can guess their background, gender, how much they take care of themselves, what their likely socioeconomic status is, maybe even more just from looking at them. Not perfectly, but enough to inform predictions that have evolutionary consequences - otherwise, Nature would not have wasted massive amounts of neural wetware on them to start with. You do this every day whether you'd like to admit it or not; depending on where you live your personal safety may highly depend upon it, too.

Think about this: you may not like this research, but just like with gun control, trying to prevent people from doing it will just push it underground, and if there is predictive power with this, if it is only underground, it will only be used for bad purposes.


> So you're not aware of race and IQ studies, where differences have persisted relatively consistently over a hundred years of research despite decades of effort to reduce any possible influence of cultural components?

I'm certainly aware of such studies. I'm also aware that pretty much every study that has found some difference in IQ between races has been found to contain social conditioning issues, whether that was early issues like nutrition, which caused entire-standard-deviation jumps in IQ, or more recent discoveries like how the race of the test proctor can affect scores, or that paying someone more before an IQ test makes them perform better.

Because of all of these confounding factors, it's not really possible to draw any conclusions from what we know to be a bunch of variously flawed studies. To me, the most uncomfortable truth revealed by all this research is how willing some people are to jump at ultimately tenuous research that supports racist preconceptions.

> You assert this, but I don't know it to be the case.

That's okay.

> This is silly, because it means you want to pretend we live in a vacuum where society doesn't exist and where people don't have outward indications of internal traits (whether deliberate or not).

No, I am well aware that society exists and that people can have outward indications of traits. My point is that we should not build systems that discriminate based on those things. There's no point in building a system that can recognize a person's gender or social class or whatnot if we won't use it, and any use of such a system will be discriminatory and harmful. You're welcome to try to provide a counterexample, though.

> Think about this: you may not like this research, but just like with gun control, trying to prevent people from doing it will just push it underground, and if there is predictive power with this, if it is only underground, it will only be used for bad purposes.

This doesn't follow. If it's not underground it will also only be used for bad purposes, but be developed more quickly. And social pressure to not work on or be involved with such tools means that it will be used for significantly fewer bad purposes.

Using facial recognition for policing is a bad purpose. We were going that direction, now we're not, not because the police don't want to, but because social pressure against researchers, by other researchers, caused them to stop developing the products in that realm. That's a clear win.


> Because of all of these confounding factors, it's not really possible to draw any conclusions from what we know to be a bunch of variously flawed studies. To me, the most uncomfortable truth revealed by all this research is how willing some people are to jump at ultimately tenuous research that supports racist preconceptions.

This is deeply unfair to psychometrics researchers. For it to be racist, the claim would have to be that group differences in mean psychometric scores are overwhelmingly hereditary, and so far as I know absolutely nobody with any education on the subject is making that claim. They allow that it could be nutrition, toxin exposure, cultural, or any number of other factors. Nobody credible says it's just heredity because the evidence doesn't support that. Nonetheless, the differences are measurable and persistent.


I don't necessarily mean the researchers themselves.

> They allow that it could be nutrition, toxin exposure, cultural, or any number of other factors.

Someone, just weeks ago, here on HN cited Rushton at me to point out that it really was genetic factors. Which is not clear to anyone except maybe Rushton.


They are pointing out that facial recognition has a lot of the problems those earlier schemes had, in that it is another way to disenfranchise and cause problems for minorities. With FR the immediate issue is making many false matches, leading to yet another source of over-policing of darker-skinned people.


> With FR the immediate issue is making many false matches

I don't get it though. False matches seem to be a great thing: it means that facial recognition does not really work well and can't be used to tell who is where. Do you prefer FR that never makes mistakes?


The concern here is that, like a dog sniffing a car and barking, the false matches give erroneous probable cause. Combo this with systems that are more likely to produce false matches for minorities, and you've created a black hole of accountability for authoritarian harassment.
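
Back-of-envelope numbers (entirely hypothetical) for why the false matches are a problem rather than a comfort: scanning a crowd against a watchlist makes most hits wrong even at a "low" false match rate, and that error burden lands hardest on whichever groups the system misreads most often:

    scanned_faces = 100_000      # people passing the cameras
    on_watchlist = 20            # of whom this many are genuinely wanted
    false_match_rate = 0.001     # 0.1% of innocent faces matched in error
    true_match_rate = 0.95       # wanted faces correctly matched

    false_hits = (scanned_faces - on_watchlist) * false_match_rate
    true_hits = on_watchlist * true_match_rate

    print(f"false hits: {false_hits:.0f}, true hits: {true_hits:.0f}")
    print(f"share of 'matches' that are wrong: {false_hits / (false_hits + true_hits):.0%}")
    # ~100 false hits vs ~19 real ones: roughly 5 in 6 matches point at the wrong person.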


But then your problem is with authoritarian harassment, not with facial recognition. If you're fine with the fact that the police can stop and harass random people based on indicators that have a high rate of false positives, then facial recognition is not the first of your issues.


No it isn't the first of our issues. If there is a fire, you don't pour kerosene on it while trying to put it out, right?

Adding more tools we know will be abused at a time like this seems like adding fuel to fire and removing accountability from individual actions.

You might have a better chance fighting a case where you are stopped for no reason than fighting a case where your car is pulled over because the FR system mistook your blurry face for that of someone else due to poor lighting.


> you don't pour kerosene on it while trying to put it out, right?

The fact is that you're not trying to put out the fire (I am talking about the US, as this seems to be first and foremost a US issue). You're actually feeding it, in the conviction that it's the right thing to do. Removing the worst flammable materials is pretty pointless when you have already decided that you want the fire to keep burning because you still think it's for the best.

Stepping out of the metaphor: US police seem uniquely prone to escalating conflicts and generally asserting their absolute power as the expression of the law. From the outside, the US seems to be a society where everyone (starting from the police, down to citizens, criminals, law enforcement, and prisons) is encouraged to "stand their ground" up to the highest possible level of conflict, with no space for mediation, forgiveness, or forgetting. That is the fire, and the US thinks it's just fine to keep it burning.


It's too late.


It was too late even before facial recognition technology existed, because it's not like you can't apply modern tech to older video footage.


Sorry, I'd much rather defund The Atlantic. Facial recognition at least has valid uses beyond surveillance. The Atlantic has no known good uses that I can discern.



