
Defund Facial Recognition Before It's Too Late - sneeze-slayer
https://www.theatlantic.com/technology/archive/2020/07/defund-facial-recognition/613771/
======
JoshuaDavid
I am not sure what "defunding" would accomplish at this point. The research
has been done, and the products are available for astonishingly low prices. We
don't stop oil spills by defunding offshore oil wells that have already been
drilled, and attempting to do so is likely to result in more oil spills as the
money for maintenance is cut. Similarly, I'd expect that if the budget for
facial recognition is slashed, the parts that get cut will be oversight and
training, so instead of just dealing with questionable technology, you're
dealing with questionable technology used by people who have no idea what
they're doing.

Also, as long as public spaces are under constant video surveillance, stopping
facial recognition now only solves the problem temporarily. I think at a bare
minimum, we need standards for when this evidence should be admissible in
court (at current tech levels, probably approximately never) and when it is
acceptable to use it in searches. The technical ship has sailed, so any fix is
going to have to be legislative at this point.

~~~
DyslexicAtheist
> Amazon, Microsoft, and Google have continued efforts to ensure federal
> regulation that offers a stable and profitable market in which facial-
> recognition technology is, in fact, used by law enforcement, in direct
> opposition to the movement the companies claim to support.

If these tech companies were serious about fighting inequality, it would be
more effective to ban racist AI/facial recognition than to scrub technical
language of terms like "blacklist" and "master/slave".

------
7ArcticSealz
...I will have to hide my elongated skull now...

------
core-questions
> Rooted in discredited pseudoscience and racist eugenics theories that claim
> to use facial structure and head shape to assess mental capacity and
> character, automated facial-recognition software uses artificial
> intelligence, machine learning, and other forms of modern computing to
> capture the details of people’s faces and compare that information to
> existing photo databases with the goal of identifying, verifying,
> categorizing, and locating people.

I'm sorry, what? Does this person think that phrenology / physiognomy, two old
pseudosciences that have been discredited for a hundred years or more, are
actually at play within ML systems?

I'm totally willing to believe that ML facial recognition systems
insufficiently trained on a wide enough set of faces will mistake one person
for another. Sure. But to pretend that the system is based on eugenics betrays
a critical lack of understanding of what these things actually do, and
ascribes agency and racial animus to a computer program. It's pretty clear to
me that the author doesn't really know anything about how these things work.

The reason not to want facial recognition in public spaces is the same as the
reason not to want mass surveillance: citizens' reasonable expectation of
privacy. Of course, if one also wants no police in these areas, one should not
be surprised if they eventually go from peaceable public squares, to havens
for petty crime, to fearful places people avoid.

~~~
joshuamorton
> I'm sorry, what? Does this person think that phrenology / physiognomy, two
> old pseudosciences that have been discredited for a hundred years or more,
> are actually at play within ML systems?

[https://callingbullshit.org/case_studies/case_study_criminal...](https://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html)

This _shouldn't_ be a problem, because as you note phrenology is
pseudoscience that's been discredited for over a century. And yet.

To the broader point, I'd argue that generally any attempt to predict
criminality from facial structure or face pictures is phrenological in nature.
And people who do know what's at play with ML systems _do_ agree with this
take:
[https://twitter.com/ylecun/status/1276147230295166984](https://twitter.com/ylecun/status/1276147230295166984)

~~~
core-questions
> And yet.

Well, what's happening there is not a study of phrenology at all (phrenology
posited that specific regions of facial/skull structure were indicators). It's
actually a very interesting thing to look at. There was a previous study that
reported some degree of success in determining whether someone was homosexual
via ML.

Here's the thing: if this turns out to have actual predictive power, then it's
a subject worthy of scientific study, whether you like the outcomes and
conclusions or not. Plenty of other worthy areas of endeavour (e.g.
psychometric IQ research) have also revealed uncomfortable truths. If instead
these things turn out to not have any legitimate research value (i.e. can't
make predictions that can be experimentally verified), then we can stop
looking at them, but as long as they continue to maintain a relatively
consistent relationship to observable reality, they're as worthy a form of
science as any other anthropological research is.

We have a choice: either face this head on, include it in policies, and build
our sociology around the truth, or put our heads in the sand to make people
feel better. I for one believe that truth is far more important than feelings,
and that if we had continually given higher credence to feelings, the
Enlightenment and most of the scientific progress we've had would have been
far slower, if it had happened at all.

~~~
joshuamorton
> one that was reporting some degree of success in determining whether someone
> was homosexual via ML

Yes, which was also phrenological in nature.

> Plenty of other worthy areas of endeavour (e.g. psychometric IQ research)
> have also revealed uncomfortable truths.

I've yet to see any "uncomfortable truth" from psychometric research that
wasn't relatively easily explained as culturally tied.

> but as long as they continue to maintain a relatively consistent
> relationship to observable reality

The point is that neither this study nor the homosexuality study have a
relatively consistent relationship to observable reality. Your priors on us
being able to predict, independent of social conditioning, some arbitrary
social attribute based on someone's face should be very, very low.

And "predicting" some arbitrary social attribute based on social conditioning
is just encoding social bias into the model, which is bad.

> and that if we had continually given higher credence to feelings the
> Enlightenment and most scientific progress we've had would have been far
> slower if it had happened at all.

You mean like all the scientific progress that came out of phrenological
research?

~~~
JoshuaDavid
> Your priors on us being able to predict, independent of social conditioning,
> some arbitrary social attribute based on someone's face should be very, very
> low.

What does "independent of social conditioning" mean here? Can you give some
examples of social attributes that arise independent of social conditioning?

~~~
joshuamorton
> What does "independent of social conditioning" mean here?

So we know, for example, that if you train a model to predict "criminality"
based on face here in the US, it will find that race is a strong predictor of
criminality.

The first problem with this specific example is that the data is biased:
certain communities are overpoliced. We know, for example, that black and
white people use marijuana at around the same rate, but that black people are
more likely to be arrested for use. So they'll be more likely to be
represented in a dataset of "criminals" even if they aren't actually more
criminal. So that's one social factor. But let's pretend that we can construct
a socially untainted dataset that represents the true underlying crime rate,
we correlate it with face images, and the racial disparity still exists. I
want to reiterate that we're well off into the world of fantasy here, but
let's run with it for demonstration purposes.

There are generally 3 conclusions you can draw from a correlation like this: A
directly causes B, B directly causes A, or something else more complex is at
play. It's unlikely for facial structure changes to directly cause
criminality, and unless you're Pinocchio, criminal behavior isn't going to
directly cause changes in your face.

So what more complex thing is at play? Well one answer is genes. It could be
that the genes that make someone darker also make them more naturally
predisposed to violence. That is, some factor C directly causes both A and B.
Or it could be even more complex, for example that socially, people who
exhibit black skin are more likely to be placed in conditions that breed
criminal behavior[0]. Since economic and social status are heritable, and so
is skin color, this seems reasonable to conclude, and there's lots of other
evidence that this is the case.

But if that's the case, then looking at a picture of a face doesn't actually
have predictive power. At best it just recognizes that your average black
person is likely to have been raised in a situation where they were more
likely to commit a crime. It doesn't have any predictive power about a black
person who wasn't raised in those conditions.

So by "independent of social conditioning", what I mean is that such a model
isn't useful unless you subscribe to the belief that the genes that cause
facial structures are correlated with the genes that cause criminal behavior
(which presupposes that those genes exist).

Otherwise you aren't actually looking at even a direct correlation, and in
fact it's very likely that if you divide your population up in smart ways,
you'll find groups against which you are unfairly biased.

Just because the social signifiers in the study we're looking at aren't as
obvious to you or me as skin tone doesn't mean they aren't there, and again
that presumes the data is good, which we know it isn't.

> Can you give some examples of social attributes that arise independent of
> social conditioning?

I don't know that I fully explained this above, so let's create another
fantasy world to explain this more concretely. Let's agree that murder is bad.
This is solely a social agreement, but we decide on it based on ethical
beliefs.

Imagine that in this fantasy world there are genes that cause one to
occasionally enter a bloodlust that forces one to go on a relatively
uncontrollable killing rampage. Or for a more direct fantasy example, turn
into a werewolf that then goes into a relatively uncontrollable killing
rampage. Or be an Orc which is "naturally" evil (this is actually a relatively
common fantasy trope, huh).

This genetic marker would imply a genetic predisposition to criminal behavior,
despite any social conditioning. Compare this to a relatively normal human
child who is trained from a young age that they are "no one", a part of a
greater movement that requires assassinating the wrong people, and that this
assassination is sometimes necessary for the greater good.

Both are more likely than your average person to commit a crime, but one was
conditioned to this socially while one was genetically predisposed. If this
cult holds onto family lines, there will be similarities between cult members,
but people who escape the cult or who were never a part of it might be
unjustly thought to be criminal, solely because they resembled the cult.

To jump back to the original statement, this means that if you set up a
confusion matrix that includes "family of cult members" as a category, your
model will perform badly and discriminate against them. You can see why this
might cause huge issues, like causing the justice system to chase or harass
people who are related to criminals.
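
The fantasy scenario above can be sketched as a toy simulation. Everything here is invented for illustration (the "look" feature, the group sizes, the crime rates): a facial feature is shared by cult members and their innocent relatives, but only membership itself drives criminal behavior, so a classifier that only sees the face has a terrible false-positive rate on exactly that subgroup.

```python
# Toy sketch, all numbers invented: a "look" is shared by cult members AND
# their innocent relatives, but only membership drives criminal behavior.
import random

random.seed(0)

population = []
for _ in range(100_000):
    in_cult_family = random.random() < 0.05              # 5% share the family "look"
    has_look = in_cult_family                            # the face encodes lineage, not behavior
    is_member = in_cult_family and random.random() < 0.5 # only half are actual members
    criminal = is_member and random.random() < 0.9       # membership drives crime
    population.append((has_look, in_cult_family, criminal))

# A "predictor" that only sees the face: flag anyone with the look.
def false_positive_rate(group):
    flagged_innocent = sum(1 for look, _, crim in group if look and not crim)
    innocent = sum(1 for _, _, crim in group if not crim)
    return flagged_innocent / innocent

family = [p for p in population if p[1]]
others = [p for p in population if not p[1]]

print(f"FPR among cult families: {false_positive_rate(family):.2f}")  # prints 1.00
print(f"FPR among everyone else: {false_positive_rate(others):.2f}")  # prints 0.00
```

Every innocent relative gets flagged, while the overall accuracy still looks fine, which is exactly the per-subgroup confusion-matrix failure described above.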

Hopefully that explains. It's essentially a correlation vs. causation issue.

[0]: I should note here that this is true whether you subscribe to the belief
that "black culture" encourages/celebrates criminality, or the belief that
"white supremacist and social structures" push black people into situations
where they can't avoid crime.

~~~
zajio1am
> There are generally 3 conclusions you can draw from a correlation like this:
> A directly causes B, B directly causes A, or something else more complex is
> at play. ... Hopefully that explains. It's essentially a correlation vs.
> causation issue.

Note that for a predictor, correlation vs. causation does not really matter;
it matters for intervention. If you have a feature A that in the population
always causes features B and C, and nothing else causes them, then the
presence of B is a perfect predictor of C, but an intervention that changes B
(and not A) does not affect C.

On the other hand, if feature D causes feature E in 90% of cases, and nothing
else causes E, then D is a predictor of E with 10% false positives, but an
intervention on D does affect E.
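
The first case (A always causes B and C, nothing else does) can be made concrete with a toy simulation; the variables and probabilities here are invented for illustration:

```python
# Minimal sketch: B perfectly predicts C, yet intervening on B does nothing,
# because the only arrows are A -> B and A -> C (all numbers invented).
import random

random.seed(1)
N = 100_000

A = [random.random() < 0.3 for _ in range(N)]  # hidden common cause
B = list(A)                                    # A always causes B
C = list(A)                                    # A always causes C

# Observationally, B is a perfect predictor of C:
assert all(b == c for b, c in zip(B, C))

# Intervention do(B=1): force B on for everyone. C is untouched,
# since no arrow runs from B to C.
B_intervened = [True] * N
print(sum(C) / N)  # base rate of C stays ~0.3, unchanged by the intervention
```

An intervention on D in the second case would change E, which is the asymmetry the comment describes: identical predictive behavior, opposite behavior under intervention.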

It is true that "perfect causality" predictors, where the accounted factors
are the only causes of the predicted feature, have the advantage that they
work the same for any subset (or any change) of the population, while
predictors that ignore some causal factors (like the predictor from the first
example, which ignores the common cause) may have vastly different
probabilities for a subset (or change) of the population, when the
distribution of the ignored factors changes. But in practice most real-world
causal networks are super complex, and many real-world tests ignore many
causal factors. So it is a kind of isolated demand for rigor [1].

[1] [https://www.lesswrong.com/posts/fzeoYhKoYPR3tDYFT/beware-isolated-demands-for-rigor](https://www.lesswrong.com/posts/fzeoYhKoYPR3tDYFT/beware-isolated-demands-for-rigor)

~~~
joshuamorton
> Note that for predictor it does not really matter correlation vs. causation.
> That matters for intervention.

Yes, but you don't build a model without intending some intervention, so this
distinction is irrelevant in the context of applied ML, although it is a true
statistical fact.

> So it is kind of isolated demand for rigor [1].

I don't see how. The isolated demand for rigor "fallacy" that Scott Alexander
talks about is when you ask for rigor to be applied in ways that are bad
faith. Let me rephrase my concern:

We have lots of evidence that real-world causal networks are super complex and
many real-world tests ignore many causal factors. Similarly, we have no
a-priori reason to believe that facial structure is correlated with {sexual
orientation, innate level of intelligence, innate criminality}. And in fact we
have strong reasons to believe that most of the way that those things present
is due to cultural influence.

So if someone shows up with a groundbreaking study that shows that they can
predict some innate attribute based on facial structure, it's likely that
they're actually seeing cultural biases (or cultural correlations) and not
innate factors (or correlations with genotypic things). In other words, our
priors should be that any model in this space is simply discriminating based
on stereotypes, not revealing some innate way to "predict" these attributes.

The model isn't a _predictor_ but a _recognizer_, and while that distinction
is semantic, it's an important one.

------
rexreed
It's too late.

~~~
core-questions
It was too late even before facial recognition technology existed, because
it's not like you can't apply modern tech to older video footage.

------
m0zg
Sorry, I'd much rather defund The Atlantic. Facial recognition at least has
valid uses beyond surveillance. The Atlantic has no known good uses that I can
discern.

