
A.I ‘Gaydar’ Could Be the Start of Something Much Worse - sus_007
https://www.theverge.com/2017/9/21/16332760/ai-sexuality-gaydar-photo-physiognomy
======
caconym_
I don't really understand the article's position. Comparing physiognomy to a
computer program that can _actually_ make accurate predictions about people at
a statistically significant rate based on their facial features is like
comparing alchemy to chemistry. They might be superficially similar, but one
actually works and one doesn't.

edit: Of course, I think this technology is horrifying and that a) it will
never be 100% accurate, and b) it is just _begging_ to be abused. I just don't
think denying the reality of its existence will be a good way to fight against
its misuse. Pandora's box has been opened.

~~~
alva
The position of the article is one which will become extremely prevalent if
this area of research gets results. If certain behavioural indicators can be
identified reliably with this method, it will be the atomic bomb of political-
incorrectness.

~~~
candiodari
It could be even worse if it becomes self-reinforcing. One could see this
being developed in a way that "allows HR to make better decisions", and let's
say it predicts purple people with yellow spots have a tendency to steal.

Which of course will lead to exactly what it predicts.

And we all know HR departments would gladly use stuff like this. Especially
for the jobs where it would matter most: underpaid entry-level jobs that a lot
of people get as either a last resort or a first job.

------
bitL
It's a major can of worms. If any of these classifiers works well on
intelligence, orientation, character, or capabilities, it might give
scientific backing to horrible practices like eugenics, abortion/extermination
of "undesirables", etc. This is a truly horrifying scenario, an ultimate black
pill, and it seems humanity will need to confront it soon.

~~~
adventured
Apparently we can accurately determine that a child is a psychopath from an
extremely young age now:

[http://www.nytimes.com/2012/05/13/magazine/can-you-call-a-9-year-old-a-psychopath.html](http://www.nytimes.com/2012/05/13/magazine/can-you-call-a-9-year-old-a-psychopath.html)

There's no question that the next 20-30 years will be overflowing with cans of
worms.

The next step after early (fetus or embryo stage) diagnosis of some trait is
that we're soon going to have the genetic-alteration power to do something
about it, courtesy of CRISPR. It seems practically guaranteed that
authoritarian societies will begin forcing parents to alter their unborn
children to eliminate all sorts of attributes considered undesirable by
whatever regime is in power.

~~~
TheSpiceIsLife
It won't take an authoritarian regime to force people to alter their embryonic
children. I'm certain people will be willingly lining up at the clinics.

~~~
pizza
Heh, I'm... wary of that, I guess? The people who alter their embryonic
children during the first 100 years of that technology's existence will - I
assume - mostly get a raw deal, until embryonic alteration as a field goes
through a billion 'Anna Karenina principle' failures.

We have the ability to adapt by simulating nature in our minds - science, too,
comes from this - but we are the result of so much time and such harsh
filtering. In other words, I don't think we will see as much interest ten
years in as we would during the first year; in such little time, we would only
pick the low-hanging-fruit changes. But these are also the ones nature has not
propagated - which would be odd if they were both the simplest to achieve
(including by accidental mutation) and also known to be beneficial on average.

I'd imagine, hypothetically, that once the first wave of children starts
exhibiting genetic illnesses, the willingness to sign children up will drop,
out of simple precaution: parents are unlikely to have ever seen evidence that
the procedure does not risk ruining the child's life.

------
dogruck
It's illogical that the Verge article takes it as fact that it's impossible to
infer someone's character from photos or videos. If that were true, then the
only concerning outcome would be that we build and operate software that
falsely purports to provide insight.

To me, the worrying outcome is that it _is_ feasible. Before you know it, SAT
tests are replaced by photos.

Of course I can think of good applications, such as scanning for terrorists by
identifying abnormal signs of stress.

~~~
TheSpiceIsLife
Not all abnormal signs of stress are indicative of terrorist tendencies.

~~~
dogruck
Agreed. I think about that sometimes, for example, when I'm stressed at the
airport just because I'm late or headed to an important business meeting.

That said, I can imagine an AI-driven screening mechanism that was rooted in a
causal relationship to data, instead of random nonsense.

------
amiga-workbench
I don't see where the panic is coming from. I'm almost certain Google,
Facebook, and several advertising agencies know I'm gay from my browsing
history, and all that data is already made available to governments.

~~~
tgb
"You're not _really_ gay, you're just acting gay for attention - look the AI
says you're not gay!"

"I can't believe you're still seeing John! You know he's gay, right? I tested
his profile picture - it came back 98%!"

The whole point of the LGBTQ movement was to let people decide for themselves
what they are, not to label them based on what society says they ought to be.

~~~
wyager
Just because people might choose to misinterpret the results of a test doesn't
mean the test is invalid. People already do an acceptable job understanding
the existence of false positives and negatives on other tests.
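To put rough numbers on the false-positive point (all figures here are hypothetical, chosen only to illustrate the base-rate effect), a quick Bayes calculation shows what a "positive" result would actually mean at a low base rate:

```python
# Hypothetical figures: a classifier with 91% sensitivity and 91%
# specificity, applied to a population where 5% have the trait.
base_rate = 0.05
sensitivity = 0.91   # P(flagged | has trait)
specificity = 0.91   # P(not flagged | no trait)

# Total fraction flagged = true positives + false positives.
p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)

# Bayes' rule: P(has trait | flagged)
ppv = sensitivity * base_rate / p_flagged
print(f"P(trait | flagged) = {ppv:.2f}")  # ~0.35: most flags are false positives
```

So under these assumed numbers, a test that sounds "91% accurate" is still wrong nearly two times out of three when it flags someone - whether people internalize that correctly is exactly the disagreement here.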

~~~
pizza
If people thought about what they were doing at all times, both claims could
be true; but in the real world, both of those claims will almost surely be
rejected. People don't choose to be certain, they are just unaware of their
overconfidence.

Overconfidence doesn't change the probabilities of the rare, harmful events
that they are predicting won't happen. But their bias against self-doubt
_does_ increase their willingness to bet against unprecedented events.
n = 1001 safe observations (where E[payoff] > 0) cannot predict beforehand
that the next will be catastrophic with non-zero probability (E[payoff] =
-inf, or at least with absolute value >> N * E[payoff seen up till then]); the
expected payoff would appear vastly different before the event than after.
The bets they choose will reflect the way they underestimate the consequences
of speculation.

Evidence of absence outweighs absence of evidence. Another way to put that:
the weight of ¬valid(test) >> the weight of ¬¬valid(test), so we should not
decide the test is valid just because it has not yet been invalidated.

My gut feeling is that the upper bound of the proven benefit of the test (to
individuals/society, even if it were ~99.99% accurate) << the upper bound of
its harm (intentional or not) - because 'invalidity' isn't a question of
whether a test is always non-satisfiable; the burden of proof falls on the one
introducing it, rather than the one rejecting it.

------
zitterbewegung
Can a model predict that someone is gay? Who knows. Can someone create a model
that looks legitimate but has false negatives and positives? Absolutely. And
they will probably be able to sell it to countries.

This reminds me of the movie The Final Cut, starring Robin Williams, in which
everyone has an implant that records everything and you can get a face tattoo
to bypass it. In this case you would get a face tattoo that intentionally
misclassifies you as whatever orientation you want.

~~~
cool_shit
Then they would give their algorithms more power and memory, upgrade their
cameras to capture more spectrum and/or resolution and learn who has used a
tattoo to cover up their face.

------
bitL
More coming from China:

"Automated Inference on Criminality using Face Images"

[https://arxiv.org/pdf/1611.04135v2.pdf](https://arxiv.org/pdf/1611.04135v2.pdf)

~~~
R_haterade
IIRC, this was widely discredited due to problems with the sampling procedure.
Can't be arsed to dig up the criticisms I found.

Confirmed some of my long-held biases about physiognomy though, so I didn't
dismiss it outright.

------
omalleyt
The funny thing is that the author thinks an 81% success rate at predicting
which image out of a pair of images is of a gay person is easier than
predicting with 81% accuracy whether any given individual is gay or straight.

Meaning the author really doesn't understand the concept of unbalanced
classes.
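A toy simulation (numbers hypothetical, chosen just to illustrate) makes the unbalanced-classes point concrete: ~81% pairwise accuracy is far above its 50% chance level, while "81% individual accuracy" would be below the trivial majority-class baseline at a low base rate:

```python
import random

random.seed(0)

# Hypothetical population: 7% base rate, with a score model whose
# positives outscore a random negative ~81% of the time.
n = 100_000
base_rate = 0.07
pos = [random.gauss(1.25, 1.0) for _ in range(int(n * base_rate))]
neg = [random.gauss(0.0, 1.0) for _ in range(n - len(pos))]

# Pairwise task: given one positive and one negative, pick the positive.
# Chance level is 0.50.
trials = 100_000
wins = sum(random.choice(pos) > random.choice(neg) for _ in range(trials))
print("pairwise accuracy:", wins / trials)  # ~0.81

# Individual task: the trivial rule "always predict negative" already
# scores 93%, so 81% individual accuracy would be below baseline.
print("always-negative baseline:", 1 - base_rate)  # 0.93
```

Under these assumed numbers, the pairwise result is the harder, more informative one, not the easier one.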

------
fosco
I must say I remember reading Blink[0] by Malcolm Gladwell, which spoke of
Silvan Tomkins[1] looking at the faces of people (and animals) and describing
their personalities with extremely high success rates, to such a degree that
by looking at the faces of a tribe in a rural area of Africa he was able to
determine parts of their cultural rituals and tendencies.[2] While I still
find this far-fetched, considering people are born a certain way it does make
sense to me that, like a fingerprint, certain types of brain structure may
have an impact on how the face looks. Please take a look at my second link,
which has Malcolm Gladwell discussing Silvan Tomkins, specifically towards the
end of it.

[0][https://en.m.wikipedia.org/wiki/Blink_(book)](https://en.m.wikipedia.org/wiki/Blink_\(book\))

[1][https://en.m.wikipedia.org/wiki/Silvan_Tomkins](https://en.m.wikipedia.org/wiki/Silvan_Tomkins)

[2][http://gladwell.com/blink/the-mysteries-of-mind-reading/](http://gladwell.com/blink/the-mysteries-of-mind-reading/)

------
tomrod
Interesting -- why is the composite skin tone so light? I've heard ML
algorithms can have a hard time with darker skin tones, though I'm still not
sure of the reasons. Is the research here subject to these same issues?

~~~
stevenwoo
Maybe related to the bias of film and digital cameras for Caucasian skin
tones? [https://priceonomics.com/how-photography-was-optimized-for-white-skin/](https://priceonomics.com/how-photography-was-optimized-for-white-skin/)

------
Animats
A system to identify concealed gun carriers from video should be possible.
It's not hard if you can see someone step up or down; the inertial effects of
carrying a heavy weight show.

~~~
icelancer
There'd have to be ground-truth gait data for the person. Gait is highly
variable between subjects, and if the weapon is worn in a centered position
(the small of the back is a common concealment position), it's unlikely this
will work.

Holsters are not always worn on the lower extremities, either. I'm not sure
this would be all that successful. It seems quite difficult.

------
basicplus2
> And to be clear, based on this work alone, AI can't tell whether someone is
gay or straight from a photo.

What can I say?

------
cerealbad
seems like some kind of face camouflage is necessary for the future.

~~~
hacoo
Relevant paper (attacks facial recognition, but same concept):
[https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf](https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf)

------
0xbear
As a human I can’t predict whether someone is gay or not most of the time.
Most gay people I know don’t show any of the stereotypical “gay” cues. I don’t
see how a machine would be able to determine something like this, so this
story just seems like meaningless clickbait to me.

~~~
icelancer
>>As a human I can’t predict whether someone is gay or not most of the time

Why would your success as a human have anything to do with whether or not an
algorithm could have success in any field?

~~~
0xbear
Because it’s really freaking hard to beat humans on visual tasks.

~~~
barry-cotter
The fact that you can't do it doesn't mean humans can't do it.

~~~
0xbear
This is modern-day phrenology. It _can't_ work.

