
Emotion Science Keeps Getting More Complicated. Can AI Keep Up? - imartin2k
https://howwegettonext.com/emotion-science-keeps-getting-more-complicated-can-ai-keep-up-442c19133085
======
Bartweiss
The history-of-psychology content here is fascinating - it's hard to overstate
the harm done by Ekman's theory to everything from police investigations to
Psych 101 textbooks. But the attempt to tie AI into all of this is weirdly
presumptuous. It assumes that the progress of AI development is predicated on
psychology theory without supporting that claim in any way. The rebuttal is
painfully obvious: children learn to read emotions without some grand backing
theory, so why do we assume AI will need one to reach human levels?

It ought to be telling that this is a piece about AI written by a historian,
citing a psychologist, a historian, and a fictional philosopher - but not a
single person with an AI or even computing background. The result is a
painfully slow and sloppy retreading of existing ideas. First p-zombies and
then the Chinese Room are sketched out without any reference to the original
works, ending with a question-begging assertion that perfect imitation of an
emotion is meaningfully different from feeling the emotion.

The first article in the series, grandly titled _Silicon Valley Thinks
Everyone Feels the Same Six Emotions_, is even worse. It critiques Ekman and
his work with the TSA, then talks about Alexa, but offers no evidence at all
that Alexa or any other Valley project involves Ekman's six-emotion model.
(Anyone with a cursory knowledge of machine learning will tell you that it
almost certainly _doesn't_. Reducing inputs down to six artificial categories
is not the sort of thing that improves training.)

Given all of that, the parting shot of this piece is especially galling. After
indicating total unfamiliarity with AI theory, AI ethics, and post-1970
philosophy in general, the author blithely declares: _"if I were
contemplating a career in philosophy right now, I’d be thinking about making
my central field the ethics of emotion AI. There’s much to be done."_

~~~
jack_pp
Children learn to read emotion using a brain that evolved to do so over
millions of years

~~~
BenMorganIO
The scale of the learning is interesting in itself. From a few months old, to
a few years, all the way to the preteen years, emotions are constantly being
reevaluated, and new ones keep developing as well.

~~~
Bartweiss
Is there any decent model of what actually _happens_ when children develop
theory of mind?

Along with having a suite of emotions and recognizing cues in others, theory
of mind is pretty obviously a required piece of learning to read emotion for
humans. Below a certain age, it's literally impossible to generate ideas like
"the cat doesn't like that", since there's no space between "I like that" and
"that is likable". But I don't think I've ever seen somebody describe what
kind of structural or chemical changes are correlated with, e.g., passing the
mirror test.

~~~
BenMorganIO
From what I remember, children are born with a very underdeveloped brain. The
prefrontal cortex, I believe, isn't present at birth (though I could be wrong
about that); if it is, it's very small.

The self-conscious emotions (pride, shame, guilt, envy, remorse,
embarrassment, hubris, and many more) come from the prefrontal cortex. As it
develops over a very long period of time, so do those emotions. I remember
that embarrassment doesn't typically develop until you're approximately seven
years old.

------
mr_overalls
“The question of whether a computer can think is no more interesting than the
question of whether a submarine can swim.” ― Edsger W. Dijkstra

Replace "think" with "emote" and the insight of the original quote still holds
true, I believe. The advanced of artificial intelligence is not built on a
bedrock of psychological understanding of human emotion.

------
Xcelerate
Psychology needs to start moving away from trying to categorize things into
these arbitrary fuzzy human buckets ("emotions") and toward being able to
predict things. If someone is diagnosed with "X", the most pressing question
becomes: how is this diagnosis useful to that person or to society? Is the person
trying to change from the state they are in to a different state? If so, does
knowing they are "X" or feeling "Y" help with this goal? I would argue that
for the past few centuries, no, this process hasn't helped very much.

For instance, consider the DSM description of borderline personality disorder:

"BPD is a pervasive pattern of instability in interpersonal relationships,
self-image, and emotion, as well as marked impulsivity beginning by early
adulthood and present in a variety of contexts, as indicated by five (or more)
of the following:

- Frantic efforts to avoid real or imagined abandonment

- A pattern of unstable and intense interpersonal relationships characterized
by extremes between idealization and devaluation (also known as "splitting")

- Chronic feelings of emptiness

- [...]"

You get the point. You have to meet _five or more_ of the listed criteria. Not
four or more. Not six or more. But five or more. There are a lot of ad hoc,
informally specified, subjective criteria here that would probably show no
reproducibility if ten different psychologists were each told to independently
specify which of the above criteria a given patient satisfies. See the
recently published update on the reproducibility crisis: [https://www.the-scientist.com/news-opinion/half-the-time--ps...](https://www.the-scientist.com/news-opinion/half-the-time--psychology-results-not-reproducible--study-65117).

In general, psychology doesn't seem to build on many of the lessons the
physical sciences have acquired over a very long period of time, and there
doesn't seem to be much math involved. Open a textbook on quantum field theory
and open a textbook on psychology, and notice the main difference between the
two. Data on people is scarce, often biased, and improperly sampled, and study
results are frequently cherry-picked to promote some predetermined narrative
about how the human mind works. The human brain is one of the most complex
objects in the known universe, and pretending that we have more than a modicum
of understanding of this object is the kind of overconfidence and hubris that
will lead to centuries of inadequate mental health care for the people who
need it.

What's the solution? Start collecting data and start trying to
_quantitatively_ predict things. I have no doubt that machine learning will
fail to predict human behavior, but I also have no doubt that it will fail
much less drastically than the current approach does. Outcomes need to be
strictly measured. In this case, I don't think we are as interested in
"explanatory" science as in "predictive" science; i.e., we don't care as much
about _why_ the black-box model works as long as it improves people's lives.
Consider, for instance, the current system of categorizing personalities: the
"Big Five" system (OCEAN). Why is this still a thing? Any sort of primitive
gradient-boosted decision tree (GBDT) model would almost certainly perform
better than this, so it's a mystery to me why these crude systems are still
being used in the mental health industry in 2018. A rough sketch of the
comparison I have in mind is below.
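
To be concrete, here's a minimal sketch of that comparison on entirely
synthetic data (every variable name and number here is illustrative, not a
real dataset or result): predict an outcome directly from raw questionnaire
items versus from five aggregate scores derived from them.

    # Minimal sketch on synthetic data, not a real study. The point is only
    # the shape of the comparison: model a measured outcome from raw item
    # responses vs. from five derived summary scores (a Big Five-style
    # reduction of the same items).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000
    X_items = rng.integers(1, 6, size=(n, 50)).astype(float)  # 50 Likert items (1-5)
    X_scores = X_items @ rng.normal(size=(50, 5))             # 5 aggregate scores
    y = X_items[:, :3].sum(axis=1) + rng.normal(size=n) > 9   # synthetic outcome

    model = GradientBoostingClassifier(random_state=0)
    print("raw items:", cross_val_score(model, X_items, y, cv=5).mean())
    print("5 scores :", cross_val_score(model, X_scores, y, cv=5).mean())

Whatever the numbers come out to, the methodological point is that the outcome
is strictly measured and the two representations can be compared head to head,
instead of arguing over which fuzzy bucket a person belongs in.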

~~~
winchling
_> Start collecting data and start trying to quantitatively predict things_

One can't start collecting data until one knows what to measure and why. We
normally start with _problems._

Psychologists have made little progress, as far as I can tell, because they
unwittingly chose a hard problem: how does a general problem-solver work?
They've actually been trying to solve AGI but without the tools of computer
science.

> _The human brain is one of the most complex objects in the known universe_

Otter hardware seems good enough:

[https://www.youtube.com/watch?v=zGTzclrsujA](https://www.youtube.com/watch?v=zGTzclrsujA)

~~~
Xcelerate
> We normally start with _problems_.

I never said we didn't. Why would anyone collect data without knowing the
problem that it will be used for? That's implicit. I don't need to directly
state the obvious.

> Otter hardware seems good enough

Ok, here's a technical description of what I meant by "complex object": an
object whose output can _in theory_ be produced by a program for a universal
Turing machine that is significantly shorter than the output itself, but is
still longer than the corresponding shortest programs for the other objects
that humanity has interacted with and is capable of observing. I think you'd
be hard pressed to argue that the Kolmogorov complexity of an otter's brain is
greater than that of a human's.
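
(For reference, the quantity I'm gesturing at is Kolmogorov complexity; the
standard textbook definition, relative to a fixed universal Turing machine U,
is

    K_U(x) = \min \{ |p| : U(p) = x \}

i.e. the length of the shortest program p that makes U output x. Nothing in
that definition is specific to brains; it's the general notion.)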

~~~
winchling
_> I never said we didn't. Why would anyone collect data without knowing the
problem that it will be used for?_

It bears emphasis because trying to measure stuff without new theories has
been a besetting problem in fields such as psychology and medical research,
and I think it's a major reason why many studies can't be reproduced.

 _> I think you'd be hard pressed to argue that the Kolmogorov complexity of
an otter's brain is greater than that of a human's._

Yes. But my point is that we wouldn't need to understand much about how human
hardware works if we could only understand how an otter brain works. Beyond
that, the complexity is due to human culture/memes, which we study in their
own right anyhow (in fields such as history, politics, and literature).

------
ninegunpi
>We may never be able to build a machine that can recognize the full diversity
of human emotional experience

Even humans have a lot of problems recognizing the full diversity of their own
emotional experience, unless trained appropriately.

