
Real-time emotional voices with D.A.V.I.D - elie_CH
http://cream.ircam.fr/?p=44
======
JoshTriplett
The concept seems quite interesting. I can hear slight differences between the
different versions, but I honestly can't hear those differences as conveying
any particular emotion. If you mixed them up and asked me which one sounded
most "happy" or most "scared" or most "sad", I'd be stumped.

~~~
fluxist
Neither can I, honestly. After some googling, I see that this would apparently
be called "auditory affective agnosia" (if it were at the level of
neuropsychological pathology). I wonder if this software could be used
diagnostically in disorders associated with deficits in empathy or
emotional identification -- autism, Antisocial or Narcissistic Personality
Disorders, etc.

Perhaps it could also be used as a therapeutic tool. One could train with it
in order to reduce the affective threshold at which one detects emotion.

~~~
JoshTriplett
> One could train with it in order to reduce the affective threshold at which
> one detects emotion.

Interesting idea. If software like this can smoothly adjust intensity, you
could find the threshold at which you can reliably detect it, and attempt to
push that downward to pick up more subtlety.
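
That kind of training loop is essentially a psychophysics staircase. As a rough illustration (not anything DAVID itself provides), here is a minimal 1-up/2-down staircase in Python that hunts for the intensity at which a listener reliably detects the emotional effect; the `listener` function and its 0.35 "true threshold" are made-up stand-ins for a real trial:

```python
import random

def staircase_threshold(detects, start=1.0, step=0.1, floor=0.0,
                        reversals_needed=8):
    """Simple 1-up/2-down adaptive staircase.

    `detects(intensity)` returns True if the listener reported hearing
    the emotion at that effect intensity. Two detections in a row step
    the intensity down; any miss steps it up. The estimate is the mean
    intensity at the reversal points (converges near ~71% detection).
    """
    intensity = start
    direction = None          # "up" or "down"; a change marks a reversal
    reversals = []
    correct_streak = 0
    while len(reversals) < reversals_needed:
        if detects(intensity):
            correct_streak += 1
            if correct_streak >= 2:      # two in a row -> step down
                correct_streak = 0
                if direction == "up":
                    reversals.append(intensity)
                direction = "down"
                intensity = max(floor, intensity - step)
        else:
            correct_streak = 0           # any miss -> step up
            if direction == "down":
                reversals.append(intensity)
            direction = "up"
            intensity += step
    return sum(reversals) / len(reversals)

# Hypothetical simulated listener: detects the effect above a true
# threshold of 0.35, with occasional lucky guesses below it.
random.seed(0)
def listener(intensity, true_threshold=0.35):
    return intensity >= true_threshold or random.random() < 0.05

estimate = staircase_threshold(listener)
```

With a real subject, `detects` would be replaced by an actual trial (play the processed clip, ask "did that sound scared?"), and tracking the estimate across sessions would show whether the threshold is in fact moving downward.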

------
DiThi
OMG. A TV channel used it (or a very similar technique) here in Spain to
manipulate some video footage.

[http://iniciativadebate.org/2016/01/22/por-que-solo-en-video...](http://iniciativadebate.org/2016/01/22/por-que-solo-en-video-antena-3-suena-temblorosa-la-voz-de-anna-gabriel/)

In the first video (at around 0:33) the voice sounds as if the "scared" filter
had been applied too heavily, and they cut the section where she says "yes, we
met with X" out of context to make it sound like a shameful confession rather
than the firm statement of known information that it is (0:41 of the second
video).

------
omarforgotpwd
Amazing. Imagine combining this with facial emotion detection algorithms like
those in Microsoft Kinect and making an AI that can automatically empathize
with users. When you're happy, the AI is happy. When you hate the world, it
hates the world. Maybe you could even optimize the UI so that certain offers,
like buying things or rating apps, would only be shown when the user is in a
good mood.

~~~
serpix
This is going to be used in propaganda if not already.

