
Emotion Detection in Games? - ingve
http://www.gamefromscratch.com/post/2015/11/15/Emotion-Detection-in-Games.aspx
======
vicbrooker
I've been on-and-off interested in facial expression for a while, and can't
help but feel that 'the emotion tool' is kind of flawed.

Unless there's been a groundbreaking shift in the last 12 months or so, this
software will be using the Facial Action Coding System to recognize muscle
contractions which are then interpreted as 'emotions'. FACS is an amazing tool
(can't recommend learning it enough if you're interested in this stuff), but
it still relies on interpreting the context of the expression to determine an
emotion. It's a notation system for expressions, not an identification system
for emotions.

There's an off chance that their algorithm was trained on crowdsourced data,
which can be even worse because the emotions we display and the ones we feel aren't
always the same. For example, I may force a smile to cover fear. FACS training
can take 100+ hours because a lot of heuristics for identifying expressions
need to be unlearned.

It's a bit of a worry when
[https://www.projectoxford.ai/demo/emotion#detection](https://www.projectoxford.ai/demo/emotion#detection)
uses mostly posed photos to demonstrate their software. The third photo
contains perhaps one smile that shows signs of being genuine, the rest seem to
be posed (I'd need higher resolution to be sure). If the software were
accurately reading their emotions (cf. expressions) in this third photo, then
the results should not be that they are happy.

Replacing the emotions with names for expressions would make me feel a little
more comfortable about using this: 'happy' with 'smile', 'angry' with 'glare'
and so on. But then it seems a lot less useful, so I see why they've
described/marketed it the way they did.
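That relabeling would be trivial to layer on top of whatever the API returns. A minimal sketch, assuming hypothetical label names (the service's actual category names may differ):

```python
# Illustrative mapping from inferred "emotions" back to the observable
# expressions they are actually detected from. Names are made up for the
# sketch, not taken from any vendor's API.
EMOTION_TO_EXPRESSION = {
    "happiness": "smile",
    "anger": "glare",
    "surprise": "raised brows",
    "sadness": "frown",
}

def relabel(scores):
    """Rename emotion keys to expression keys, keeping scores unchanged."""
    return {EMOTION_TO_EXPRESSION.get(k, k): v for k, v in scores.items()}
```

Unknown keys pass through untouched, so the wrapper degrades gracefully if the service adds categories.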

------
PeterStuer
We have been using similar software
([http://www.noldus.com/human-behavior-research/products/facereader](http://www.noldus.com/human-behavior-research/products/facereader))
in our research into human observation in
context aware computing. On the sensing side it would be interesting to see
how this compares. In our experience, the observation goes quite well as long
as you have enough pixels to work with, and good lighting on the face. Whether
you can get that depends on the use case. For instance, when we worked with
our national public broadcaster on capturing and deriving emotion and
intention awareness in television viewers, things were fine in a 'laboratory'
setting, but 'real living room conditions', where you get dark, grainy, low-
quality images from typical television 'Skype' cams, were far more challenging.
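The "enough pixels and good lighting" constraint can be expressed as a simple pre-flight check before running any expression analysis. The thresholds below are invented for illustration, not numbers from any vendor:

```python
def usable_for_face_reading(width, height, gray_pixels,
                            min_face_px=48, min_brightness=60):
    """Rough gate: reject face crops too small or too dark to analyze.

    `gray_pixels` is an iterable of 0-255 grayscale values for the crop.
    Both thresholds are illustrative assumptions, not vendor defaults.
    """
    if width < min_face_px or height < min_face_px:
        return False  # not enough pixels to resolve muscle movements
    vals = list(gray_pixels)
    if not vals:
        return False
    mean_brightness = sum(vals) / len(vals)
    return mean_brightness >= min_brightness  # too dark -> grainy, unreliable
```

A gate like this is how you would distinguish the laboratory case from the dark, grainy living-room webcam case before trusting any emotion scores.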

------
Tinyyy
Direct link:

[https://www.projectoxford.ai/emotion](https://www.projectoxford.ai/emotion)
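For reference, the Emotion API was a plain REST endpoint at the time. This sketch only assembles the request without sending it; the endpoint path and header name are recalled from the 2015 preview docs and should be treated as assumptions:

```python
def build_emotion_request(subscription_key, image_bytes):
    """Assemble the pieces of a Project Oxford Emotion API call.

    URL and header names reflect the 2015 preview and may have changed
    since; no network call is made here.
    """
    return {
        "url": "https://api.projectoxford.ai/emotion/v1.0/recognize",
        "headers": {
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        "body": image_bytes,  # raw image bytes, per the preview docs
    }
```

The response was a JSON list with one entry per detected face, each carrying a face rectangle and per-emotion scores.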

------
sdrothrock
If this is meant for players, I wonder how effective it would actually be, and
how much of the gaming population it would actually affect.

When I'm reading, watching a movie, or playing a game, I'm generally pretty
stone-faced with little to no actual reaction to reflect what I'm thinking as
I consume the media in question.

~~~
harperlee
But the fact that you are stone-faced during those activities is because
nothing reacts to your face. If you jump on Skype you're not stone-faced
anymore, because you know you can express yourself through facial reactions. If
your game were to get more interesting whenever you put on a bored face, that
would _very quickly_ end your stone face! You wouldn't even notice that
you're asking for more difficulty; your dopamine shots will do the work for
you :)
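The feedback loop described here could be as simple as nudging difficulty while a "boredom" score stays high. The score field and thresholds below are invented for illustration, assuming the emotion model emits a boredom value in [0, 1]:

```python
def adjust_difficulty(difficulty, boredom_score, threshold=0.6, step=0.1):
    """Raise difficulty while the player looks bored, ease off otherwise.

    `boredom_score` is an assumed model output in [0, 1]; threshold and
    step size are illustrative tuning knobs, clamped to [0, 1].
    """
    if boredom_score > threshold:
        return min(1.0, difficulty + step)
    return max(0.0, difficulty - step)
```

Run once per sampling interval, this is the loop where the player "asks for more difficulty" without ever noticing they did.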

