
Using NeuralNets to make smooth character animations - adikus
https://techcrunch.com/2017/05/01/this-neural-network-could-make-animations-in-games-a-little-less-awkward/
======
jjcm
What I'm really curious about is the latency of it. Having smooth and accurate
animations makes for a very rich experience, but in most games having your
character react instantly to your commands is more valuable. Even if it's a
100ms delay in order to accomplish this, it might still be too much, especially
for fast, action-packed games. I know in The Witcher 3 they reduced some of
the complexity in the animations to make them react quicker. What this does
sound amazing for, though, is NPCs.

~~~
flohofwoe
Exactly this. The character control in some games (e.g. GTA 4 and 5) feels laggy
and imprecise because stopping, turning, and reversing take a lot of time. Yes,
it is "natural", "realistic" and it looks nice if you're watching someone
play, but playing yourself it feels like steering an oil tanker :)

~~~
kbart
I, on the contrary, like some inertia in movement because, as you've
mentioned, it is more realistic. Sure, it takes some time to get used to
(especially if you've played Quake-style shooters for a while), but I like
inertial movement more than weightless ragdolls.

------
rl3
This is amazing. It should even be possible to have characters traverse
obstacles differently depending on their in-game skill level.

Imagine this system used to implement rock climbing. The player is simply
pressing the "W" key to go up, but depending on their character's skill level
the speed and manner in which traversal occurs could be very different.

I really hope Bethesda is paying attention to this, because open world games
could use it more than anything. In Skyrim you climb mountains by mashing the
jump button. Far from ideal, and tends to break immersion.

The tech could even be used to power melee animations some day, perhaps entire
fighting styles.

~~~
Baeocystin
Fully agreed. While Skyrim Horse Physics™ was entertaining in its own way,
something like the OP would be a huge leap forward for immersion.

It would likely have secondary effects of easing the burden on animators,
allowing them to concentrate on more important aspects of the game, like
facial interactions.

(Looking at you, BioWare.)

~~~
rl3
That's a good point. I suspect neural nets would be suitable for near-perfect
lip syncing animations that aren't too expensive.

We can already synthesize the speech itself to the point it sounds real,
although it's very computationally intensive at present. Infusing the speech
with emotion remains very difficult. Some day voice actors will probably just
license their likeness along with a sample of emotive recordings.

After that, one of the few remaining holy grails is generative dialog for NPCs
that's believable, dynamic in relation to the game world, and doesn't sound
like glorified ELIZA bot output.

Neural nets can probably be used for level design today. One use case might be
creating an entire urban environment complete with residential interiors,
where the artists don't have to slave over each individual apartment for it to
be believable. Imagine playing a war game where the levels look like people
actually live there.

~~~
Baeocystin
Speaking of emotive, synthesized speech:
[https://lyrebird.ai/demo](https://lyrebird.ai/demo)

Clearly early days, but still, the promise is there.

------
santaclaus
Ah yes, with the breaking of spring, the annual SIGGRAPH paper flood begins.

~~~
justinjlynn
A glorious time indeed

------
partycoder
The results are amazing. I don't have a trained eye for animation, but it
looks no different than motion capture.

Correct me if I'm wrong, but the state of the art prior to this would be
something like Unity's Mecanim, where you define a state machine for animation
transitions, which interpolates between animations to save you work.
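A minimal sketch of what that kind of state machine does (hypothetical names
and made-up clip data, not Unity's actual Mecanim API): each state holds a
pose, and a transition just cross-fades between source and target poses over
a fixed blend time.

```python
# Sketch of a Mecanim-style animation state machine: transitions linearly
# cross-fade pose parameters between the current and target states.
# Clip "poses" here are toy vectors standing in for full joint data.

def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

class AnimStateMachine:
    def __init__(self, clips, initial):
        self.clips = clips          # state name -> pose vector
        self.state = initial
        self.target = None
        self.blend = 0.0            # 0 = fully current state, 1 = fully target

    def transition_to(self, target):
        self.target, self.blend = target, 0.0

    def update(self, dt, blend_time=0.25):
        if self.target is None:
            return self.clips[self.state]
        self.blend = min(1.0, self.blend + dt / blend_time)
        pose = lerp(self.clips[self.state], self.clips[self.target], self.blend)
        if self.blend >= 1.0:
            self.state, self.target = self.target, None
        return pose

sm = AnimStateMachine({"idle": [0.0, 0.0], "walk": [1.0, 0.5]}, "idle")
sm.transition_to("walk")
print(sm.update(0.125))   # halfway through the cross-fade -> [0.5, 0.25]
```

The real Mecanim graph adds transition conditions, blend trees, and
per-transition curves, but the interpolation at its core is about this simple.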

~~~
stonith
You can do both 1D and 2D blendspaces in Unreal, so you end up with something
like this:
[https://80.lv/wp-content/uploads/2016/05/walktorund2.gif](https://80.lv/wp-content/uploads/2016/05/walktorund2.gif)

Very handy for character locomotion, but only gets you so far. I'm not sure
how applicable offline processing is to this though, since if you have a large
number of different types of obstacles to be traversed, you'd have to bake >=
that number of animations, and your animation state machine would be enormous.
The end goal might be to do that in real time, but I don't think anyone is
going to seriously suggest running a neural net to calculate animation
positions while the game is running. Maybe if you had some smarts in the level
designer you could determine the potential pool of animations needed based on
placed geometry, and build the animations at packaging time along with the
state machine.

~~~
partycoder
Well, the expensive part is training the network. After that, running it
should not be as expensive, depending on its size.

If that is still not enough, they can also try to look at ways of producing a
"good enough" effect with a smaller network.

Then, it can also be limited to only some characters, the ones that you are
more likely to pay attention to.

Now, the MS Kinect does some intense processing behind the scenes (random
forests) to capture your motion in real time. Yet you can still run games with
decent performance. I think a neural network to adjust animations is not too
dissimilar in terms of performance cost.
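As a back-of-envelope check (layer widths below are only roughly PFNN-like;
the hidden width of 512 is from the paper, the input/output dims are
approximate), per-frame inference in a network that size is just a few
matrix-vector multiplies:

```python
# Back-of-envelope cost of one forward pass through a small trained MLP.
# Layer widths are illustrative, not exact figures from the paper.
sizes = [342, 512, 512, 311]                # input, two hidden layers, output
madds = sum(m * n for m, n in zip(sizes, sizes[1:]))
print(f"{madds:,} multiply-adds per frame")               # 596,480
print(f"{madds * 2 * 60 / 1e6:.1f} MFLOP/s at 60 fps")    # ~71.6, trivial for a CPU
```

That is orders of magnitude below what the GPU spends on rendering a single
frame, which supports the point that inference cost is not the bottleneck.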

------
daveguy
Source paper is here:

[http://theorangeduck.com/media/uploads/other_stuff/phasefunc...](http://theorangeduck.com/media/uploads/other_stuff/phasefunction.pdf)
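For those skimming, the paper's central trick is that the network's weights
are themselves a cyclic function of a phase variable tracking the gait cycle,
blended by a Catmull-Rom spline over four sets of control weights. A
simplified sketch, with scalars standing in for the full weight matrices:

```python
# Sketch of a phase function: cyclic Catmull-Rom interpolation of control
# "weights" (scalars here; matrices in the paper) by phase p in [0, 2*pi).
import math

def phase_function(p, controls):
    n = len(controls)                    # the paper uses 4 control points
    t = (p / (2 * math.pi)) * n
    k = int(t) % n
    w = t - int(t)                       # local interpolation parameter
    y0, y1, y2, y3 = (controls[(k + i - 1) % n] for i in range(4))
    # standard Catmull-Rom cubic basis
    return (y1
            + w * (0.5 * (y2 - y0))
            + w * w * (y0 - 2.5 * y1 + 2 * y2 - 0.5 * y3)
            + w * w * w * (1.5 * (y1 - y2) + 0.5 * (y3 - y0)))

controls = [0.0, 1.0, 0.0, -1.0]         # toy stand-ins for weight matrices
print(phase_function(0.0, controls))     # at a control point -> exactly 0.0
```

As far as I can tell, the same blending is applied to every weight matrix and
bias of a three-layer network, so each frame costs one spline evaluation plus
an ordinary forward pass.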

------
speps
One of the references used throughout the paper is this:

[http://www.gameanim.com/2016/05/03/motion-matching-ubisofts-...](http://www.gameanim.com/2016/05/03/motion-matching-ubisofts-honor/)

I would always trust an actual concrete implementation rather than a research
project though... unless they have source and a demo available.

