The blog post you link to doesn't seem to make that claim. It talks about how they were unable to purchase it because the company only wants to work with big studios, and it's not a prepackaged library you can simply license and drop in.
"The animation in games that use Euphoria bears little resemblance to the slick, organic movement in their tech demo video. For example, here is a video of how fighting looks in an Indiana Jones game using Euphoria. Similarly, in The Force Unleashed game, the only physically-based behavior I could see is magnetic hands -- the stormtroopers' hands stick to anything they encounter. Penny Arcade even made a comic parodying their indiscriminate grasping. Here is the official Lucasarts video showing off this dubious 'feature':"
Their PR video is not an accurate representation of how the technology works.
If you want a statement from the programmers themselves, I can even provide that.
"IGN: Once the video hit the net, there was a lot of speculation as to whether or not that footage was real-time, a target render, etc. Can you set the record straight and let everyone know specifically what it is, what it's built on, and when you created it?
Blackman: One of the ideas the next-gen Star Wars team has been exploring is the concept of the "Force unleashed." To us, the "Force unleashed" is exactly what it sounds like: a Jedi or similar character releasing the full potential of the Force in ways that, while they feel like logical extensions of powers we've already seen, are also new, amped-up, or over the top. The video was created a little over a year ago, still very early in pre-production, so that the development team could wrap our collective heads around the concept and understand the gameplay and production implications.
The video is a pre-rendered pre-visualization of what we're targeting in terms of gameplay, the degree of interaction with the environment, and character reactions."
Just saying, because I haven't touched machine learning in the last seven years and think it may be time to catch up a bit.
If I had to go over several terabytes of images, manually segmenting them like that, I'd need to live a thousand lives.
In any case, manually segmented data from my lab has already gone to Sebastian's lab. They are testing their convolutional networks on it -- I hope they get them to work.