The Next Billion Dollar Disruption, Real Time Animation (inc.com)
76 points by aparashk on Nov 4, 2018 | hide | past | favorite | 36 comments


I ran a VFX company before. The author is clearly a hobbyist at best and has no experience in digital production.

Studios aren't clueless, this is their business. They take advantage of every hardware upgrade, have already expanded into the cloud, and the bigger companies have direct R&D access to nvidia's engineers to develop the next generation. Everyone wants faster graphics and lower costs, and they already use real-time rendering everywhere, especially in editing.

Yes, better hardware has made it easier for more people to produce more things, as evidenced by the exploding world of Youtube creators making high-end productions. However there is an immense difference in overall quality between those videos (which can easily climb into 6 figure costs) and the big budget studio productions that fill cinemas.

Also graphics are a tiny part of the overall product. The rest is the creative process, building the actual scenes, and making it all look and feel right. There's no magic button to transfer your imagination into the computer, and that's the majority of the effort, along with actually coming up with a good story in the first place.


Yeah, the author insinuated that stock 3D libraries were also an advantage. Anyone who's worked in animation or video games knows you can't generally just grab a bunch of assets from a store, put them in a work, and expect them to look like they belong to the same world. You're going to have to do some shading work at a minimum.

Unless you happen to make movies that share very similar styles, as Pixar does; it already has a massive digital library of assets to pull off the rack whenever it wants.

Another thing to look at is "Smash and Grab": the original talk describes an internal Pixar project that gave a small team a really small budget and timeline, no bureaucracy, but access to Pixar tech, to make a quality short film, and it's doing pretty well.


I will make a few speculations:

1) ML algorithms will be able to adjust stock art to have a common look. Think Multiple Master fonts on steroids.

2) Common themes will develop in stock art libraries. In other words, I'll be able to pick a world, have a wide array of objects, and it will all fit together pretty well.
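Speculation (1) already has a crude classical ancestor: histogram matching, which remaps one image's value distribution toward a reference's. A toy sketch, purely illustrative (a real ML approach would be far more sophisticated):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source values so their distribution matches the reference's.

    A crude stand-in for the kind of 'common look' adjustment speculated
    above: the output keeps the source's structure but adopts the
    reference's tonal distribution.
    """
    s_values, s_counts = np.unique(source.ravel(), return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each unique source value, take the reference value at the same CDF position
    mapped = np.interp(s_cdf, r_cdf, r_values)
    idx = np.searchsorted(s_values, source.ravel())
    return mapped[idx].reshape(source.shape)

# Demo: a dark "source" asset remapped toward a bright "reference" asset
rng = np.random.default_rng(0)
source = rng.integers(0, 50, size=(64, 64)).astype(float)
reference = rng.integers(100, 200, size=(64, 64)).astype(float)
matched = match_histogram(source, reference)
```

Per-channel color matching against a "world palette" would be the simplest version of the stock-library harmonization imagined above.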

Will this be as good as a Hollywood-budget animation? Almost certainly not. Will this be good enough, given good plot and writing? I suspect so.

Will we see fifty different movies with the same-looking characters playing different roles? Almost certainly. It will be almost like having the same actor play Tarzan AND the Terminator AND a police officer.

Will we see animation well behind the state-of-the-art? Almost certainly. On the other hand, I'm perfectly okay watching cartoons from 50 years ago, and I think we can do much, much better.

I think the major problem right now is that to make a movie, you have to be competent at plot/writing, animation, distribution, voice acting, etc. That takes a movie studio.

Digital platforms are already taking over distribution. Now, a group of five friends can make a high-impact video. As other parts of the channel get subsumed by advances in technology, eventually it will be down to one person in a room with a good idea. That will be transformative.


Indeed. It looks like the author is only focusing on realtime rendering.

Yes, realtime rendering will save time and money but this will only be a fraction of the cost.

And realtime rendering of complex scenes is still some years away.


This is especially visible in long-running anime series. They are often based on a manga as source material. Eventually the anime catches up with the manga, and to avoid that, the anime has to be stretched with "filler content".

Producing an anime series requires a far, far bigger team and is significantly more expensive than a manga, so why does this happen in the first place? Why can't they just add more people to work on the manga? The artists are not the bottleneck; the authors of the story are. They have to come up with an original story, and that's not the type of problem where adding more people helps.


For many years, my stock answer to anyone who says that computers are "fast enough" or "more powerful than most people need" is that I will only be satisfied when I can create photorealistic movies by talking to my machine.

The session could go something like this:

Create a scene: London docks, around 1919, mid-afternoon. There are a few ships being offloaded, crates stacked around, trucks and horse carts transporting freight.

<scene is created based on historical photos, ML readings of period books, etc.>

OK, make it grimier, more soot on everything. And make the season wintertime. Add more stevedores. The ship in the foreground is red, weathered, and flying a German flag.

<changes are made to the scene>

OK pull back and add a pub on the right side of the screen. Let's make it dusk. Sun is setting. Get some good light spill when the door of the pub is opened.

<scene updates>

Let's work on the pub's sign. It's called the Red Swan. Make a sign carved in the shape of a swan, paint it red. Smaller, and let the wood show through where some of the paint is weathered away. Hang it over the door.

<scene updates>

etc.

This should all become possible eventually, and probably a lot sooner than many people think.


Definitely not within the 5-10 year timeframe that the article quotes though... if a computer can follow completely open-ended instructions that well in a decade then we'll be living in an entirely new world, and making movies will frankly be the least of our concerns.


So like, studios and FX shops like Pixar and ILM are very much aware of these techniques, because they are leaders in them (they closely partner with Nvidia on things like the super impressive real-time ray tracing of the RTX GPUs, for example), and they develop prototypes and working production applications of these things. The technology to replicate cinema-quality graphics at lower cost has existed for a while, at pretty much consumer-level costs in fact (if you know what you are doing). But if you have $100m to spend, you're gonna spend it to keep ahead of what is already possible.

But I think my take after reading this is that what he sees as a bomb waiting to go off is in reality a gradual evolution that literally everyone in animation/film sees coming. I have spoken to a bunch of people in the space, and have been in the space myself, and have run into absolutely no one who is a non-believer (unlike the people who doubted the iPhone would take off, for example).

Also let's be real here, we are talking about entertainment art, and from what I've seen, most people suck at telling stories. Cost hasn't really been a barrier for a long time, with low-cost, damn good cameras and free distribution platforms like Youtube. Hell, the pilot used to pitch It's Always Sunny in Philadelphia was famously recorded on a home camcorder. People don't go from distributing their own stuff to pitching studios for technology resources they don't have access to; they do it for the marketing machines and higher-revenue distribution channels. The up-level in quality is just a bonus if you have a good story.


Yeah I don’t think this author has any idea what he’s talking about. His argument basically is that real-time animation and motion capture will, what, replace what Disney and Pixar are doing?

What people like this don’t realize is the degree to which every iota of detail in a Disney (Pixar, Dreamworks, Sony, etc.) production is art directed and stylized. Every shot is cheated to camera (meaning the poses don’t necessarily read well in 3D, but look spectacular and read clearly from the locked camera), at least to some degree. That’s why these movies are so fun to watch. Even something like Rick and Morty has stylized animation and its own particular motion language. To think you could replicate this with generic asset libraries and motion capture is beyond ignorant.

It’s particularly frustrating, as somebody who was once a professional animator, to hear this guy talk about how this technology will revolutionize the field and casually mention that he was “considering becoming a professional animator” when a look at his work shows such limited knowledge of the craft. It belittles those who have devoted their lives to mastering this deep skill.


This is what professional typesetters (etc.) said when these ridiculous DTP machines came around.

One of the hallmarks of a disruptive technology is that it's crap compared to what the entrenched technology does. Not just seems like crap, but really significantly worse by any objective standard.

So you are 100% correct in your assessment, and at the same time very possibly missing the point.


This isn't disruptive; it's evolutionary at best. Red vs. Blue was a successful web series 15 or so years ago, made entirely from screen recordings of Halo 1. Look up "machinima". As others have pointed out, the lack of truly great artists/inspiration seems like the biggest barrier to producing entertainment. Look at season 1 of South Park: good content beats flashy trimmings every day of the week.

If you've never seen Red Vs Blue before, check out this scene:

https://youtu.be/9BAM9fgV-ts


They used Halo only in the very beginning; later they switched to more traditional tools: https://www.smithmicro.com/company/news-room/press-releases/...


The examples the author uses and claims are “almost real” are in no way almost real. Comparing his short film to Disney's art is a joke; they're nothing alike. A single still from Disney's Moana was something like 15GB+ of scene data. Try doing that in real time.


The recently released Nvidia RTX Quadro cards can actually handle a scene of that size with ray tracing in real time, just FYI, and Disney/Pixar are using them now.


The author probably does not realize that Moore's Law has really slowed down and the geometric increase in 3D rendering power is unlikely to continue as fast as in the past. Not to mention that you need ever more graphics processing power to make smaller and smaller improvements in realism. I therefore suspect it will take longer than anticipated to get to the cinematic quality that he hopes for at the price needed for the transformation he anticipates.


For CPUs, yes; for GPUs we have seen massive progress even just this year. The Nvidia RTX cards do actual real-time ray tracing: not some demo, but an actual working product that can handle massive scenes. I saw a Pixar RenderMan talk where they used an Nvidia RTX Quadro to ray trace the entire city in Coco in real time.

Less about raw power, more about specialization. There is a lot of room still in specialization. In graphics, with things like ray-tracing, and in AI with things like the TPUs.


Moore's law is about transistor density, not clock rate, floating point operations or anything else.

CPUs have risen in transistor count too, they just use the transistors to speed up serial general purpose computations as much as possible.

Moore's law is not about benchmarks.


Seems like we really are going to be stuck at around 5GHz for CPUs (we will see how many cores) and <20TFlops in GPUs, whereas for realtime animations we would need a ~1,000x speedup :-(
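A back-of-envelope check on that ~1,000x figure, using an assumed (not sourced) per-frame render time:

```python
# How far is offline film rendering from a real-time frame budget?
# The render time below is an illustrative assumption, not a measured number.
frame_render_seconds = 60.0      # suppose a film-quality frame takes ~1 min today
target_fps = 24                  # cinema frame rate
realtime_budget_s = 1.0 / target_fps
speedup_needed = frame_render_seconds / realtime_budget_s
print(f"speedup needed: ~{speedup_needed:,.0f}x")  # ~1,440x
```

So even under a generous one-minute-per-frame assumption, hitting a 24 fps budget needs roughly three orders of magnitude; real production frames often take far longer.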


I would say the fallacy is the author's assertion that studios don't know about realtime techniques, or that they aren't already applying motion capture and GPU renderers internally.


I remember watching the great documentary "Good Copy Bad Copy." The most memorable fact from that movie is that Nigeria has the third largest film industry in the world, and the film added that Nigerians are making films much more relevant to people from the African diaspora than Hollywood ever will; Black Panther is the exception to the rule, and it took this long because Hollywood couldn't move any faster. I don't doubt that Hollywood is aware of this shift and will use these technologies, but the author's point, that there is a lot of overhead to jettison to compete in the future, is not something to ignore.


Today you can get 3D motion capture in pretty much real time just from your iPhone videos: https://getrad.co (not affiliated with them, I just like the app)


Already, real-time animation devices allow cartoon characters to "live." Systems such as Vactor and Alive propel toons onto talk shows and into interactive installations at theme parks.

That was from 1995. https://www.wired.com/1995/12/new-hollywood/


The article inspired me to a couple of hours of research into the field. I feel it helps to see what's currently possible on a relatively low budget. This demo I came across uses an Xsens body suit for capture (around $7,000, plus a yearly subscription; prices aren't public) and an iPhone X for real-time facial capture using ARKit.

To my amateur eyes, it looks superb, and better than many demos in terms of the sync/accuracy of the facial expression and speech.

https://www.youtube.com/watch?v=i51CizUXd7A

It's also quite charming and amusing. Here's a background article, explaining the approach.

https://uploadvr.com/iphone-xsens-performance-capture-bebylo...


That demo was really cool (had the pleasure of seeing it live), and the Kite & Lightning guys are really great at these kinds of hacks. Keep in mind while this setup suits their needs for video-game class animation, the iPhone only supports like 56 blend-shapes (facial expressions), which wouldn't cut it for film.
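For context, a blend shape is just a weighted sum of per-vertex offsets on top of a neutral mesh, with the capture system streaming one weight per shape per frame. A minimal sketch with hypothetical toy data (this is not ARKit's actual API):

```python
import numpy as np

def apply_blendshapes(neutral, shape_deltas, weights):
    """Deform a neutral mesh by a weighted sum of blend-shape offsets.

    neutral:      (V, 3) array of vertex positions
    shape_deltas: dict mapping shape name -> (V, 3) per-vertex offsets
    weights:      dict mapping shape name -> weight, typically in [0, 1]
    """
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * shape_deltas[name]
    return mesh

# Two-vertex toy "face" with a single jaw-open style shape
neutral = np.zeros((2, 3))
deltas = {"jawOpen": np.array([[0.0, -1.0, 0.0],
                               [0.0, -0.5, 0.0]])}
half_open = apply_blendshapes(neutral, deltas, {"jawOpen": 0.5})
```

With only a few dozen such shapes per face, subtle film-quality expressions are out of reach, which is the limitation being described here.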


On a somewhat similar theme, but far more in depth and exploratory, is the video 'Goodbye Uncanny Valley' by Alan Warburton - https://vimeo.com/237568588


A bit of a weak article for the HN front page, but it's good enough to start the discussion.

Yes, the cost of producing quality content will drop drastically. Yes, local improv groups will start making epic space operas and detailed historical reenactments.

One thing didn't get mentioned: interactivity. If you're creating content in real time, you can customize it for a specific audience. You can respond to audience questions, give more screen time to a favoured character, and even alter the plot based on voting. What about crowd-sourced animated content?

The technology is already good enough (check Adobe Character Animator), but we still need good digital theatre software.


Neill Blomkamp of District 9/Elysium fame has released some shorts made with the Unity engine and was pretty effusive about the benefits of using a real-time engine for animation: https://unity.com/madewith/adam

That said, the results are still quite far from studio CG animation. I wonder whether the market for cheaply produced animation is really as large as the author thinks. I think he's underestimating the primary value add of studios, which is marketing and distribution.


Off Topic Questions for All those working in FX and Studios etc

Are you all in on Nvidia? Are there any shops using AMD? Every game / FX outfit seems to mention that they partner with Nvidia, and I have yet to read about much professional use of Radeon. I understand that for data science and machine learning it is all CUDA, and that likely won't change any time soon. But even for graphics, AMD doesn't seem to be doing well in the professional space.


Both companies are used, but nvidia just has better support and better drivers. GPUs are mostly used in workstations for real-time renders, lighting previews, and physics simulations. That stuff works well with CUDA and its programmability.

Render farms are different, and still CPU driven because of software and memory. AMD cpus will probably get more use in render farms for the price/performance, but then GPUs are also getting better at stretching memory and running other software, which might eventually switch rendering pipelines to be GPU driven.


Nvidia dominates at large studios because those studios buy prebuilt linux workstations where Nvidia's drivers make a big difference.


Now, from the people who hyped 3D TV and VR headsets, the Next Big Thing.

This is a useful technology for creators, but probably not mass market.


Too bad that everything in that industry centers around NVIDIA. It is one of the worst companies when it comes to open source: not only staying away from open source, but actively resisting any attempts to support its hardware in open drivers.


The mocap example he showed was deep in the uncanny valley for me. The disruption he envisions may be further away than he thinks because of how the uncanny valley works: the closer you get to human, the more revulsion you create.


The author is right, but not in the ways he thinks he is.

The change will be larger and even more unexpected to the people who are currently expecting it. It will be larger than even I can imagine.

Everyone in this thread can't see the forest for the trees. Tell me a story ...


Ok... care to share any actual details?


I actually agree with this comment. It's as simple as saying that the medium of the internet is still a teenager, and what can be done with it as a filmmaking tool, per se, is largely untapped.



