It's amazing that we can now generate photorealistic images in seconds.
But I think it will take a few more years before studios can render the latest Avatar in realtime.
* Given, say, a $2,000 budget for a new PC (for CPU / GPU / Motherboard / Power Supply / Drives)
* Same resolution as the original movie
* Same frame rate as the original movie (24 fps?)
* You can use whatever rendering pipeline you want. It doesn't need to be RenderMan. And we explicitly encourage cheating, using modern cheating techniques where they exist.
* It does not have to be identical in output. You can cheat, but the differences need to be very minimal, even under a frame-by-frame still-image comparison (a crude version of that check is sketched after this list). Maybe blades of grass or strands of hair can be in different places, but it has to be basically the same quality. The point being: if you had handed this image to the original team who made the movie, they wouldn't have had a problem using it instead of what they actually produced.
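As a rough illustration of that frame-by-frame bar, here is a minimal PSNR check in Python. It's only a sketch: PSNR is a blunt instrument, and a perceptual metric like SSIM would match the "hand it to the original team" test much better.

```python
import numpy as np

def psnr(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit RGB frames of the
    same shape (H, W, 3). Higher means closer; identical frames return
    infinity. Roughly 45+ dB is very hard to tell apart by eye."""
    diff = reference.astype(np.float64) - candidate.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

# e.g. accept a re-rendered frame only if it is nearly indistinguishable:
# assert psnr(original_frame, rerendered_frame) > 45.0
```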
Physically based rendering is mostly about ensuring that your materials look consistent across many different lighting conditions. IIRC, something like Toy Story "cheated" by having the artists re-tune the lighting settings between scenes.
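To make "consistent under any lighting" concrete: a physically based diffuse term has to conserve energy, which is exactly what the 1/pi in the Lambertian BRDF buys you. A minimal Monte Carlo sketch in Python (the function name and numbers are mine, just for illustration):

```python
import math, random

def reflected_fraction(albedo: float, samples: int = 100_000) -> float:
    """Integrate f * cos(theta) over the hemisphere with uniform
    solid-angle sampling (pdf = 1 / 2pi; cos(theta) is then uniform
    on [0, 1)). The result is exactly `albedo`: with f = albedo / pi,
    the surface never reflects more light than it receives, no matter
    how the scene is lit."""
    f = albedo / math.pi                        # Lambertian BRDF, a constant
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()             # uniform hemisphere sample
        total += f * cos_theta * 2.0 * math.pi  # divide by the pdf
    return total / samples

print(reflected_fraction(0.8))                  # ~0.8 for any light setup
```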
Wreck-It Ralph, meanwhile, is the famous Disney movie that used a single "uber-shader" which worked in all situations across the film.
The "Principled" BSDF was mostly about making things easier for the artist to control, e.g. remapping settings to a 0-to-1 range, somewhat arbitrarily, and smoothing over other such "user interface" issues.
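One concrete example of that 0-to-1 remapping, from Burley's Disney BRDF course notes: the artist-facing roughness slider gets squared before being used as the microfacet alpha, so the visual change is spread evenly across the slider. A tiny sketch (function name mine):

```python
def ggx_alpha(roughness: float) -> float:
    """Map an artist-facing roughness slider in [0, 1] to the GGX
    microfacet alpha. Without the squaring, almost all of the visible
    change happens near the bottom of the slider's range; with it, the
    control feels perceptually even. Pure "user interface" work."""
    r = min(max(roughness, 0.0), 1.0)  # clamp to the advertised range
    return r * r

assert ggx_alpha(0.5) == 0.25  # half the slider is not half the spread
```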
Because the easiest way to "cheat" your way to realtime playback is to encode the movie to H264 and hit the play button. :-)
One thing I learned is that a lot of video-game models have baked-in shadows (especially games with a fixed 2D perspective, like fighting games). The sun is assumed to come from a particular direction, so any shadows it casts are "baked into" the texture itself, with no need to compute them at runtime.
Other shadows remain dynamic (e.g. a lamp in the background may still cast a realtime shadow), and the combination is enough to trick most people into thinking the lighting is fully dynamic.
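A minimal numpy sketch of that split (all names are hypothetical, and the shading is Lambert-only to keep it short):

```python
import numpy as np

# The sun direction is fixed at authoring time, so its diffuse term and
# shadow mask are multiplied into the texture once, offline.
SUN_DIR = np.array([0.3, -1.0, 0.2])
SUN_DIR /= np.linalg.norm(SUN_DIR)

def bake_sun(albedo, normals, sun_shadow_mask):
    """albedo: (H, W, 3) in [0, 1]; normals: (H, W, 3) unit vectors;
    sun_shadow_mask: (H, W), 1.0 where the sun is occluded.
    Returns a texture with the static sunlight pre-multiplied in."""
    n_dot_l = np.clip(normals @ -SUN_DIR, 0.0, 1.0)
    return albedo * (n_dot_l * (1.0 - sun_shadow_mask))[..., None]

def shade_frame(baked, albedo, normals, lamp_dir, lamp_color):
    """Per-frame work is a single dot product for the dynamic lamp;
    the sun's shading and shadows cost nothing: they live in `baked`."""
    n_dot_l = np.clip(normals @ -lamp_dir, 0.0, 1.0)
    return baked + albedo * n_dot_l[..., None] * lamp_color
```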
A good comparison is looking at the kinds of things being rendered in realtime at 4K/30fps on modern game consoles: even games that are a few years old look pretty comparable to 90s-era CG, and the areas that don't are constrained to things like facial animation, where the limitations are more about assets and artwork than technology.
As an aside, it seems to me there is an argument that depending on artwork here is itself a product of inadequate technology: we don't yet have a general model good enough that you can just provide a simple description and a script (perhaps with iterative refinement) and have the technology do what we currently demand of an artist.
Also, I believe they used the Blue Moon Rendering Tools (BMRT) to ray-trace Buzz's visor. BMRT was developed by Larry Gritz, who later joined Pixar. At some point they came up with a way to "farm out" specific areas of ray tracing from RenderMan.