
Racing for Realism - mariuz
https://renderman.pixar.com/stories/cars-3
======
WhitneyLand
Going to work at a competitor to Pixar in ‘96 was the first time I was
exposed to real-life computer science problems that needed more compute
power than anyone could imagine.

Between then and 2017, think about the staggering performance increases
that became available, more so than in many fields because of the
embarrassingly parallel nature inherent to many of the problems.

And yet there is still no end in sight to the compute power that could be
leveraged.

I wonder what would change if we had say, 3 orders of magnitude performance
improvements across the board starting tomorrow. What might be some of the
first practical advancements to be exploited and seen?

Let’s assume the amount of compute power that could be leveraged effectively
and practically for computer graphics is finite, however large.

How much less compute power would that be, compared to what it’s going to
take to achieve general AI? Or even a Turing-effective chat bot?

~~~
dahart
That's an interesting question, with an interesting premise.

Will realism (graphics + physics simulation) take less compute than AI for
sure?

If we look at the limit case, the reality I can see out my window has more
atoms than are in my brain. So, in theory perhaps, human-equivalent AI will
similarly require less compute than realistic graphics & physics
simulation/animation.

OTOH, evolution took billions of years and presumably depends on all the
animals and all the physics that has ever happened, so maybe inference will
always be cheaper than graphics, and maybe AI training and evolution will
always be more expensive?

I have no idea, but it's a fun thought experiment.

FWIW, I also worked for a Pixar competitor, and the renderfarm we had was
indeed more compute than I had been exposed to before. That said, we didn't
make a huge amount of effort to optimize the system; we only made sure really
stupid things didn't cause overnight renders to take longer than required for
dailies the next day. I'm absolutely certain that at least 1 order of
magnitude is available to software render farms, if the primary focus shifts
from production to optimization. I'm halfway sure that using today's and
tomorrow's GPUs could get you at least another order of magnitude, but that
comes with a second level of expensive engineering effort. In short, I think
if you _really really_ wanted it, and had the money & time to commit, you
could probably have your three orders of magnitude right now.

~~~
cr0sh
> In short, I think if you really really wanted it, and had the money & time
> to commit, you could probably have your three orders of magnitude right now.

I'd imagine something along the lines of a datacenter, filled with racks
stuffed full of these:

[https://www.nvidia.com/en-us/data-center/dgx-1/](https://www.nvidia.com/en-us/data-center/dgx-1/)

...but repurposed for graphics rendering instead of deep learning.

In fact, it wouldn't surprise me to find out something like this actually
exists (though whether for graphics or deep learning, I don't know).

Also - I wouldn't be surprised to learn that a datacenter full of D-Wave
machines exists somewhere (perhaps next door to the datacenter filled with
DGX-1 systems - I mean, something has to make sense of the data output).

Pure speculation, of course; I'd suspect we'd know about it though if it were
the case, unless it was done as a government black project...

------
pecg
Personally, I think one of the main problems with the current use of CGI
(specifically in non-animated movies) is color treatment. A comparison of
the original, unaltered version of Jurassic Park (JP) against Jurassic World
(JW) shows my point. There is an increasing tendency to change the hue and
saturation of colors in post-production, in order to convey ideas or set
moods in the film itself, but this makes CGI look faker than it is, and the
brain is able to pick up on it.

EDIT: I prefer the look of JP over JW, not because I think it has better
CGI, but because the colors look closer to reality.

~~~
munificent
Modern color treatment in film totally drives me crazy, independent of what it
does to CGI.

It seems like _every_ film coming out of Hollywood these days is processed
exactly the same way:

* Crank down the low end so half the frame is solid inky black.

* Pick one or two key colors, and jack up their saturation.

* Desaturate everything else.
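
This recipe is mechanical enough to sketch in code. Below is a purely
illustrative Python/numpy version; the function name and every constant are
invented for the example, not taken from any real grading pipeline:

    # Hypothetical sketch of the three-step grade described above.
    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    def blockbuster_grade(rgb, key_hue=0.08, band=0.06, toe=0.12):
        """rgb: float array of shape (H, W, 3) in [0, 1].
        key_hue: the one hue (in [0, 1)) allowed to stay vivid."""
        # Step 1: crank down the low end so shadows crush to solid black.
        rgb = np.clip((rgb - toe) / (1.0 - toe), 0.0, 1.0)

        hsv = rgb_to_hsv(rgb)
        # Circular distance from each pixel's hue to the key hue.
        d = np.abs(hsv[..., 0] - key_hue)
        d = np.minimum(d, 1.0 - d)

        # Step 2: jack up saturation inside the key-hue band...
        # Step 3: ...and desaturate everything else.
        hsv[..., 1] = np.where(d < band,
                               np.clip(hsv[..., 1] * 1.5, 0.0, 1.0),
                               hsv[..., 1] * 0.35)
        return hsv_to_rgb(hsv)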

In other words, make the entire movie look like an over-dramatic poster for
it. Throw any sense of naturalism or unique look out the door. Once you notice
it, you see it everywhere. Here's the top live action movies of 2016:

 _Rogue One_

Cyan and olive: [http://www.ramascreen.com/wp-content/uploads/2016/04/Rogue-O...](http://www.ramascreen.com/wp-content/uploads/2016/04/Rogue-One-A-Star-Wars-Story15.jpg)

 _Captain America: Civil War_

Yellow and red:
[http://cdn3-www.superherohype.com/assets/uploads/gallery/civ...](http://cdn3-www.superherohype.com/assets/uploads/gallery/civil-war-trailer-2-screenshots/cwttss03.jpg)

Why they felt the need to make Cap's lip rosy red is beyond me. Even better is
this shot:

[http://www.theglobaldispatch.com/wp-content/uploads/2015/11/...](http://www.theglobaldispatch.com/wp-content/uploads/2015/11/Captain-America-Civil-War-cast-photo-Renner-Johansson-Evans-Sebastian-Stan.jpg)

This is a _daylight_ shot. Look at how the characters are all squinting. Look
at the big white fluffy clouds that should be diffusing the light and
softening shadows. And yet, magically, all of the shadows are jet black and
almost all of the color is gone.

 _The Jungle Book_

Green and red (to draw attention to Mowgli):

[https://www.moviedeskback.com/wp-content/uploads/2016/02/The...](https://www.moviedeskback.com/wp-content/uploads/2016/02/The_Jungle_Book_HD_Screencaps-23.png)

 _Deadpool_

Another daylight shot with nonsensical shading:

[http://turntherightcorner.com/wp-content/uploads/2015/12/Dea...](http://turntherightcorner.com/wp-content/uploads/2015/12/Deadpool-Movie-Screenshot-Brianna-Hildebrand-Negasonic-Teenage-Warhead-Texting.jpg)

The light is clearly diffuse, if you look at the shadows on the ground, and
yet there is this deep black everywhere. Deadpool took the look further by
only picking one color (red, Deadpool's outfit color) for the entire film.

Lest you think this is just an issue with superhero movies, let's ignore those
and look at the other top non-superhero movies of 2016:

 _Fantastic Beasts and Where to Find Them_

More of the same:

[http://harrypotterfanzone.com/wp-content/2016/09/fantastic-b...](http://harrypotterfanzone.com/wp-content/2016/09/fantastic-beasts-trailer-screenshot75.png)

It also suffers strongly from the "It's a period movie so we're going to sepia
tone everything so hard it looks like you're watching the movie through an
aquarium full of pee" effect.

 _Hidden Figures_

[https://www.themarysue.com/wp-content/uploads/2016/11/hidden...](https://www.themarysue.com/wp-content/uploads/2016/11/hiddenfigures.jpg)

More of the period movie sepia along with an improbably jacked up
complementary cyan.

 _Star Trek Beyond_

If the past is warm, the future must be cool:

[http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Tr...](http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Trek-Beyond2.jpg)

Blacks, grays and neon cyan, just like Rogue One. Look how unnatural his skin
tone is! They wanted to jack up yellow to complement their primary blue so
much that the poor dude has jaundice:

[http://cdn.wegotthiscovered.com/wp-content/uploads/2016/05/s...](http://cdn.wegotthiscovered.com/wp-content/uploads/2016/05/star-trek-beyond-trailer-cover.jpg)

 _La La Land_

I'll give this one some credit for varying it up a bit. Also, dramatic spotlit
stage lighting is a logical part of the movie's look. But it still falls
prey to the same cliched look in places:

[http://henrymollman.com/content/images/2017/01/Screenshot-fr...](http://henrymollman.com/content/images/2017/01/Screenshot-from-2017-01-18-22-36-49-1.png)

 _Ghostbusters_

Ectoplasm green and spooky violet were the key colors for the whole film.
Secondary colors tend to come across as magical or eerie to viewers since
they're different from the safe and familiar primary colors. Of course,
jacking up magenta doesn't do flattering things to skin:

[http://www.ramascreen.com/wp-content/uploads/2016/03/Ghostbu...](http://www.ramascreen.com/wp-content/uploads/2016/03/Ghostbusters2.jpg)

Also, what color is Melissa McCarthy's coat? Is it actually black? Who fucking
knows.

 _Central Intelligence_

The traditional comedy look is brightly lit and saturated, so Central
Intelligence didn't fare quite as poorly as the above. Whenever they had a
chance to dim the lights, though, they reverted to the same cliched look as
well:

[https://i2.wp.com/doblu.com/wp-content/uploads/2016/09/centr...](https://i2.wp.com/doblu.com/wp-content/uploads/2016/09/centralintelligence2944.jpg)

 _The Legend of Tarzan_

[http://images3.static-bluray.com/reviews/14209_5.jpg](http://images3.static-bluray.com/reviews/14209_5.jpg)

Jesus, we get it, colorist. They're in the wet jungle. We don't need to be
physically assaulted with the colors blue and green to figure that out.

...You get the idea.

Go back and watch a movie from before 2000 and you'll quickly realize how
much more _pleasant_ to look at most of them were, and how much more variety
there was in look. Compare Alien to The Graduate. But it seems like ever
since the
rise of digital coloring and superhero movies, Hollywood has decided every
single fucking movie needs to look like a comic book page, whether it needs it
or not.

One recent movie I really liked that bucked this trend was Her. It was
_heavily_ colorized, but in a way that stood out. In shots like these, even
though there's a strong color cast and a lot of dark, there's still always
some color and detail in the shadows:

[https://i1.wp.com/turntherightcorner.com/wp-content/uploads/...](https://i1.wp.com/turntherightcorner.com/wp-content/uploads/2013/12/Her-Movie-2013-Screenshot-Theodore-Apartment.jpg)

[https://3kpnuxym9k04c8ilz2quku1czd-wpengine.netdna-ssl.com/w...](https://3kpnuxym9k04c8ilz2quku1czd-wpengine.netdna-ssl.com/wp-content/uploads/2014/02/her-screenshot-14.jpg)

~~~
toomanybeersies
I think that part of the reason this is happening is the move to digital
cinematography, which makes this kind of adjustment from raw video files much
easier.

Also, moving to digital allows you to basically configure your own "film look"
rather than relying on the filmstock that you're feeding into your cameras.

Digital also tends to look a bit flatter and more desaturated than film in
general. So to make up for that, they use strong direct lighting to cast
shadows all over the actors (it's really noticeable on the face) to give
depth to the image.

This Star Trek example shows it well: [http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Tr...](http://www.ramascreen.com/wp-content/uploads/2015/12/Star-Trek-Beyond2.jpg)

There's a massive stage light sitting right off to the left of him.

------
mc32
Nice, I like the wear and aging element it affords the renderings.

This makes me wonder if at some point realism will extend to movement,
whereupon a studio begins to shoot (render) feature films involving digital
"people" as costs undercut actors. Will they be able to cultivate favorite
"actors", so that you see an "actor" in different and disparate roles? In
other words, create interest in and a following for a constructed "actor".

~~~
yoavm
Reminds me of "The Congress" by Ari Folman: "Robin Wright is an aging actress
with a reputation for being fickle and unreliable, so much so that nobody is
willing to offer her roles. [...] Robin agrees to sell the film rights to her
digital image to Miramount Studios in exchange for a hefty sum of money and
the promise to never act again. After her body is digitally scanned, the
studio will be able to make films starring her, using only computer-generated
characters."

[https://en.wikipedia.org/wiki/The_Congress_(2013_film)](https://en.wikipedia.org/wiki/The_Congress_\(2013_film\))

~~~
qbrass
[https://en.wikipedia.org/wiki/Simone_(2002_film)](https://en.wikipedia.org/wiki/Simone_\(2002_film\))

"When Nicola Anders, the star of out-of-favor director Viktor Taransky's new
film, refuses to finish it, Taransky is forced to find a replacement.
Contractual requirements totally prevent using her image in the film, so he
must re-shoot. Instead, Viktor experiments with a new computer program he
inherits from late acquaintance Hank Aleno which allows creation of a
computer-generated woman which he can easily animate to play the film's
central character. Viktor names his virtual actor "Simone", a name derived
from the computer program's title, Simulation One."

~~~
IncRnd
This actually happened (it's not a movie plot):

[http://www.criticalcommons.org/Members/kellimarshall/clips/A...](http://www.criticalcommons.org/Members/kellimarshall/clips/Astaire_DirtDevil2.mp4/view)

"The late Fred Astaire 'dances with' a Dirt Devil vacuum cleaner. This
controversial ad, okayed by Astaire's daughter but protested by his widow,
inspired "the Astaire Bill," which was passed in 1999 to 'eliminate the
exceptions and place the burden of proof on those using celebrity images,
forcing them to show that their use is protected by the First Amendment.'"

~~~
qbrass
I remember that commercial, and Audrey Hepburn's chocolate commercial. And
Bill Clinton's role in Contact.

More recently there was also some news about actors having to double check
their contracts to make sure they didn't sign away the rights to 3D scans of
them being used outside of the film they were in.

~~~
IncRnd
> _More recently there was also some news about actors having to double check
> their contracts to make sure they didn't sign away the rights to 3D scans
> of them being used outside of the film they were in._

Yikes! An actor could work once and never work again.

~~~
jerf
If that sort of thing becomes commonplace, that won't be an issue, because
they'll never work _at all_, and acting will slowly but surely cease to
exist as a profession at the high end. See the comments upthread about
Hatsune Miku and such; Japan is much farther down this road than the US is.

------
icc97
It was all super impressive until the last photo, the sunset shot over the
bridge; there seems to be something artificial about it. Possibly the blur
of the car on the far side, but it's more than that. I'm just not sure what.

~~~
TheRealPomax
One problem is that if you don't live at the same longitude as where this shot
is simulated, the colours are literally going to look wrong, giving you an
overall feeling of "there's something off about this and I don't know what".
It's the same reason why you can regret buying a beautiful jacket in Norway,
only to discover it looks "completely different" when you get back to your
homestead in Wisconsin.

~~~
rmcpherson
Why would longitude affect an image like this? Can you elaborate?

~~~
dmd
They meant latitude.

------
fwilliams
Since the article doesn't go into a whole lot of technical detail about the
shading technique, here's the Pixar whitepaper describing it:
[https://graphics.pixar.com/library/BumpRoughness/paper.pdf](https://graphics.pixar.com/library/BumpRoughness/paper.pdf)

------
acd
Maybe there will be artificial intelligence painters for CGI movies? What I
mean is an AI that learns from real photos of similar objects and paints the
computer-generated images in a similar style. Deep learning AI can already
colorize black-and-white images quite well.

Maybe the artificial CGI look can be seen as an art form in itself, and 100%
realism is not the end goal for children's movies.

------
kwcto
At a glance, there appears to be almost no rendering performance difference vs
standard normal mapping. Seems like it would be feasible in realtime engines.
Baked normal maps are commonplace now in leading game titles. AR/VR
stereoscopic views would make anisotropic surface effects, diffraction, etc
even more obvious for those micro details and really help sell the illusion.
Very interested in seeing Cars 3 in 3D to test this hypothesis...

~~~
taw55
It’s an additional texture fetch on the GPU; it’s virtually free if you
discount storage costs. It’s been in games for years.

~~~
jayd16
An additional texture fetch is by no means free, but when the alternative
is 100x SSAA you can get away with a lot.

------
dmos62
I swear for about 3 minutes I thought the title was "Racism for Realism". I
said to myself, of course, Pixar. Can't have a believable fairytale without
it.

------
im3w1l
> So, problem solved, right? Not entirely. Though these microfacet approaches
> take surface orientation into account and give us realistic results while
> shading, the problem is that they don’t take into account the geometric
> microfacet details contributed by Bump or Displacement mapping which can end
> up inside a single pixel and get filtered out

Could someone explain this in more detail?

~~~
LoSboccacc
Check the bump and bump+roughness disk platter image.

Basically, with the previous approach the scratches done via displacement
mapping weren't filtered through surface roughness. The new approach applies
the roughness filtering on top of the displacement map, so even displaced
geometry participates in the roughness map. In a physically based rendering
pipeline, this is basically a swap between the bump and roughness filters.
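
To make the "fine geometry becomes roughness" idea concrete: the Pixar
paper has its own formulation, but an older, well-known cousin of the trick
is Toksvig's normal map filtering, where the shortening of an averaged
normal measures how much sub-pixel detail was filtered out, and the specular
lobe is widened to compensate. A rough Python sketch (emphatically not
Pixar's algorithm; names and values are illustrative):

    import numpy as np

    def toksvig_spec_power(normals, spec_power):
        """normals: (N, 3) unit normals falling inside one pixel footprint.
        Returns a reduced Blinn-Phong power, i.e. a rougher highlight."""
        n_avg = normals.mean(axis=0)
        length = np.linalg.norm(n_avg)   # < 1 when the normals disagree
        ft = length / (length + spec_power * (1.0 - length))
        return ft * spec_power           # lost detail -> broader highlight

    # A flat patch keeps its highlight; a scratched patch gets rougher.
    flat = np.tile([0.0, 0.0, 1.0], (4, 1))
    scratched = np.array([[0.3, 0.0, 0.954], [-0.3, 0.0, 0.954],
                          [0.0, 0.3, 0.954], [0.0, -0.3, 0.954]])
    print(toksvig_spec_power(flat, 100.0))       # ~100: nothing filtered out
    print(toksvig_spec_power(scratched, 100.0))  # ~17: detail became roughness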

------
kakarot
That was a good read. I'm excited to see these gains trickle down into video
game rendering. A 35% gain on normal mapping performance without any user
intervention would be huge.

Next step is to explore a similar method with parallax mapping so that we
can see some of these gains transfer to VR titles.

~~~
taw55
Parallax mapping will not benefit from this, since the limiting factor there
is performing a raymarch on a heightfield to find an exact intersection. At
every step along the ray you need to test whether you are inside or outside
of the heightfield. This means offsetting the UV coordinate (the xy position
inside the texture) by the ray vector, and then using that coordinate to
sample the texture again, to check whether or not you penetrated the
heightfield. The number of texture lookups quickly becomes the bottleneck,
especially on large textures, since they incur cache misses. To give you an
idea: for every parallaxed pixel on the screen, the heightmap texture might
be looked up several dozen times. You don’t get anywhere near subpixel
accuracy before performance grinds to a halt. Parallax mapping is view
dependent, so roughness mapping, even if somehow applicable, would need to
be highly anisotropic for it to work, which means a huge storage cost.
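
For reference, the loop being described looks roughly like this toy Python
stand-in, where sample_height() plays the role of the dependent GPU texture
fetch and every iteration pays for one of them (all names and step counts
are illustrative):

    import numpy as np

    def parallax_march(heightmap, uv, view_dir, num_steps=32):
        """heightmap: 2D array with heights in [0, 1]; uv: entry point on
        the surface; view_dir: (du, dv, dz) with dz < 0, scaled so the ray
        spans the full height range over the march."""
        def sample_height(u, v):   # the expensive part: one texture lookup
            h, w = heightmap.shape
            return heightmap[int(v * (h - 1)) % h, int(u * (w - 1)) % w]

        step = np.asarray(view_dir, dtype=float) / num_steps
        pos = np.array([uv[0], uv[1], 1.0])   # start above the heightfield
        for _ in range(num_steps):            # num_steps fetches per pixel
            pos += step
            if pos[2] <= sample_height(pos[0], pos[1]):
                return pos[:2]                # crossed into the surface
        return pos[:2]                        # no hit at this resolution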

~~~
kakarot
> for every parallaxed pixel on the screen the heightmap texture might be
> looked up several dozens of times

I didn't realize it was on the order of dozens. Just to clarify, we're talking
pixels and not texels, right? This isn't dependent on the resolution of the
map?

> roughness mapping, even if somehow applicable would need to be highly
> anisotropic for it to work

I didn't think about that, I guess you're right. How huge, exactly? Seems like
something you could compress very well if you combine textures into a larger
megatexture.

------
geekamongus
Why do we try to recreate reality instead of maintaining some amount of
fantasy/imagination/unrealism?

When does it make more sense to simply photograph a car than to try and make a
digital image of a car look as real as possible?

~~~
dahart
To your second question, it makes sense to photograph real cars when you want
a movie of real cars doing normal & safe car things. If your car has eyes and
a mouth and talks, then photography might not be an option.

For non-Pixar movies, if the cars are racing and crashing, then for safety it
often makes more sense to go digital. Digital is very often much cheaper than
photography as well. It's extremely expensive to take a movie crew out into a
city to film real cars, and it can get more expensive depending on the venue
and the crew size and the number of shots.

To your first question, why not is an equally valid question. Realism is a
choice, and fantasy/unrealism is also a choice. People employ both choices all
the time, so it's not one or the other, and Pixar's quest for realistic
shading has no bearing on what other people choose. But in general, realism is
good for believability, it makes things look tangible and can increase the
viewer's emotional connection to a story. Sometimes non-photorealistic
animation can break the viewer out of a story, and sometimes (depending on the
story & the environment & details) non-photorealism can enhance a story. It
all depends on what the story is, and how good it is.

------
LoSboccacc
Very interesting article. I remember being mesmerized by the old track
pannings in Cars and thinking how far CGI had come; reading this afterward
provides an excellent background to that feeling.

------
debt
It's crazy to think that someday soon we will have assembled actors.

People who don't actually exist, who appear to be human but are CGI, and
whom the audience prefers over real, living humans.

~~~
glitcher
This idea has been explored at least a few times in movies:

S1m0ne (2002)
[http://www.imdb.com/title/tt0258153/](http://www.imdb.com/title/tt0258153/)

------
contravariant
Why is this page hijacking the scroll wheel? As far as I can tell there's
literally no point in doing so.

~~~
lucideer
Looks like it's being done as part of the parallax effect on the full-width
images, but it certainly needn't be.

------
tmoravec
Hyper realistic talking cars.

------
Animats
Now someone has to implement this for graphics card shaders in real time.

~~~
danieltillett
It is coming.

~~~
Cthulhu_
Ehhh. I don't think modern computers could render Toy Story in real time
even now; maybe an approximation, but that's what realtime graphics are
about - using tricks to make approximations, to make things look "good
enough". Non-realtime graphics feel easier in some ways because you can use
e.g. raycasting to get "free" realistic reflections and lighting and
whatnot. Then you can focus more on what makes a material look the way it
does, like they do in this article.

~~~
AboutTheWhisles
I think the original Toy Story could be done in real time with modern graphics
cards. The Quora answer below was talking about CPUs in 2011, and it seems
that plenty of added time (relatively speaking) was due to network and disk
infrastructure not having the same 1000x speedup.

A more subtle aspect to consider is that CPUs might have increased in raw
computation by huge factors, but memory latency means that the same programs
from 20 years ago will run much faster, yet not at the same multiple of the
raw flops. This implies that the software would have to be rearchitected to
some extent to get the full benefit of a modern CPU or GPU.

Toy Story didn't use any ray tracing; RenderMan at the time was reliant on
shadow maps. Because of the lack of global access to the scene from shaders,
the local-illumination Reyes architecture should map extremely well to GPUs.
I think it is possible that someone will do similar things with Vulkan or
OpenGL compute buffers if they haven't already.

~~~
dahart
For raw compute cycles, I think you're right, but...

The one place where today's GPUs aren't as good as Toy Story is filtering &
anti-aliasing. The texture filtering and pixel (camera) filtering in
software renderers is still much higher quality (and more expensive) than
what GPUs typically do. You could roll your own high-quality texture
sampling in CUDA or OpenCL, but the texture sampling that comes with GPUs is
not great compared to what RenderMan does.

BTW, textures & texture sampling are a _huge_ portion of the cost of the
runtime on render farms. They comprise the majority of the data needed to
render a frame. The entire architecture of a render farm is built around
texture caching. Just getting textures into a GPU would also pose a
significant speed problem.

~~~
AboutTheWhisles
> The one place where today's GPUs aren't as good as Toy Story is filtering &
> anti-aliasing

This only makes sense if you are locked into some texture filtering
algorithm already, which isn't true. CPU renderers aren't doing anything
with their texture filtering that can't be replicated on GPUs. Where to draw
the line between using the GPU's native texture filtering and doing more
thorough filtering in software would be something to explore, but there is
no reason why a single texture sample in the sense of a software renderer
has to map to a single texture sample on the GPU.
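
As one concrete example of that many-to-one mapping, a higher-quality
anisotropic filter can be assembled from several of the GPU's cheap bilinear
fetches by spreading weighted probes along the footprint's major axis (the
idea behind Feline-style anisotropic filtering). A toy Python sketch, with
bilinear_tap() standing in for the hardware fetch; names and weights are
illustrative:

    import numpy as np

    def bilinear_tap(tex, u, v):
        # Stand-in for the GPU's nearly-free bilinear fetch; u, v in [0, 1].
        h, w = tex.shape
        u, v = min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        return ((1 - fx) * (1 - fy) * tex[y0, x0] + fx * (1 - fy) * tex[y0, x1]
                + (1 - fx) * fy * tex[y1, x0] + fx * fy * tex[y1, x1])

    def aniso_sample(tex, u, v, major_axis, probes=8):
        # One "software quality" sample built from several hardware taps:
        # weighted bilinear probes spread along the footprint's long axis.
        ts = np.linspace(-0.5, 0.5, probes)
        weights = np.exp(-4.0 * ts ** 2)   # rough Gaussian falloff
        vals = [bilinear_tap(tex, u + t * major_axis[0],
                             v + t * major_axis[1]) for t in ts]
        return float(np.dot(weights, vals) / weights.sum())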

> BTW, textures & texture sampling are a huge portion of the cost of the
> runtime on render farms. They comprise the majority of the data needed to
> render a frame.

I'm acutely aware of how much does or does not go into textures. Modern
shaders can account for as much as half of rendering time, with the tracing
of rays accounting for the other half. That's the entire shader, not just
textures, and it's an extreme example.

> The entire architecture of a render farm is built around texture caching.

This is not true at all. Render farm nodes are typically built with a
matching memory-to-CPU-core ratio as the main priority.

> Just getting textures into a GPU would also pose a significant speed
> problem.

This is also not true. In 1995 an Onyx with a maximum of 32 _Sockets_ had a
maximum of 2GB of memory. The bandwidth to PCIe 3.0 16x is about 16GB/s and
plenty of cards already have 16GB of memory. The textures would also stay in
memory for multiple frames, since most textures are not animated.

~~~
berkut
> I'm acutely aware of how much does or does not go into textures. Modern
> shaders can account for as much as half of rendering time, with tracing of
> rays accounting for the other half. This is the entire shader, not just
> textures and is an extreme example.

At least at the VFX level (Pixar's slightly different, as they use a lot of
procedural textures), texture I/O time can be a significant amount of render
time.

> This is not true at all. Render farm nodes are typically built with memory
> to CPU core ratios that match as the main priority.

I don't know what you mean by this (I assume that memory scales with cores?),
but most render farms at high level have extremely expensive fast I/O caches
very close to the render server nodes (usually Avere solutions) mainly just
for the textures.

The raw source textures are normally of the order of hundreds of gigabytes and
thus have to be out-of-core. Pulling them off disk, uncompressing them and
filtering them (even tiled and pre-mipped) is extremely expensive.

> This is also not true. In 1995 an Onyx with a maximum of 32 _Sockets_ had a
> maximum of 2GB of memory. The bandwidth to PCIe 3.0 16x is about 16GB/s and
> plenty of cards already have 16GB of memory. The textures would also stay in
> memory for multiple frames, since most textures are not animated.

This _is_ true. One of the reasons why GPU renderers still aren't being used
at high-level VFX in general is precisely because of memory limits (once
you go out-of-core on a GPU, you might as well have stayed on the CPU) and
the PCIe transfer costs of getting the stuff onto the GPU.

On top of that, almost all final rendering is still done on a per-frame basis,
so for each frame, you start the renderer, give it the source scene/geo, it
then loads the textures again and again for each different frame - precisely
_why_ fast texture caches are needed.

~~~
AboutTheWhisles
> At least at VFX level (Pixar's slightly different, as they use a lot of
> procedural textures), Texture I/O time can be a significant amount of render
> time.

I was referring to visual effects.

> I don't know what you mean by this (I assume that memory scales with
> cores?), but most render farms at high level have extremely expensive fast
> I/O caches very close to the render server nodes (usually Avere solutions)
> mainly just for the textures.

I wouldn't say the SSDs on NetApp appliances or putting hard drives in
render nodes are 'architecting for texture caching'. These are important for
disk I/O all around. Still, it's not relevant to rendering Toy Story in real
time, since it is clear that GPUs have substantially more memory than a
packed SGI Onyx workstation had in 1995.

> The raw source textures are normally of the order of hundreds of gigabytes
> and thus have to be out-of-core. Pulling them off disk, uncompressing them
> and filtering them (even tiled and pre-mipped) is extremely expensive.

I don't know if I would say 'normally', but in any event I don't think that
was the case for Toy Story in 1995. Even so, the same out of core texture
caching that PRman and other renderers use could be done from main memory to
GPU memory, instead of hard disk to main memory.

> This is true. One of the reasons why GPU renderers still aren't being used
> at high-level VFX in general is precisely because of both memory limits
> (once you go out-of-core on a GPU, you might as well have stayed on the CPU)
> and due to PCI transfer costs of getting the stuff onto the GPU.

This was about the possibility of rendering the first Toy Story in real time
on modern GPUs.

> On top of that, almost all final rendering is still done on a per-frame
> basis, so for each frame, you start the renderer, give it the source
> scene/geo, it then loads the textures again and again for each different
> frame - precisely why fast Texture caches are needed.

This is a matter of workflow, which makes perfect sense when renders take
multiple hours per frame, but if trying to render in real time, the same
pipeline wouldn't be reasonable or necessary.

------
mozumder
One thing about 3-D animated movies is that they can be rerendered later on
with improved rendering tech without being reshot.

Disney should go ahead and rerelease Toy Story with its latest rendering
tech (at maybe 8K), to show these kinds of rendering improvements.

~~~
tikhonj
My guess is that it's significantly more difficult than that—those movies are
pure legacy code by now, and we all know how difficult it is to modernize
legacy applications :).

~~~
tbabb
You are correct. Toy Story 1 and 2 were resurrected around 2010 for 3D re-
rendering. It took a team of 4 or 5 working for the better part of a year,
IIRC.

A 3D movie like Toy Story isn't like a Maya file; it's hundreds of thousands
of files in a gigantic filesystem, and many pieces of complicated software
working together just to compose them and generate a mere _description_ of the
scene, let alone make an image out of it. Getting the software to a place
where it could reproduce an image at all was a feat.

And the end result, if everything goes well, is just the same image rendered
again from two angles instead of one. The images would not magically look like
modern graphics, because for that, an artist would have to re-model the
characters to be higher resolution (and then re-animate them to be up to
snuff), re-surface them, re-light them... basically re-make the whole movie.
Not a button push. :)

