

Pixar and RenderMan: Changing the distributed rendering game - UPSLynx
http://icrontic.com/article/pixar-renderman-distributed-rendering

======
troymc
There are many other changes happening in the rendering world. Two of them are:

- More accurate calculations based on the physics of the light in the scene
(i.e. techniques which tend to produce more "real-looking" renders). Ideas for
how to do this aren't new, but the available computing power has increased and
made some of those ideas practical.

- The use of GPU clusters for rendering. This may sound odd to gamers, who
might think GPUs have always been used for rendering, but the real-time
rendering done for video games is different from the rendering done for high-
end movies and architectural visualization. Until recently, those rendering
calculations were done by CPUs, but with the advent of general-purpose
computing on GPUs (GPGPU), and standard ways to do that (CUDA and OpenCL),
there's been an increase in the use of GPUs for rendering.

~~~
radarsat1
I was wondering about GPUs, after reading that sentence about "massive farm of
CPU cores." GPU processing is here now, and I'd be surprised if the movie-
quality rendering industry, which stands to benefit hugely from increases in
speed, hasn't explored this option yet. Especially as both GPGPU APIs have
special structures and functions to help with graphics-related tasks like
image loading, handling colours, etc.

On the other hand, I could see some difficulties: a lot of cards don't yet
support 64-bit floats, and there are some severe memory restrictions. However,
with a card on the expensive side you can get around both these problems, and
it really will render the scene thousands of times faster than a CPU, so the
cost would pay off. Something like the NVIDIA Tesla "personal supercomputers"
are "only" on the order of $8,000 or so for 6 GB of video RAM and an amazing
number of cores, a drop in the bucket for a company whose revenue depends
solely on rendering graphics.

~~~
berkut
The reasons GPUs aren't yet used much in the VFX industry are:

GPUs are currently limited in what they can do - they're fast in terms of
float throughput in a simple loop, but as soon as you introduce a lot of
branching (for things like global illumination via path tracing), their
throughput falls off a lot. The Fermis are better, but Intel's branch
predictors are very good.
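
As a toy illustration of that divergence cost (a sketch of my own, all numbers
made up): a 32-wide SIMT warp effectively pays for every branch path that any
of its threads takes, while a CPU thread only pays for its own path.

    # Toy model of SIMT branch divergence (numbers made up, purely illustrative).
    # A warp serially executes every branch path taken by at least one of its threads.
    import random

    WARP_SIZE = 32
    COSTS = {"diffuse": 1.0, "multi_bounce": 10.0}   # relative shading costs

    taken = [random.choice(list(COSTS)) for _ in range(WARP_SIZE)]

    warp_cost = sum(COSTS[path] for path in set(taken))        # warp pays for every path hit
    cpu_cost = sum(COSTS[path] for path in taken) / WARP_SIZE  # average cost of a ray's own path

    print(f"divergent warp: {warp_cost:.1f} per ray, coherent CPU-style: ~{cpu_cost:.1f}")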

SSE and AVX can also more than quadruple the throughput of CPUs in the best of
cases.

Also, a typical VFX feature film scene can have up to 200 GB of geometry and
100 GB of textures, and this is before acceleration structures, deformation
motion blur caches and texture mip-maps have been created.

Transferring that amount of data over the PCI-E bus isn't worth it at the
moment, and given the 6 GB limit on onboard VRAM, it would mean huge amounts
of swapping.
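
A rough back-of-envelope (bandwidth and sizes assumed, purely illustrative)
shows the scale of that:

    # Back-of-envelope sketch (assumed numbers, purely illustrative): the cost of
    # just streaming a film-scale scene across PCI-E, ignoring all shading work.
    scene_gb = 200 + 100        # geometry + textures, per the figures above
    vram_gb = 6                 # onboard VRAM on a high-end card
    pcie_gb_per_s = 6.0         # assumed practical PCIe 2.0 x16 bandwidth

    swaps = scene_gb / vram_gb              # minimum number of VRAM refills
    transfer_s = scene_gb / pcie_gb_per_s   # pure bus-transfer time

    print(f"at least {swaps:.0f} full VRAM swaps, ~{transfer_s:.0f} s of bus transfer")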

Also, GPUs use even more power and produce even more heat than CPUs, so you
don't save anything here.

~~~
Keyframe
One other reason why GPUs aren't as widely used: primary rays cache, secondary
rays thrash! GPUs are becoming standard in some pipeline stages of rendering,
though. <http://www.nvidia.com/object/wetadigital_avatar.html>

~~~
berkut
That's Weta being weird and wedded to their PRMan pipeline, using PantaRay as
a first-stage radiosity solution which is fed to PRMan as an infinite series
of lights. It works, but from what I hear it's limiting their pipeline quite a
bit. But then, they're Weta, so they're more than capable of writing their own
stuff, and they're good at what they do :)

ILM used to do something similar, but a lot of the big houses are moving from
PRMan over to raytracers (specifically Arnold) with full GI support, as it
makes the overall pipeline, and especially artist look-dev, a lot simpler and
faster. PRMan 16 is also a very good raytracer now (previously, ray tracing in
PRMan had to be done through custom shaders).

------
akg
The major problems for cloud-based rendering services in studio pipelines are:

1) There is a large amount of asset data that needs to be sent over to the
cloud. In my experience this can go up to several gigabytes per frame if you
count CFX, FX, rigs, etc. (a rough back-of-envelope follows after this list).

2) Most studios are reluctant to have their data stored on the cloud due to
piracy concerns.
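
As a rough illustration of point 1 (shot length, per-frame size and uplink
speed all assumed, purely illustrative):

    # Back-of-envelope sketch (all numbers assumed, purely illustrative): time to
    # push one shot's worth of per-frame assets up to a cloud render farm.
    frames_per_shot = 200
    gb_per_frame = 3          # "several gigabytes per frame" of CFX, FX, rigs, ...
    uplink_gbit_s = 1.0       # assumed 1 Gbit/s dedicated studio uplink

    total_gb = frames_per_shot * gb_per_frame
    hours = total_gb * 8 / uplink_gbit_s / 3600
    print(f"{total_gb} GB per shot, ~{hours:.1f} hours just to upload")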

I wonder how Greenbutton is handling these two issues?

~~~
berkut
1. Most VFX places are on a fat bandwidth pipe because it's becoming more and
more common to share assets with other studios. It's definitely an issue, but
with geometry formats like Alembic, which offer very efficient compression and
de-duplication, it's not as bad as it seems.

~~~
akg
True. Although, when I was at Dreamworks, they had a dedicated pipe between
their two office locations and one to an offsite rendering center in New
Mexico. I know they were still having trouble (even with a private, dedicated
pipe) transferring all their assets and render data. This got even more
problematic as they switched to 3D-Stereo renders. Then again, that was a
couple of years ago.

~~~
pestaa
How does the switch to 3D-Stereo make the issue worse? The process of 3D
rendering consists of rendering the same content from two angles and blending
the two, is that right? To an amateur like me, it doesn't sound like
additional data to transfer.

~~~
berkut
For 3D rendering, yeah, the geometry and textures are the same, so you re-use
them.

But you get double the output. And it's not just RGBA data: generally you
output AOVs for Z depth, UV coords, normals, an XYZ world-position pass,
motion vectors (forwards and backwards), specular contribution, reflection,
ambient occlusion, object ID, material ID, etc.

And that's normally stored as EXR files (16- or 32-bit float), which can be
compressed, but at 4K resolution it's generally around 40 MB per frame (RGBA).
For a full set of AOVs you can be talking 640 MB per frame, times 2 for
stereo, which adds up to a lot. Assuming a 2-hour movie at 24 fps, that's
around 172k frames.
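
Running those numbers through (per-frame sizes as quoted above, purely
illustrative):

    # Rough totals implied by the figures above (sizes assumed, purely illustrative).
    fps = 24
    frames = 2 * 60 * 60 * fps    # 2-hour movie -> 172,800 frames per eye

    full_aov_mb = 640             # RGBA plus depth, normals, motion vectors, IDs, ...
    eyes = 2                      # stereo doubles the output

    total_tb = frames * full_aov_mb * eyes / 1024 / 1024
    print(f"{frames} frames per eye, ~{total_tb:.0f} TB of rendered AOV output")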

If it's not a full CG film, then (if you're lucky, i.e. no manual 3D
conversion) you've got double the video footage as well.

All of this needs to be composited together, producing twice as much output
from that stage as well.

------
regularfry
Heh. I strapped together a LightWave cloud renderer as a side project a couple
of years ago. Never really did anything with it, though. Wonder if I should
dust off the code and see if there's interest?

------
kenrik
It's a shame that Xgrid never lived up to its potential. Unless you're
rendering podcasts, there's not much it's good for.

It would be awesome to have something like seti@home or folding@home for
rendering small independent studios' work.

