
Disney's Hyperion Renderer - ghosh
http://www.disneyanimation.com/technology/innovations/hyperion
======
couchand
I saw Greg Nichols give a talk about this at the UICC [0] this year. It was a
very entertaining talk covering the business and technical aspects of this
project.

From the start the art team on Big Hero 6 wanted to make a really big movie.
They envisioned those big sweeping shots of San Fransokyo that made the
technical folks shudder. They have a massive rendering farm at their disposal
but still probably wouldn't be able to render all the shots in time for the
release: partly because the art folks kept making changes, and partly because
the shots contain so many entities. A handful of folks on the
rendering team built a proof-of-concept renderer with support for the global-
illumination look they wanted while also being able to handle the massive
scale.

As the linked article mentions, the solution was to optimize a massively-
parallel algorithm by finding coherent rays that can be efficiently calculated
together. It's a batch method: start casting a bunch of rays, identify similar
rays and group them, calculate collisions for each group, cast reflected rays,
identify similar rays and group them, etc. What I like about it conceptually
is that in a way it treats light as a field rather than as individual directed
rays.
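To make the batching idea concrete, here is a toy sketch (my own illustration, not Disney's actual code) of grouping rays into coarse direction buckets so each group can be traced together for better coherence:

```python
from collections import defaultdict

def direction_bucket(direction, bits=2):
    """Quantize a unit direction vector into a coarse integer bucket,
    so rays heading roughly the same way land in the same batch."""
    n = 1 << bits
    return tuple(min(int((c * 0.5 + 0.5) * n), n - 1) for c in direction)

def batch_rays(rays, bits=2):
    """Group (origin, direction) rays by coarse direction bucket.
    Each group can then be traced and shaded together."""
    groups = defaultdict(list)
    for origin, direction in rays:
        groups[direction_bucket(direction, bits)].append((origin, direction))
    return groups

# Two rays heading roughly +x, one heading -x: two groups.
rays = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),
        ((1.0, 2.0, 3.0), (0.99, 0.1, 0.0)),
        ((5.0, 5.0, 5.0), (-1.0, 0.0, 0.0))]
groups = batch_rays(rays)
```

A real system would sort by origin and direction (and defer shading too), but even this coarse bucketing captures the core idea of trading ray order for memory coherence.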

Their initial tests were rendering an infinite plane of generated buildings.
The dystopian metropolis was mesmerizing, and I think Disney would be wise to
come up with some dark plot line to set there. Anyway, someone way at the top
(the director or art director I think) saw the proof-of-concept renderings and
decided that it was the perfect tool to attain their vision of this movie,
which, by the way, has a hard release date something like three months away.

So they proceeded to turn the proof-of-concept into a production-grade
renderer in time to render all the frames and save the movie. And somehow they
pulled it off, they released a really beautiful movie, and they wrote a paper
about it, too.

[0]:
[http://www.acm.uiowa.edu/uicc/speakers.html#nichols](http://www.acm.uiowa.edu/uicc/speakers.html#nichols)

~~~
CyberDildonics
I'm going to deflate the marketing spin a little bit, since large companies
tend to go down the route of 'there was an impossible task and we used
outside-the-box thinking to do what no one else could'.

Tracing rays in batches, sorting them for better coherency, and sorting the
shading for coherency are not new ideas, but they are long overdue for heavy
use in offline, CPU-based, ray-traced rendering.

Large amounts of geometry have been handled in movies for a long time, so
Disney didn't really do anything that couldn't have been done before. All the
same, tracing one light path at a time while interleaving the shading is
terrible for performance, so it is great that Disney was able to reap the
benefits of avoiding it. It is something that should have been obvious to
anyone who understood CPU cache behavior, but now at least people have
something non-academic to point to.

~~~
mattpharr
Given that this is HN, I'd encourage reading and commenting on the substance
of the technical paper rather than the marketing webpage.

Those ideas have indeed been around for a long time; the paper nicely cites
all of the related research work. (Including among many others, a SIGGRAPH
paper I wrote on the topic 18 years ago.)

I think the paper is excellent. First, there's a big gap between academic
papers on a topic and the experience of actually building a real system that
works for real movies. It's unusual for people in industry to take the time to
write up their experiences building these systems, so I salute their making
this contribution to general knowledge about rendering systems.

It's easy to carp about "all could have been done before"; that seems like an
argument that could be applied to try to dismiss just about anything.

~~~
tomvbussel
I completely agree with you that the information in that paper is excellent
and I think it's amazing that Disney is so willing to share so much
information about their renderer, even though their sister company Pixar sells
a renderer.

The question though is whether or not this paper should have been published at
EGSR. Personally I think that an EGSR paper should either propose a novel idea
or should provide a good survey of the field. I don't think this paper
succeeds at either of those. Their method is 'simply' a combination of already
published ideas and I don't think that they do a very good job at comparing
those existing ideas.

Personally, I think this paper would have been better suited as a talk at
SIGGRAPH, a publication at JCGT or a technical report (like Pixar). So: great
paper, wrong venue.

~~~
CyberDildonics
I'm actually really glad they got the paper out there as a small PDF (and I'm
glad they compared it to other commercial renderers). The reason is that it
shows very real-world performance results. Not only that, but cache-coherency
optimization results are shocking and illuminating to many people, because
they seem so counterintuitive when technically no more instructions are being
run.

Every paper I read now, I look for all the things that were left out, how the
comparisons have been changed for each scene to make that particular
algorithm look good, etc.

It answered a big question lingering in my mind that I haven't been able to
actually try out.

------
Xophmeister
What's the difference between path tracing, ray tracing and photon mapping?

I'd never heard of path tracing before now, and the video makes it sound the
same as ray tracing/casting, but what I've read implies otherwise (I'm not
sure how). Clearly it's different from photon mapping, but whatever happened
to that? Was it too computationally expensive for the pay-off?

~~~
kuya
Ray tracing is a more generic term that covers rendering techniques which
trace rays between a camera and a scene. Back in the day everyone used
Whitted-style ray tracers to render shiny metal blobs, because specular
reflections are trivially handled with ray tracing. Unfortunately, other
physical aspects of light, like diffuse reflection, were not handled by early
ray tracing algorithms.

Kajiya introduced the rendering equation, a mathematical formulation of global
illumination (it takes both diffuse and specular transport into account). It's
a multidimensional integral equation. Unidirectional path tracing is an
algorithm he introduced to solve it: many rays are traced from the camera
through the pixels and bounce randomly through the scene. It's a Monte Carlo
integration algorithm for solving the nasty integrals.
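For reference, the rendering equation in its standard hemispherical form (with the usual symbols: outgoing radiance L_o, emitted radiance L_e, BRDF f_r, incoming radiance L_i, surface normal n):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Path tracing estimates this integral by averaging random samples of the integrand; each camera ray's sequence of random bounces is one sample of the recursively expanded, high-dimensional version of the integral.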

Photon mapping is a two-pass algorithm: a first pass traces light from the
light sources into the scene, then a second pass (much like path tracing)
gathers that light back at the camera. It better handles more complex light
effects like caustics.
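A minimal sketch of the two-pass idea (purely illustrative and my own invention: photons fall straight down onto the plane y = 0, and irradiance is estimated from local photon density, as in the gather step of photon mapping):

```python
import math
import random

def emit_photons(light_pos, n):
    """Pass 1: shoot photons from a light at light_pos and record where
    they land on the plane y = 0 (toy geometry, no real scattering)."""
    photons = []
    for _ in range(n):
        dx, dz = random.uniform(-1, 1), random.uniform(-1, 1)
        t = light_pos[1]  # fall distance down to the plane
        photons.append((light_pos[0] + dx * t, 0.0, light_pos[2] + dz * t))
    return photons

def gather(photons, point, radius):
    """Pass 2: density estimation -- count photons within `radius` of the
    query point and divide by the disc area and photon count."""
    r2 = radius * radius
    near = sum(1 for p in photons
               if (p[0] - point[0]) ** 2 + (p[2] - point[2]) ** 2 <= r2)
    return near / (math.pi * r2 * len(photons))

random.seed(1)
photons = emit_photons((0.0, 5.0, 0.0), 20000)
center = gather(photons, (0.0, 0.0, 0.0), 1.0)  # directly under the light
edge = gather(photons, (8.0, 0.0, 0.0), 1.0)    # outside the lit region
```

The fixed gather radius is exactly where the bias discussed below comes from: the density estimate blurs illumination over the disc.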

~~~
Xcelerate
> It better handles more complex light effects like caustics.

I wouldn't say _better_. I would say it provides an approximation to the
rendering equation more quickly than path tracing does. However, photon
mapping is a biased algorithm, which means that if you average many
independent renderings together, they won't converge on the correct (exact)
image. Path tracing methods (bi-directional, Metropolis, etc.) converge on the
exact solution, regardless of how noisy each individual rendering is.
(However, it may be the case that an unwieldy number of samples is required
for tricky caustics, so _in practice_, path tracing may fail to produce a
correct result because of high variance.)
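The bias/consistency distinction can be demonstrated with a toy Monte Carlo estimator (my own example, not from the thread): estimate the integral of x^2 over [0, 1] (true value 1/3), once plainly and once with the kind of sample clamping renderers use to suppress "fireflies". Averaging many independent runs of the clamped estimator still converges, but to the wrong value (about 0.264).

```python
import random

def unbiased_estimate(n):
    """Plain Monte Carlo estimate of the integral of x^2 on [0, 1]."""
    return sum(random.random() ** 2 for _ in range(n)) / n

def biased_estimate(n, clamp=0.5):
    """Same estimator, but each sample is clamped (firefly suppression).
    The clamp makes every run systematically too low: bias."""
    return sum(min(random.random() ** 2, clamp) for _ in range(n)) / n

random.seed(0)
runs = 200
# Averaging many independent runs drives down variance...
avg_unbiased = sum(unbiased_estimate(1000) for _ in range(runs)) / runs
# ...but it cannot remove bias: this converges to ~0.264, not 1/3.
avg_biased = sum(biased_estimate(1000) for _ in range(runs)) / runs
```

The unbiased average lands on 1/3; the biased one does not, no matter how many runs are averaged, which is exactly the point made above about averaging independent renderings.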

~~~
tomvbussel
Photon Mapping might be biased, but it's extremely easy to make it consistent
by using the method outlined by Knaus and Zwicker [1]. Using that method
photon mapping will converge to the right result. Even without progressive
photon mapping you can choose a photon radius that won't cause visible errors.

[1] [http://cgg.unibe.ch/publications/2011/progressive-photon-
map...](http://cgg.unibe.ch/publications/2011/progressive-photon-mapping-a-
probabilistic-approach)

------
shocks
The Pixar online library is a really great resource.

[http://graphics.pixar.com/library/](http://graphics.pixar.com/library/)

~~~
tomvbussel
They have some pretty cool papers on there, but the few landmark papers from
Pixar were all published in the 80s.

Here is a better source for papers (SIGGRAPH, Eurographics and EGSR have the
best papers on rendering):

[http://kesen.realtimerendering.com/](http://kesen.realtimerendering.com/)

The Hyperion renderer was made by Disney, not Pixar by the way. Pixar only
uses their own RenderMan renderer. Disney uses a mix of Hyperion and
RenderMan, as Hyperion is not usable for test renders: because it traces
millions of rays at the same time, it can take up to 15 minutes until a pixel
appears on the screen.

~~~
mineral_or_veg
"Disney uses a mix of Hyperion and RenderMan, as Hyperion is not usable for
test renders, as it can take up to 15 minutes until a pixel will appear on the
screen, as they trace millions of rays at the same time."

None of the things you claim in this sentence are true. Disney does not use
PRman at all anymore.

~~~
mineral_or_veg
Sorry, I guess what I meant was that Hyperion is perfectly capable of being
used in test renders. The amount of time before you see any resolved pixels is
of course dependent on the size of your scene, the desired resolution, etc. It
_could_ take 15 minutes before a pixel shows up, but that's not a standard
case for production test renders in an artist's daily work.

------
pmorici
Anyone have any insight as to why Disney would build a separate rending engine
when they own Pixar and could use their renderman engine?

~~~
berkut
Because until last year when Pixar released PRMan 19 RIS, PRMan was not that
good at ray tracing / path tracing - the REYES method of rendering and shading
didn't scale as well for big complex scenes when global illumination and
raytraced light occlusion was done.

The RSL shading language was also causing a rather large overhead of shading
time as it was an interpreted language that could no longer amortise its
runtime cost over a shading grid of points using SSE - in path tracing, you
generally shade one point at a time per ray vertex, although you can batch
them up in some cases (but without significant ray re-ordering and deferring,
after the first bounce this becomes less and less useful).

Disney also use Ptex for texturing, which doesn't really like incoherent
shading points, so that was another reason to put effort into doing large
scale re-ordering of rays and batching them up.

~~~
tomvbussel
The REYES algorithm isn't really the problem; you can just generate an
acceleration structure and shade each shading point using ray tracing [1].
REYES is simply an algorithm that generates shading points and determines in
which pixels those points are visible; it puts no restrictions on how you
shade those points. The downside to mixing REYES and ray tracing is that you
have to keep two representations of the geometry in memory, but that doesn't
mean you can't combine the two.

The main problem with RenderMan before version 19 was that it didn't have a
good acceleration structure and that you basically had to implement a path
tracer in RSL on your own. Pixar wrote a path tracer in RSL for Monsters
University, so Disney could have just used that. I doubt that they would write
their own renderer if they could have just asked Pixar to implement a better
acceleration structure in PRMan, so there were obviously different reasons
too.

I think Ptex was an important reason for them to write their own renderer, as
they seem to put a lot of focus on textures in their paper on the architecture
of Hyperion [2]. Hyperion's architecture also allows them to use packet
tracing and to load and subdivide geometry on demand, which is also really
great.

[1]
[http://www.realtimerendering.com/resources/RTNews/html/rtnv1...](http://www.realtimerendering.com/resources/RTNews/html/rtnv11n1.html#art6)

[2] [https://disney-
animation.s3.amazonaws.com/uploads/production...](https://disney-
animation.s3.amazonaws.com/uploads/production/publication_asset/70/asset/Sorted_Deferred_Shading_For_Production_Path_Tracing.pdf)

~~~
berkut
It _was_ the problem, because with REYES hiding you suffer the huge
"overdrawing" issue: you need to set the shading rate per object in the scene
based on its size on screen so that it doesn't get too much shading. If an
object got too much shading (its shading rate was too low), many more rays
would be sent. This is because REYES shades geometry before it knows whether
it's visible, whereas raytracing from the camera (with the raytrace hider in
PRMan) only shades the points which _are_ visible, so it scales much better on
complex scenes (when GI comes into the picture - without GI, REYES copes
easily, as it pages everything).

------
dr_zoidberg
They talk about the bounces of light, which reminds me of radiosity global
illumination [1]. I'm left wondering if this idea of going "from the camera to
the light sources" is applied in any particular radiosity implementation, and
what its differences are with respect to the basic form of the algorithm
(which would go "from the light sources to the camera").

[1]
[https://en.wikipedia.org/wiki/Radiosity_%28computer_graphics...](https://en.wikipedia.org/wiki/Radiosity_%28computer_graphics%29)

~~~
wlesieutre
Radiosity doesn't track where light is coming from or going to, just the total
amount of light hitting a face from any direction. It's built around the
simplification that all surfaces are perfectly diffuse so that it can iterate
through all the faces and exchange light with every other face.

This has its advantages and disadvantages, the biggest advantage being that
it's "view independent." Since there's no specular reflection, a surface looks
equally bright from any direction, and you don't need to recalculate anything
if you move the camera. The big disadvantage is that there's no specular
reflection, so light bounces wrong off anything that should have been glossy.

I'm not aware of any attempts to do it "backward," and I can't think of how
that would work. The camera in radiosity is essentially an afterthought; you
can calculate the entire scene without even having one, and the view you get
afterward is more of a data visualization of that result than anything that
involved the camera in a fundamental way.
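The iteration described above (perfectly diffuse patches repeatedly exchanging light, with no camera involved anywhere) can be sketched as a Jacobi-style solve of the radiosity equation; the form factors below are made-up numbers for two facing patches:

```python
def radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi iteration of the classic radiosity equation
        B_i = E_i + rho_i * sum_j F_ij * B_j
    assuming perfectly diffuse patches. form_factors[i][j] is the
    fraction of light leaving patch i that reaches patch j."""
    n = len(emission)
    B = list(emission)
    for _ in range(iterations):
        B = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B  # view-independent: per-patch brightness, no camera needed

# Two facing patches: patch 0 emits light, patch 1 only reflects it.
B = radiosity(emission=[1.0, 0.0],
              reflectance=[0.5, 0.5],
              form_factors=[[0.0, 0.2], [0.2, 0.0]])
```

The solve converges to the fixed point B_0 = 1/0.99, B_1 = 0.1/0.99; any camera view is then just a visualization of these per-patch values, which is the "view independence" described above.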

~~~
dr_zoidberg
Sounds about right. Not having the time to work on implementing this sort of
thing (even as a toy project) leaves me at the point where I sometimes read
about the algorithms but can't see through them into the details.

------
gioele
In the first comparison I see that the shadows in the picture generated by
Hyperion are way too fuzzy compared to the reference photo. Is this a problem
with the technique, or just a result of the way the lighting and polygons
have been set up?

------
cbsmith
This article is from 2013. Why is it getting attention now?

~~~
tomvbussel
This web page is from this month, not 2013. They published a paper on the
architecture of Hyperion in 2013, but that never received a lot of attention
as it appeared out of the blue and it was published at a 'minor' conference.
Since most people thought that Disney was still using RenderMan, they never
realized that this architecture was actually being used in a production
renderer. A few months ago FXGuide published an article on Hyperion, and in the
past few months there have been a few talks by Disney on it, but this is the
first public article by Disney specifically about their renderer. And besides,
there aren't a whole lot of discussions on Hacker News on offline rendering,
so this article is a good opportunity to discuss graphics.

~~~
cbsmith
Thanks. I had heard about it last year, and so thought it was all old news.

