
The difference between Reyes and path tracing is this:

- Reyes dices camera-visible geometry into grids of micropolygons and microvoxels. These micro-elements are then shaded using any method you please (which, in the case of Pixar movies since Monsters University, has been raytraced, physically based illumination). The micropolygons are projected to the image plane and stochastically sampled, much more akin to traditional rasterization. There is no raytracing for camera-visible geometry, and all projections must be linear.
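
To make the dice-then-shade order concrete, here is a minimal sketch (names and the flat test patch are mine, not RenderMan's): a parametric patch is diced into a vertex grid, and every vertex is shaded once, up front, before any sampling happens.

```python
def dice(patch, grid_res):
    # Dice a parametric patch into a (grid_res+1) x (grid_res+1) vertex grid.
    # `patch(u, v)` maps unit-square parameters to a 3D point.
    return [[patch(u / grid_res, v / grid_res)
             for u in range(grid_res + 1)]
            for v in range(grid_res + 1)]

def shade_grid(grid, shader):
    # Shade every vertex once, before sampling -- the Reyes ordering.
    return [[shader(p) for p in row] for row in grid]

# Hypothetical flat patch and constant shader, purely for illustration.
flat = lambda u, v: (u, v, 0.0)
grid = dice(flat, 4)                       # 5x5 vertices -> 16 micropolygons
colors = shade_grid(grid, lambda p: (0.18, 0.18, 0.18))
```

The stochastic sampling step would then interpolate these pre-shaded colors at jittered screen-space sample positions, never re-running the shader per sample.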

This style of doing things was designed to exploit locality in screen space. It assumes that the image can be broken into "buckets", and that the information outside the current bucket can be discarded once the bucket is finished. This was a valid assumption when RenderMan was first designed, but now that everybody is raytracing their illumination, it is a far less useful optimization.
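
The bucket assumption can be sketched in a few lines (function names are mine): each tile is rendered to completion and handed off, at which point its entire working set is dead and can be freed.

```python
def render_in_buckets(width, height, bucket, render_pixel):
    """Render the frame one bucket at a time. Everything allocated for a
    bucket can be discarded once its tile is written out -- the Reyes
    screen-space locality assumption."""
    for by in range(0, height, bucket):
        for bx in range(0, width, bucket):
            tile = [[render_pixel(x, y)
                     for x in range(bx, min(bx + bucket, width))]
                    for y in range(by, min(by + bucket, height))]
            yield (bx, by), tile   # tile is final; its working set is now dead
```

Raytraced illumination breaks this model because a ray fired from inside one bucket can hit geometry anywhere in the scene, so nothing outside the bucket is safely discardable.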

This also means that shading happens at discrete times. Usually everything is shaded at the shutter-open instant, so specular highlights are smeared in an unphysical way: their motion is not properly correlated with the geometry's motion.

I should also point out that RenderMan keeps an internal radiosity cache in Reyes mode, which is basically just the stored colors of illuminated geometry. This cache is queried by raytracing when computing indirect illumination, and a cache of the whole scene is kept for each bounce. Pixar only uses one or two bounces in Reyes mode due to the heavy memory cost. Note that view-dependent indirect illumination is not possible with this scheme: second bounces (and deeper) are always diffuse-only. (It is possible to avoid this, but it tends to be slow, because Reyes makes all kinds of generally untrue assumptions about its ability to batch shading, and those assumptions fall apart in second-and-deeper bounce shading.)
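
The gist of such a cache can be sketched as follows (a simplification with names of my own invention; the real cache is far more sophisticated): one whole-scene store per bounce, where a shading point is shaded by the first ray that queries it and every later ray reuses that stored, view-independent color.

```python
def make_radiosity_caches(bounces):
    # One whole-scene cache per indirect bounce, as described above.
    return [dict() for _ in range(bounces)]

def cached_shade(caches, bounce, point_key, shade_fn):
    """Return the stored color for a shading point, running the shader only
    for the first ray that queries it. Because the result is reused for every
    incoming ray direction, it can only represent diffuse (view-independent)
    response -- which is exactly the limitation described above."""
    cache = caches[bounce]
    if point_key not in cache:
        cache[point_key] = shade_fn(point_key)
    return cache[point_key]
```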

- Path tracing fires rays for camera-visible geometry. In most path tracers, including RenderMan RIS, each ray hit triggers a new shader execution. This means that time-jittering for camera motion blur also trivially produces time-jittered shading, so (among other things) moving specular highlights are correct, and view-dependent indirect illumination is possible. Because of this live-shading aspect there is no radiosity cache, so much deeper levels of indirect illumination bounces can be used (compare 8 bounces to Reyes' typical 2, for example).
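
The motion-blur difference can be shown with a toy example (the moving highlight is hypothetical; function names are mine): Reyes-style shading evaluates the highlight once at shutter open and smears that single result, while per-ray shading evaluates it at each sample's own jittered time.

```python
# Hypothetical specular highlight whose screen-space x position moves
# linearly over the shutter interval t in [0, 1].
highlight_x = lambda t: 10.0 * t

def reyes_style_samples(times):
    # Shade once at shutter open; motion blur then smears that one result.
    shaded_at_open = highlight_x(0.0)
    return [shaded_at_open for _ in times]

def path_traced_samples(times):
    # Each ray carries its own jittered time, so shading tracks the motion.
    return [highlight_x(t) for t in times]
```

In the Reyes case every time sample sees the highlight frozen at its shutter-open position, so the blur streak follows the geometry but the highlight within it does not move.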

Also, because camera-visible geometry is raytraced, it is possible to do interesting things like nonlinear projections.
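
For instance, once primary rays are generated per pixel, a nonlinear camera is just a different ray-direction function. Here is a sketch of an equidistant fisheye (my own illustrative code, not RenderMan's camera API), something no single linear projection matrix can express:

```python
import math

def fisheye_ray(px, py, width, height, fov=math.pi):
    """Equidistant fisheye: map a pixel to a ray direction in camera space.
    A rasterizer's linear projection cannot represent this mapping."""
    # Pixel -> normalized [-1, 1]^2 coordinates.
    nx = 2.0 * px / width - 1.0
    ny = 2.0 * py / height - 1.0
    r = math.hypot(nx, ny)
    if r > 1.0:
        return None                       # outside the fisheye image circle
    theta = r * fov / 2.0                 # angle from the forward axis
    phi = math.atan2(ny, nx)              # azimuth around the forward axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```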




All good points. I'd also add:

* Discarding data when finishing a bucket means that you have to produce final-quality pixels before moving on. Path tracing allows the renderer to produce an initial, crude pass over the pixels and then refine from there. This leads to quicker initial feedback and makes interactivity possible.
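
The refinement loop is essentially a running average (a minimal sketch with names of my own choosing): after every pass the current estimate is displayable, just noisy, which is what makes interactive preview rendering work.

```python
def progressive_render(sample_pixel, passes):
    """Accumulate one low-quality pass at a time. The running average is a
    usable (if noisy) image after every pass -- the basis of interactivity."""
    total, n = 0.0, 0
    for _ in range(passes):
        total += sample_pixel()           # one more Monte Carlo sample
        n += 1
        yield total / n                   # current estimate, refined each pass
```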

* Object instancing is much more practical with ray tracing.
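
The reason is that a ray tracer can transform the ray into the prototype's local space instead of duplicating the geometry. A sketch (translation-only transforms and a toy sphere test, all illustrative):

```python
def make_instance(offset, prototype_hit):
    """An instance is a shared prototype plus a transform (here just a
    translation). The ray is moved into prototype space, so a thousand
    instances still store only one copy of the geometry."""
    def hit(origin, direction):
        local_origin = tuple(o - d for o, d in zip(origin, offset))
        return prototype_hit(local_origin, direction)
    return hit

def sphere_hit(o, d):
    # Unit sphere at the origin; test the quadratic discriminant only.
    # Assumes |d| = 1.
    b = sum(oi * di for oi, di in zip(o, d))
    c = sum(oi * oi for oi in o) - 1.0
    return b * b - c >= 0.0

# 1000 instances, one prototype.
instances = [make_instance((x * 3.0, 0.0, 0.0), sphere_hit)
             for x in range(1000)]
```

A Reyes pipeline, by contrast, must dice and shade each instance's camera-visible micropolygons separately, so instancing saves much less.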

* Before hybrid Reyes/ray tracing, you'd have to bake out shadow maps and other passes before you could get to the beauty render. Even with ray tracing you might still be baking point clouds for GI. With path tracing you can do it all in one render pass.



