
Isn't that complementary to what I said, though? You could presumably get the same result by just path tracing the whole thing, so all you've gained is speed at the cost of memory. I didn't mean "speed hack" in a pejorative way.

I understand why you might want to do this in practice, but it still doesn't seem important enough to teach in an introduction.




The difference between Reyes and path tracing is this (both loops are sketched at the end of this comment):

- Reyes batches camera-visible geometry into grids of micropolygons and microvoxels. These micro-elements are then shaded using any method you please (which, in the case of Pixar movies since Monsters University, has been raytraced physically based illumination). The micropolygons are projected onto the image plane and stochastically sampled, in a manner much more akin to traditional rasterization. There is no raytracing for camera-visible geometry, and all projections must be linear.

This style of doing things was designed to exploit locality in screen space. It assumes that the image can be broken into "buckets", and that the information outside the bucket can be destroyed when the bucket is finished. This assumption held when RenderMan was first designed, but now that everybody is raytracing their illumination, it is less of a useful optimization.

This also means that shading happens at discrete times. Usually everything is shaded at the shutter-open instant, so specular highlights get smeared in an unphysical way, because their motion is not properly correlated with the geometry's motion.

I should also point out that RenderMan keeps an internal radiosity cache in Reyes mode, which is basically just the stored colors of illuminated geometry. This cache is queried by raytracing when computing indirect illumination, and a cache of the whole scene is kept for each bounce. Pixar only uses one or two bounces in Reyes mode due to the heavy memory cost. Note that view-dependent indirect illumination is not possible with this scheme: second (and deeper) bounces are always diffuse-only. (It is possible to avoid this, but it tends to be slow, because Reyes makes all kinds of generally-untrue assumptions about its ability to batch shading, and those assumptions fall apart in second-and-deeper-bounce shading.)

- Path tracing fires rays for camera-visible geometry. In most path tracers, including RenderMan RIS, each ray hit generates a new shading execution. This means that time-jittering for camera motion blur also trivially produces time-jittered shading, so (among other things) moving specular highlights will be correct, and view-dependent indirect illumination is possible. Because of this live-shading aspect there is no radiosity cache, so many more bounces of indirect illumination can be used (compare 8 bounces to Reyes' typical 2, for example).

Also, because camera-visible geometry is raytraced, it is possible to do interesting things like nonlinear projections.
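To make the structural difference concrete, here is a heavily simplified sketch of both loops in Python-flavored pseudocode. Every name in it (dice_into_micropolygons, shade, sample_into, trace, and so on) is hypothetical and glosses over enormous amounts of real-renderer machinery; the only point is where shading runs and at what time:

    # Reyes-style loop: shade first, sample second.
    def render_reyes(scene, camera, buckets):
        for bucket in buckets:
            for prim in scene.prims_overlapping(bucket):
                for grid in dice_into_micropolygons(prim):
                    # Shade every grid vertex once, at a fixed time
                    # (typically shutter open); indirect bounces are
                    # answered from a per-bounce radiosity cache of
                    # pre-shaded geometry colors.
                    shade(grid, t=camera.shutter_open)
                    # Project linearly and stochastically sample into
                    # the bucket, jittering position and time for
                    # motion blur but reusing the baked shading.
                    sample_into(bucket, project_linear(grid, camera))
            finalize(bucket)  # data outside the bucket may now be freed

    # Path-tracing loop: fire rays, shade at every hit.
    def render_path_traced(scene, camera, image, spp, max_bounces=8):
        for pixel in image:
            for _ in range(spp):
                # Time is jittered per camera ray, so shading is
                # evaluated at the same instant as the geometry.
                t = random_time(camera.shutter_open, camera.shutter_close)
                ray = camera.generate_ray(pixel, jitter_2d(), t)
                pixel.add(trace(scene, ray, t, max_bounces))

    def trace(scene, ray, t, bounces_left):
        hit = scene.intersect(ray, t)  # geometry evaluated at time t
        if hit is None or bounces_left == 0:
            return scene.background(ray)
        # A fresh shading execution at every hit, so view-dependent
        # indirect illumination works at any bounce depth.
        next_ray, weight = hit.bsdf.sample(ray)
        return hit.emission + weight * trace(scene, next_ray, t, bounces_left - 1)

In the first loop, shading runs once per grid at a fixed instant and its results are reused by every sample; in the second, shading is re-evaluated at every hit, at the ray's own jittered time.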


All good points. I'd also add:

* Discarding data when finishing a bucket means that you have to produce final-quality pixels before moving on. Path tracing allows the renderer to produce an initial, crude pass over the pixels and then refine from there. This leads to quicker initial feedback and makes interactivity possible. (The accumulation loop is sketched after these bullets.)

* Object instancing is much more practical with ray tracing.

* Before hybrid REYES/ray tracing, you'd have to bake out shadow maps and other passes before you could get to the beauty render. Even with ray tracing you might still be baking point clouds for GI. With path tracing you can do it all with one render pass.
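To illustrate the first bullet, here is a minimal sketch of the progressive accumulation a path tracer gets for free. The names (sample_radiance, display) are hypothetical, and a real renderer would parallelize and stratify this; the key property is that the running mean after every pass is already a displayable (if noisy) estimate of the final image:

    import numpy as np

    def render_progressive(scene, camera, width, height, passes):
        accum = np.zeros((height, width, 3))
        for n in range(1, passes + 1):
            for y in range(height):
                for x in range(width):
                    # One new random sample per pixel per pass.
                    accum[y, x] += sample_radiance(scene, camera, x, y)
            # Crude after the first pass, refined thereafter. A Reyes
            # bucket, by contrast, must be final-quality before its
            # data is discarded.
            display(accum / n)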


I guess I didn't fully understand your post, then. I thought you meant that the Reyes algorithm is an "inferior estimation" and that it might be beneficial to add it to a renderer, but neither is the case. It isn't an inferior estimation (just a different way of computing things), and you don't want to use the Reyes algorithm in practice, since scenes either barely fit in memory or don't fit in memory at all (and in that case you want your memory usage to be as efficient as possible, to avoid reading from disk as much as possible).


Am I missing something?

In "regular" Reyes (and I may just have this wrong - I was unaware that the hybrid technique you describe was common) I though you were a) locally approximating surfaces and b) applying local shaders. These shaders are typically inferior estimates of the BSDF at that location compared to other techniques, but can be very memory efficient.

In the hybrid approach you describe, you avoid at least (b) above by doing the shading via path tracing, say, but localized to the surfaces your rasterizer has discovered. So you aren't benefiting in anything but speed, and this seems more like an implementation detail than anything fundamental.

So I don't see how dropping this sort of detail from any introductory course is anything but sensible. There should probably be a discussion of space/speed/distributional issues in general, but at a higher level.

Again, I'm not questioning why anyone would want to implement it, just why anyone would find it odd to leave out from an introductory course.



