
Path Tracing vs. Ray Tracing (2016) - colinprince
https://www.dusterwald.com/2016/07/path-tracing-vs-ray-tracing/
======
berkut
A bit outdated, and not entirely accurate, at least for film VFX...

This bit:

> but until path tracing times are measured in minutes per frame, as opposed
> to the hours or days they are now, ray tracing (or rasterization, especially
> micropolygon rasterizers like the one powering RenderMan) remains the better
> option for many classes of rendering tasks.

wasn't even true in 2016: RenderMan (PRMan) 19 added a path tracer, version 21
eventually removed its REYES rasteriser entirely, and pretty much the entire
VFX film/animation industry is using path tracing now.

Artist time is much more expensive than machine time. A lot of the speedup in
iteration times has come from working in the PBR realm that path tracing lives
in: artists no longer have to tweak things like materials and lights per shot /
sequence to fake things like GI (which the article suggests is a good
alternative - it's not).

If anyone's interested in the state of the art for film VFX, five papers on the
five most-used path-tracing renderers in VFX were recently released:

[https://jo.dreggn.org/home/2018_manuka.pdf](https://jo.dreggn.org/home/2018_manuka.pdf)

[https://www.yiningkarlli.com/projects/hyperiondesign.html](https://www.yiningkarlli.com/projects/hyperiondesign.html)

[https://www.arnoldrenderer.com/research/Arnold_TOG2018.pdf](https://www.arnoldrenderer.com/research/Arnold_TOG2018.pdf)

[https://graphics.pixar.com/library/RendermanTog2018/paper.pdf](https://graphics.pixar.com/library/RendermanTog2018/paper.pdf)

[https://fpsunflower.github.io/ckulla/data/2018_tog_spi_arnold.pdf](https://fpsunflower.github.io/ckulla/data/2018_tog_spi_arnold.pdf)

~~~
wheelie_boy
I liked this SIGGRAPH presentation about how some people at NVIDIA see the
games industry adopting a more VFX-like pipeline, for a lot of the same
benefits you talk about:

[http://on-demand.gputechconf.com/siggraph/2018/video/sig1813-1-chris-wyman-morgan-mcguire-real-time-ray-tracing.html](http://on-demand.gputechconf.com/siggraph/2018/video/sig1813-1-chris-wyman-morgan-mcguire-real-time-ray-tracing.html)

~~~
pzone
The biggest benefit of the RTX cards isn't that they'll make existing video
games faster or that the new effects are strikingly more beautiful than
current technology. The biggest benefit is how easy it is to add these new
effects, achieving a look as beautiful as the previous state-of-the-art
trickery with a simple toggle.

------
tinymint
Noticed a lot of inaccuracies in the article.

> It also requires light sources to have actual sizes, a bit of a departure
> from traditional point light sources that have a position but are treated
> like an infinitely small point in space

You don't wait for the ray to bounce into a light. Lights are typically
sampled directly at each bounce point in a path tracer. If you waited for the
ray to hit a light by chance, it would take a crazy amount of time (100k+
samples for complex scenes) to converge.
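
For anyone unfamiliar, here's a minimal sketch of that direct light sampling
(next event estimation); the scene and hit helpers are hypothetical, not from
any real renderer:

```python
def radiance(scene, ray, depth, max_depth=8):
    """Path-tracer bounce with next event estimation (NEE): at every
    hit, cast one shadow ray toward a sampled light instead of hoping
    a random bounce eventually finds one. Helpers are hypothetical."""
    hit = scene.intersect(ray)
    if hit is None:
        return scene.background(ray)

    color = hit.emission  # real renderers must avoid double counting with NEE

    # NEE: sample a point on a light, check visibility, weight by its pdf.
    light_point, light_pdf = scene.sample_light(hit.position)
    if scene.unoccluded(hit.position, light_point):
        color += (hit.bsdf_eval(ray, light_point)
                  * scene.light_radiance(light_point) / light_pdf)

    # One stochastic bounce continues the random walk.
    if depth + 1 < max_depth:
        next_ray, weight = hit.bsdf_sample(ray)
        color += weight * radiance(scene, next_ray, depth + 1, max_depth)
    return color
```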

> how far you can trace them before giving up

You can use Russian roulette techniques to get unbiased sampling of
arbitrary-length paths.
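
A toy example of how that stays unbiased, shrunk down to estimating the
infinite bounce series 1 + a + a^2 + ... (real renderers tie the survival
probability to path throughput; this is simplified):

```python
import random

def rr_estimate(albedo, p=0.8):
    """Russian roulette: each 'bounce' survives with probability p and
    survivors are reweighted by 1/p, so the expected value is exactly
    1 / (1 - albedo) with no hard depth cap and no bias."""
    total, weight = 0.0, 1.0
    while True:
        total += weight          # energy contributed at this bounce
        if random.random() >= p:
            return total         # terminate the path
        weight *= albedo / p     # continue with boosted throughput

# Averaging many runs converges to 1 / (1 - 0.5) = 2.0:
print(sum(rr_estimate(0.5) for _ in range(100_000)) / 100_000)
```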

> The crux of the problem is that with a path tracer you are locked into an
> all or nothing approach...

There are many more subtleties to getting convergence than simply 'tweaking
quality settings', e.g. volumetrics, types of lighting, types of material,
denoising, etc. Also, it's MUCH more difficult to get a realistic result with
simple ray tracing than the author says.

> is it the future of high quality offline rendering?

Path tracing has been used for almost all offline VFX rendering for a very
long time now (although there are some interesting new developments in using
rasterization for production).

~~~
TheLoneAdmin
> You can use Russian roulette techniques

Did you mean Monte Carlo?

~~~
mroche
They meant Russian Roulette:

[https://docs.redshift3d.com/display/RSDOCS/Optimizations?product=maya#Optimizations-Russian-Roulette](https://docs.redshift3d.com/display/RSDOCS/Optimizations?product=maya#Optimizations-Russian-Roulette)

------
peterbraden
Maybe I'm wrong, but the terminology isn't that clear. Path tracing is a
subfield of ray tracing; ray tracing refers to generating rays from the camera
into the scene. What he describes as ray tracing sounds like the Whitted
recursive algorithm.

~~~
pasta
Ray casting = where does a ray intersect an object

Ray tracing = ray casting from the camera; where a ray intersects an object,
trace rays (with ray casting) to light sources and reflective materials
(Turner Whitted).

Path tracing = ray tracing, but when you hit an object, start ray casting from
that point as a bounce and gather all incoming energy so you can send it back
to the camera (James Kajiya).

So to me path tracing is just a method (extension) of ray tracing.
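
In rough code, that extension is the difference between these two loops;
colors are scalars here for brevity and the scene/material helpers are
hypothetical:

```python
def whitted(scene, ray, depth):
    """Whitted ray tracing: only deterministic secondary rays
    (shadow, mirror reflection); diffuse bounces are ignored."""
    hit = scene.intersect(ray)
    if hit is None or depth == 0:
        return 0.0
    color = direct_light(scene, hit)  # shadow rays toward the lights
    if hit.material.is_mirror:
        color += hit.material.kr * whitted(scene, reflect(ray, hit), depth - 1)
    return color

def path_trace(scene, ray, depth):
    """Path tracing (Kajiya): one random bounce per hit, averaged over
    many samples per pixel, gathering all incoming energy."""
    hit = scene.intersect(ray)
    if hit is None or depth == 0:
        return 0.0
    bounce, weight = hit.material.sample_bsdf(ray)  # stochastic direction
    return hit.emission + weight * path_trace(scene, bounce, depth - 1)
```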

~~~
atq2119
... and yet, in the DXR API, the TraceRay function does what you're calling
ray casting here, i.e. it figures out the intersection of a single ray with
the scene (and calls a shader associated with the hit material, to simplify
somewhat; in any case, TraceRay doesn't trace multiple rays).

The terminology just isn't 100% consistent.

------
paulhilbert
AFAIK the current state of the art in path tracing is still Metropolis light
transport methods. Current research seems mostly focused on denoising, which
makes sense, since good denoising provides a valuable shortcut to quicker
results.

While I only occasionally glance at new results in this area, it seems that
CNN-based denoising techniques look quite promising, possibly getting us close
to viable real-time path tracing, at least for "suitable" scenes. I am more
confident than ever that a shift to traced renderers could be next - this has
nothing to do with RTX and the buzz around it, though...

~~~
wheelie_boy
Metropolis has lots of advantages, but it has serious problems for realtime or
animated applications, in that it tends to flicker or have noticeable low-
frequency noise. This is less of a problem for stills.

RTX can absolutely accelerate path tracers, even for non-realtime
applications. The underlying framework is definitely flexible enough to
support a variety of rendering algorithms; it's basically accelerated BVH and
ray intersection, with shaders to control behavior.

The biggest advancement I've seen lately is in denoising - the ML-based
denoisers are incredible, but others are also impressive.

~~~
kayamon
This is slightly irrelevant, but presumably someone could invent a version of
Metropolis that also distributed its rays across time as well as space,
thereby ensuring that any bright/flickery pixels would remain coherent across
frames; i.e. once that path was discovered on one frame, it could propagate
that path information out to previous/succeeding frames.

------
tossandturn
I use path tracing for scientific/engineering studies of light propagation; in
particular, I attempt to simulate monochromatic light sources and
reflective/transmissive/absorptive material configurations (ranging from
specular to diffusive, and everything in between) to determine the irradiance
delivered to specific geometries. In the past I have used commercial packages
like ASAP (APEX Solidworks add-in), Zemax, and FRED (in order of preference).

------
GuB-42
> but until path tracing times are measured in minutes per frame, as opposed
> to the hours or days they are now

How about realtime on a consumer PC ;)

[http://www.pouet.net/prod.php?which=69642](http://www.pouet.net/prod.php?which=69642)

[http://www.pouet.net/prod.php?which=75720](http://www.pouet.net/prod.php?which=75720)

To make things clear, these 4k intros are not at all representative of what is
done in the film industry. They are made possible by using very simple
mathematical shapes (a sphere or 8 cubes). But that's still bona fide path
tracing.
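
For reference, the kind of exact intersection test that makes this feasible in
4k; this is the standard analytic ray-sphere solve, not code from either
intro:

```python
import math

def ray_sphere(o, d, center, r):
    """Solve |o + t*d - center|^2 = r^2 for t (d must be normalized).
    Exact shapes like this, instead of triangle meshes plus a BVH,
    are what keep a 4k-intro path tracer tiny."""
    oc = tuple(oi - ci for oi, ci in zip(o, center))
    b = sum(di * v for di, v in zip(d, oc))   # b = d . (o - center)
    c = sum(v * v for v in oc) - r * r
    disc = b * b - c
    if disc < 0.0:
        return None                           # ray misses the sphere
    t = -b - math.sqrt(disc)                  # nearer root
    if t <= 0.0:
        t = -b + math.sqrt(disc)              # origin inside the sphere
    return t if t > 0.0 else None

# A ray down +z hits a unit sphere 5 units away at t = 4:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```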

------
julienreszka
Wish there were illustrations

~~~
andystanton
Disney's Practical Guide to Path Tracing is a good resource for explaining
path tracing: [https://youtu.be/frLwRLS_ZR0](https://youtu.be/frLwRLS_ZR0)

------
GolDDranks
I wonder what convolutional neural networks bring to the table. Maybe you
could use them to get rid of the noise, or you could use ray tracing for the
base image and train a CNN to "recolour" that image based on a noisy
path-traced one of the same scene.

This demo is 4 years old, but the noise still seems to be a problem:
[https://youtu.be/BpT6MkCeP7Y](https://youtu.be/BpT6MkCeP7Y)

If anybody's got more recent impressive demos to link, I'd like to see how
things have been developing.

~~~
blooop
This one uses a neural network:

[https://www.youtube.com/watch?v=YjjTPV2pXY0](https://www.youtube.com/watch?v=YjjTPV2pXY0)

video results start around 1:15

This one does not use a neural network:

[https://www.youtube.com/watch?v=HSmm_vEVs10](https://www.youtube.com/watch?v=HSmm_vEVs10)

~~~
aidenn0
Those both only use global motion, which is the simplest case for temporal
filtering; it would be interesting to see the effects with local motion (e.g.
a blinking light).

------
seanalltogether
Is there a reason that a path tracer doesn't just start with a quick ray trace
and then layer the extra paths on top of it? I've seen the effects mentioned
where you limit bounces of a path tracer and the resulting image looks grainy.
Why not start with a ray trace and blend a limited path trace on top? I would
guess that even a full-resolution ray trace blended against a
quarter-resolution path trace with a high bounce limit would give a pretty
good image, right?

~~~
Asooka
That's pretty much how commercial ray tracers work. Path tracing for scattered
rays and GI, with ray tracing for evaluating direct light. Look at Cycles's
code.
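
A sketch of that split; direct_light and path_trace stand in for a renderer's
real routines, and N_GI_SAMPLES is a made-up knob, purely for illustration:

```python
N_GI_SAMPLES = 16  # hypothetical quality setting

def render_pixel(scene, camera_ray, max_depth=4):
    """Hybrid shading: direct lighting from clean, deterministic
    shadow rays (classic ray tracing), with a few stochastic paths
    layered on top for the indirect/GI term - the only noisy part."""
    hit = scene.intersect(camera_ray)
    if hit is None:
        return scene.background(camera_ray)
    direct = direct_light(scene, hit)       # noise-free
    indirect = 0.0
    for _ in range(N_GI_SAMPLES):
        bounce, weight = hit.bsdf_sample(camera_ray)
        indirect += weight * path_trace(scene, bounce, max_depth)
    return direct + indirect / N_GI_SAMPLES
```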

------
jcfrei
I wonder, will we ever see (or is there already) the equivalent of a game
engine on a chip? Like a ray-tracing rendering pipeline, where the polygons of
a scene (and the position of the observer) are directly sent via a low-level
API to the GPU and then it returns an image.

~~~
w0utert
There are a lot of moving parts involved in making a full-featured path
tracer, and they can have wildly varying performance characteristics. Some
parts such as finding ray intersections in parallel and integrating them map
extremely well to a GPU-like architecture, while others such as building and
traversing a bounding-volume hierarchy are better suited for a traditional CPU
(even though within certain constraints you can also do it on a GPU).
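
For a sense of why, here's what BVH traversal boils down to (the node layout
and helpers are hypothetical stand-ins); the irregular, pointer-chasing
control flow is the part that has traditionally favored CPUs:

```python
def bvh_intersect(node, ray):
    """Walk the tree, skipping any subtree whose bounding box the ray
    misses, and keep the nearest hit. aabb_hit / prim_hit are
    hypothetical helper functions."""
    if not aabb_hit(node.bounds, ray):
        return None                      # prune the whole subtree
    if node.is_leaf:
        hits = [h for h in (prim_hit(p, ray) for p in node.prims) if h]
        return min(hits, key=lambda h: h.t, default=None)
    left = bvh_intersect(node.left, ray)
    right = bvh_intersect(node.right, ray)
    if left and right:
        return left if left.t < right.t else right
    return left or right
```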

The full rendering pipeline is much more than just finding ray-triangle
intersections. It also involves material BRDFs/BSDFs (reflection/scatter
properties), volumetric effects (fog, liquids, etc.), motion effects such as
motion blur, and so on. Depending on what you are rendering, the render
pipeline can be vastly different from application to application.

I think most production path tracers are still primarily CPU-based, presumably
because of the required flexibility.

