
Vectorized Production Path Tracing - setra
http://tabellion.org/et/paper17/index.html
======
ykl
This is one of my favorite graphics/rendering papers from 2017; great work
from the entire DWA Moonray team! In addition to the main paper, there are
some other materials that were presented at HPG and SIGGRAPH 2017 by the same
team.

HPG Slides: [http://www.highperformancegraphics.org/wp-
content/uploads/20...](http://www.highperformancegraphics.org/wp-
content/uploads/2017/Papers-
Session5/HPG2017_VectorizedProductionPathTracing.pdf)

SIGGRAPH Course Notes (the relevant part begins on Page 35):
[https://jo.dreggn.org/path-tracing-in-
production/part1.pdf](https://jo.dreggn.org/path-tracing-in-
production/part1.pdf)

SIGGRAPH Slides: [https://jo.dreggn.org/path-tracing-in-
production/MoonrayV3.p...](https://jo.dreggn.org/path-tracing-in-
production/MoonrayV3.pdf)

Also, the actual shading system they used was presented at SIGGRAPH 2017:
[http://blog.selfshadow.com/publications/s2017-shading-
course...](http://blog.selfshadow.com/publications/s2017-shading-
course/dreamworks/s2017_pbs_dreamworks_notes.pdf)

~~~
pixel_fcker
It's interesting work, but the speedup they get seems pretty modest (1.3x to
2.3x) when going from 1 to 16 SIMD lanes. It seems like the overhead from all
the queue management and the AOSOA transformation must negate most of the
benefit of the vectorization?
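For anyone unfamiliar with the AOSOA layout being discussed: the idea is to regroup an array-of-structs into fixed-width struct-of-arrays blocks so each SIMD lane reads a contiguous component. A minimal sketch in Python (not the Moonray code; the lane width of 8 and the pad-with-last-element policy are illustrative assumptions):

```python
LANES = 8  # hypothetical SIMD width, e.g. AVX2 with 32-bit floats

def aos_to_aosoa(rays):
    """rays: list of (ox, oy, oz) origin tuples.
    Returns a list of struct-of-arrays blocks, each LANES wide,
    padding the final block by repeating its last ray."""
    blocks = []
    for start in range(0, len(rays), LANES):
        chunk = list(rays[start:start + LANES])
        # pad the last block so every block is exactly LANES wide
        chunk += [chunk[-1]] * (LANES - len(chunk))
        blocks.append({
            "ox": [r[0] for r in chunk],  # each component is now
            "oy": [r[1] for r in chunk],  # contiguous within the block,
            "oz": [r[2] for r in chunk],  # ready for a SIMD load
        })
    return blocks

rays = [(float(i), 0.0, -1.0) for i in range(10)]
blocks = aos_to_aosoa(rays)
# 10 rays -> 2 blocks of 8 lanes; block 1 is padded with ray 9
```

The per-element shuffling this implies (on top of sorting rays into queues) is the kind of bookkeeping the parent comment suspects eats into the SIMD win.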

They also mention the fact that programming the system is hard, and plugins
must fall back on single-lane code paths until they can be coded into the
system proper.

I would assume all the ray sorting makes it extremely difficult to use any
bidirectional methods as well.

------
Jonanin
Why aren't animation houses using GPUs for rendering? It seems like they could
get another order of magnitude speedup with CUDA or OpenCL.

~~~
w0utert
It's not that simple. The basic idea behind path tracing maps very well to
GPUs, so you can get huge speedups there (10x-100x depending on the scene,
geometry representation, etc.). For example, path tracing signed distance
fields (volume textures, basically) can be done extremely efficiently on
GPUs.
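To make the SDF case concrete: each step just evaluates a distance function and marches the ray forward by that distance (sphere tracing), which is branch-light and maps well to GPU threads. A toy single-ray version in Python, with an illustrative sphere SDF (the constants and helper names here are made up for the example):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # signed distance from point p to the sphere surface
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=100.0):
    """March along the ray, stepping by the SDF value each iteration.
    Returns the hit distance t, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:       # close enough to the surface: report a hit
            return t
        t += d            # safe step: the SDF guarantees no surface is nearer
        if t > t_max:     # marched past the far limit: miss
            break
    return None

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
# ray along +z hits the sphere's front face at t = 2.0
```

On a GPU this inner loop runs per pixel with no scene traversal data structure at all, which is why the speedups can be so large for this representation.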

Things get a lot more difficult when the scene becomes more complex, needs to
be animated, etc. You need much more advanced forms of visibility
detection/object culling, you have to track scene changes, and you need a much
bigger working set (models, textures, metadata) in memory. The amount of code
that needs to run compared to 'just the path tracer' starts to far outweigh
what the GPU can efficiently process.

Hybrid solutions are of course possible and widespread, but it is not easy to
implement them in a way that doesn't annihilate the speedup you get from the
parts running on the GPU, because of synchronization, copying memory back and
forth between CPU <-> GPU, etc. I can imagine that the kinds of rendering
pipelines animation studios use depend on hundreds of individual tools from
different suppliers, so it would be next to impossible to integrate and run
the full rendering pipeline efficiently if it runs partly on CPU and partly on
GPU. But maybe some parts could be?

~~~
gt_
Good summary. I don’t think many 3D artists would disagree that ‘Redshift’ is
slightly ahead of the small pack of GPU renderers available, but switching has
repercussions and limitations throughout the production. These CPU renderers
will handle anything you throw at them.

------
gt_
The featured image looks incredible.

------
chaintip
Wow, this looks nice.

