
Direct X Raytracing: Further Unification - petermcneeley
http://darkcephas.blogspot.com/2018/03/direct-x-raytracing-further-unification.html
======
corysama
Meanwhile, on the Vulkan side: [https://gpuopen.com/announcing-real-time-ray-
tracing/](https://gpuopen.com/announcing-real-time-ray-tracing/)

[https://www.youtube.com/watch?v=C9eMciZGN4I](https://www.youtube.com/watch?v=C9eMciZGN4I)

~~~
pjmlp
Which is only supported on AMD cards.

Given that NVidia has their own API (RTX), which will be part of Gameworks
[0], is collaborating with Microsoft, and is currently requesting papers for
Ray Tracing Gems [1], I don't see them bothering with Vulkan support.

[0] - [https://blogs.nvidia.com/blog/2018/03/19/whats-difference-
be...](https://blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-
tracing-rasterization/)

[1] - [https://news.developer.nvidia.com/call-for-papers-real-
time-...](https://news.developer.nvidia.com/call-for-papers-real-time-ray-
tracing/)

~~~
corysama
Hmm... The video claims that it is open source and supports non-AMD hardware.

[https://www.youtube.com/watch?v=C9eMciZGN4I&t=1m7s](https://www.youtube.com/watch?v=C9eMciZGN4I&t=1m7s)

------
kevingadd
Related and worth reading: [http://aras-p.info/blog/2018/03/21/Random-
Thoughts-on-Raytra...](http://aras-p.info/blog/2018/03/21/Random-Thoughts-on-
Raytracing/)

~~~
fyi1183
The black box criticism is fair but to be expected. Early 3D APIs were black
boxes, but that had the advantage of allowing IHVs to experiment with
different implementations behind the scenes.

Now that things have largely settled down, moving to explicit APIs like Vulkan
makes sense.

With hardware acceleration for ray tracing we're only just at the beginning of
the development, so having fairly black box APIs makes sense again. Expect
that to change eventually once the industry has settled on answers to many of
the currently open questions, but it might take a while.

------
billconan
Can this raytracing pipeline be mixed with the rasterization pipeline?

~~~
petermcneeley
Yes. In fact, the starting point for the raytracing is the data resulting from
rasterizing the normal camera view. This technique allows the expensive
raytracing to be used sparingly.
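To make that hybrid launch step concrete, here is a rough sketch (plain C++
with made-up helper names, not any actual DXR or engine API) of spawning a
secondary reflection ray from the world-space position and normal that the
rasterizer already wrote into the G-buffer:

```cpp
#include <cassert>
#include <cmath>

// Minimal vector helpers; a real renderer would use its own math library.
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

struct Ray { Vec3 origin, dir; };

// Build a secondary (reflection) ray from data the rasterizer already
// produced: the world-space position and normal stored in the G-buffer.
Ray makeReflectionRay(Vec3 cameraPos, Vec3 worldPos, Vec3 normal) {
    Vec3 view = normalize(sub(worldPos, cameraPos));   // camera -> surface
    Vec3 refl = sub(view, scale(normal, 2.0f * dot(view, normal)));
    const float bias = 1e-3f;  // nudge off the surface to avoid self-hits
    return {add(worldPos, scale(normal, bias)), normalize(refl)};
}
```

The small bias along the normal is the usual guard against self-intersection
when the launch points come out of a different pipeline than the one doing
the tracing.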

~~~
a_e_k
What's amusing to me is how closely this resembles the path that some of us
went down in film rendering.

If that history is any guide, the hybrid approach here is likely to be a
short-lived stop-gap before moving on to full ray tracing for everything. It
quickly becomes annoying to have to maintain the code paths and the data
structures for both rasterization and ray tracing. You have to keep the two
carefully in sync (e.g., tessellating to the same level, using the same LoD
levels, applying the same displacements) or you'll get weird artifacts. Even
then, numeric precision can still bite you; surface acne and ray trace bias
issues are worse than usual when the launch points for your rays are computed
by an entirely different rendering technique.

Things get a lot easier when you have just the one codepath and set of data
structures. Not to mention that ray tracing for primary visibility tends to be
fairly cheap since that's when the rays are the most coherent. A lot of the
ray tracing research back in the aughts dealt with efficient acceleration
structure traversal and intersection for tightly coherent bundles of rays.
People have been ray casting for a long time. I think the bigger deal is that
it's only in the past decade where we finally have enough compute power to
really start to handle the less coherent secondary rays at real-time rates.

~~~
sushisource
"Short" being a relative term though. Your points make sense but there's
simply no way raytracing can be as performant as current rasterization
techniques in geometry-heavy scenes at a same-or-better quality level,
resolution, and framerate until GPUs get substantially more powerful.

~~~
gmueckl
This is entirely the wrong way around: raytracing scales less than linearly
with scene complexity, while rasterization is mostly linear in the number of
visible primitives. It is not a simple complexity function in either case,
though.

~~~
geon
> raytracing scales less than linearly with the scene complexity

Not really true, is it? More like O(log n), or linear in the number of
pixels/samples.

~~~
gmueckl
I meant that in the sense that O(log n) < O(n). But when comparing raytracing
to rasterization you have the problem that the complexity depends mostly on
different variables for each algorithm.

~~~
kbwt
On the other hand, if we assume that every pixel is covered by a constant
number of triangles and a typical scene doesn't push more triangles than there
are pixels, rasterization suddenly looks a lot more like O(pixels) or O(1) per
pixel while ray tracing is still O(log triangles) per pixel accounting only
for direct rays.
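The O(log triangles) figure comes from descending a bounding volume
hierarchy: each ray walks one root-to-leaf path plus the siblings it rejects
along the way. A toy sketch (2D boxes, median split over a pre-sorted list;
the names are illustrative, not any real BVH library) makes the node count
visible:

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

// Minimal 2D AABB and a BVH built by median split over disjoint boxes.
struct AABB { float minX, minY, maxX, maxY; };

struct Node {
    AABB box;
    int leafIndex = -1;  // >= 0 marks a leaf
    std::unique_ptr<Node> left, right;
};

AABB merge(const AABB& a, const AABB& b) {
    return {std::min(a.minX, b.minX), std::min(a.minY, b.minY),
            std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY)};
}

std::unique_ptr<Node> build(std::vector<std::pair<AABB, int>>& prims,
                            int lo, int hi) {
    auto node = std::make_unique<Node>();
    if (hi - lo == 1) {
        node->box = prims[lo].first;
        node->leafIndex = prims[lo].second;
        return node;
    }
    int mid = (lo + hi) / 2;  // median split (boxes pre-sorted in x)
    node->left = build(prims, lo, mid);
    node->right = build(prims, mid, hi);
    node->box = merge(node->left->box, node->right->box);
    return node;
}

// A vertical "ray" at x = px; count every node touched during traversal.
int query(const Node* n, float px, int& visited, int& hitLeaf) {
    ++visited;
    if (px < n->box.minX || px > n->box.maxX) return 0;
    if (n->leafIndex >= 0) { hitLeaf = n->leafIndex; return 1; }
    return query(n->left.get(), px, visited, hitLeaf)
         + query(n->right.get(), px, visited, hitLeaf);
}
```

With 1024 disjoint unit boxes, a query touches roughly 2 * log2(1024) + 1
nodes rather than anything close to 1024.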

Then again, it's the constant factors that really matter. A couple of
arithmetic operations plus a single spatially coherent memory fetch (the
depth test) and a single write to the frame- or g-buffer, vs. 10+ fetches for
the kd-tree/BVH traversal followed by several full ray vs. triangle tests
(each one taking at least an order of magnitude more arithmetic ops than the
sign-based test in rasterization).
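That contrast in constant factors is easy to see side by side: a rasterizer's
sign-based edge-function test is a few multiply-adds per pixel, while even a
lean ray-triangle kernel (Moller-Trumbore here, as a representative example)
needs cross products, several dot products, and a divide:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

V3 vsub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float vdot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
V3 vcross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// Rasterization-style coverage: three edge functions, each a single
// multiply-add pair, followed by sign checks.
bool insideTriangle2D(float px, float py, float ax, float ay,
                      float bx, float by, float cx, float cy) {
    auto edge = [](float ox, float oy, float ex, float ey, float x, float y) {
        return (ex - ox) * (y - oy) - (ey - oy) * (x - ox);
    };
    float e0 = edge(ax, ay, bx, by, px, py);
    float e1 = edge(bx, by, cx, cy, px, py);
    float e2 = edge(cx, cy, ax, ay, px, py);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

// Moller-Trumbore ray/triangle test: two crosses, four dots, one divide --
// noticeably more arithmetic per candidate than the edge test above.
bool rayTriangle(V3 orig, V3 dir, V3 a, V3 b, V3 c, float& t) {
    const float eps = 1e-7f;
    V3 e1 = vsub(b, a), e2 = vsub(c, a);
    V3 p = vcross(dir, e2);
    float det = vdot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = vsub(orig, a);
    float u = vdot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = vcross(s, e1);
    float v = vdot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = vdot(e2, q) * inv;                    // hit distance along the ray
    return t > eps;
}
```

And the ray tracer pays that larger per-test cost for every candidate the
traversal hands it, not once per pixel.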

I guess when we are not yet able to use Voxel Cone Tracing in AAA games, will
the temporal filter or RNN based denoising really be enough to make ray traced
GI worth the cost?

~~~
gmueckl
[This became quite a rambling post, but I'm too lazy to shorten it. Sorry.]

For a proper response to this I'd need to dig up the literature that analyzed
the complexity in detail. I haven't had a reason to look at that yet. In
practice, we do not care that much for theoretical complexity of algorithms.
It does not tell you anything really useful. For certain problems, a grid
beats a BVH, and for others, a BVH beats a grid. Sometimes, switching the
heuristic used for BVH construction makes or breaks performance. Sometimes,
rasterization performs worse than ray tracing.

Voxel cone tracing is at its core a volume rendering technique. It requires a
brute force sampling of the generated reflectance volume at each grid cell
along the ray. The dynamic generation of the volumetric data is a three-
dimensional rasterization step that is not cheap. And the output is only
really good for surfaces with a certain amount of glossiness. I was a bit
surprised that Epic axed it from UE4 before release (implementing it takes a
lot of work!), but I think in the end the combined results from reflection
mapping and screen space reflections were of similar quality. It's a shame,
though, that Cyril Crassin's research work went essentially unused.

This is fundamentally different from path tracing with BVH traversal. A _lot_
of manpower and money has been sunk into the latter problem over the last couple
of decades. Ray intersection kernels like OptiX use every trick in the book to
run fast on the hardware they are designed for - and they are really, really
fast when you consider what they have to work with. Unfortunately, the
hardware manufacturers are hell bent on keeping a lot of their tricks secret.

Wavelet filter based denoising really takes the required input down to about 1
or 2 paths per pixel. I have had that demonstrated to me in real time on quite
complex scenes (one was San Miguel) - on currently available commodity
hardware, too. Otherwise I wouldn't believe it. These filters make realtime
path tracing work.

