
Ray Tracing Essentials Part 7: Denoising for Ray Tracing - ibobev
https://news.developer.nvidia.com/ray-tracing-essentials-part-7-denoising-for-ray-tracing/
======
swalsh
This might be a stupid question, because I don't really know what I'm talking
about.. But is there a way this could be used to improve physics engines? My
understanding is that a physics engine generally uses continuous collision
detection. I don't know if noise plays a role in casting for collisions, but
if you could cast fewer rays, and fill in the holes in the same way... that
seems like it could improve performance there too?

~~~
Swiffy0
Aren't the collision models already pretty simple as is? Simple shapes like
boxes, spheres and lines, for which the collisions are fast to calculate.
Don't know how it works if you have full blown meshes colliding though...
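
As a rough illustration of why those primitive shapes are fast to test, a
sphere-sphere overlap check reduces to a single squared-distance comparison.
A minimal sketch (not from any particular engine):

```python
def spheres_collide(c1, r1, c2, r2):
    """Two spheres overlap iff the distance between their centers
    is at most the sum of their radii."""
    dx, dy, dz = c1[0] - c2[0], c1[1] - c2[1], c1[2] - c2[2]
    # Compare squared quantities to avoid the sqrt entirely.
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2

print(spheres_collide((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True (overlap)
print(spheres_collide((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False (apart)
```

Mesh-on-mesh collision is the expensive case, which is why engines wrap meshes
in cheap bounding shapes like these and only test the meshes when the bounds
overlap.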

------
simias
The tech is cool but I still can't shake the feeling that, from a practical
perspective in the context of videogame graphics, it's a lot of work for a
meager difference in the end result compared to good old rasterizer shading.
Sure, you get more accurate shadows and reflections... But I'd gladly trade
that for more detailed models and larger, more complex environments, for
instance.

Maybe it'll become the new standard for real-time 3D graphics in the future,
but for the time being I file it next to HairWorks as an Nvidia gimmick whose
main purpose is to make the competition look worse in benchmarks because they
don't implement that API, when in practice the visual difference is fairly
subtle.

~~~
speedgoose
If you have static lights in a static world with pre-computed raytraced
lights, yes it doesn't change that much. But once you have a dynamic world and
dynamic lights, it's a game changer.

Minecraft with raytracing is a great example. It's so much better with
raytracing compared to simple shaders.

I also heard that it will make game development a lot smoother and faster in
the future, once most hardware is powerful enough. No need to build lightmaps,
no ambient occlusion hacks and similar, no reflections with approximate
results.

~~~
sprash
This[1] demo does pretty convincing dynamic lighting without the need for RTX,
relying just on regular shaders. To me Nvidia's RTX looks like a gimmick with
the sole purpose of enforcing vendor lock-in.

1.:
[https://www.youtube.com/watch?v=GtU2194C-D4](https://www.youtube.com/watch?v=GtU2194C-D4)

~~~
dahart
That demo is a ray marching shader with hardly any geometry, and the geometry
is procedural. You do understand this method can't be used to render video
games or films?

> RTX looks like a gimmick with the sole purpose of enforcing vendor lock-in.

That's like saying a floating point unit, or a Google TPU, or even an Intel
CPU is a gimmick to enforce vendor lock-in. If you want to do something faster
in software, you can make hardware for it. Don't buy hardware if you don't
need speed.

~~~
sprash
The most expensive step in ray marching is to find the distance which you will
get for free after one rasterization pass via the z-buffer. With the z-buffer
already present every effect used in this demo can be applied in the same way
to purely rasterized graphics with even less cost.
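
For context, the distance-finding step in question is the sphere-tracing loop
at the heart of an SDF ray marcher. A minimal sketch (single-sphere SDF,
arbitrary constants) of the iteration a z-buffer pre-pass would stand in for:

```python
def sdf(p):
    # Signed distance to a unit sphere centered at the origin.
    return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - 1.0

def ray_march(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """Step along the ray by the SDF value until the surface is reached.
    This loop is the per-pixel cost being discussed above."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe step: the SDF guarantees no closer surface
        if t > max_dist:
            break
    return None               # miss

# A ray from z = -3 aimed straight at the unit sphere hits at t = 2.
print(ray_march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))
```

Reading the hit distance from a z-buffer skips this loop, but only for the
primary (camera) rays; secondary rays for shadows or reflections would still
have to march.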

E.g. Doom Eternal has very elaborate dynamic lighting effects but uses 100%
forward rendering. RTX is completely pointless.

~~~
dahart
> The most expensive step in ray marching is to find the distance which you
> will get for free after one rasterization pass via the z-buffer.

Not true, you can't make that claim. The most expensive step depends entirely
on what you render. The material can easily be more expensive than the SDF.

> With the z-buffer already present every effect used in this demo can be
> applied in the same way to purely rasterized graphics with even less cost.

You're completely forgetting that shadows are a thing.

> RTX is completely pointless.

I interpret that to mean that either 1- you're mad at Nvidia for various
reasons not particularly related to RTX, or 2- you don't know what RTX
actually does.

~~~
sprash
> The material can easily be more expensive than the SDF.

But you still don't have to calculate the SDF in the first place, which means
one less computation step for your material. Also, scenes with high object
counts are not as slow as they are with SDFs.

> You're completely forgetting that shadows are a thing.

Those have been done via stencil buffers since Doom 3. And for non-point-like
sources you have a bag of tricks available via shaders to make them soft. No
RTX needed!

> you don't know what RTX actually does.

I know what RTX does. As long as the results are so subtle that on some
screenshot comparisons you can't even tell which one is with or without, RTX
is completely pointless, especially when it comes with a performance penalty.

~~~
dahart
Why are you arguing about SDFs? The demo you linked proves nothing about what
games do or what RTX does; in the context of this thread it’s irrelevant
whether z-buffers save you one iteration of ray marching.

Not needing a special shadow map or stencil volume or a big bag of tricky
tricks for area lights is one of the reasons to use ray tracing & RTX.

The main point of RTX is faster ray tracing. You’re complaining about the
screenshot marketing of specific games, not really demonstrating an
understanding of the tradeoffs of ray tracing. It’s fine if you don’t see any
advantage to having ray tracing in Control or Battlefield, and/or don’t like
the idea that some people see value in better visuals or easier development.
That means you don’t like it. I see a point even if you don’t.

------
dmitshur
What APIs are being used to perform ray tracing in these new games that
support it?

Edit: Found a starting point of an answer, covering Nvidia’s hardware at
least, at
[https://developer.nvidia.com/rtx](https://developer.nvidia.com/rtx):

> Ray tracing acceleration is leveraged by developers through NVIDIA OptiX,
> Microsoft DXR enhanced with NVIDIA ray tracing libraries, and the upcoming
> Vulkan ray tracing API.

------
seanalltogether
I wonder if the same concept of running the output image through a fast neural
network could be applied to typical rasterized game engines. For instance,
applying antialiasing, anisotropic filtering, or shadow blurring?

~~~
ATsch
This is more or less what DLSS is

------
touchpadder
I watched some videos comparing ray-traced and rasterised graphics and in real
life the benefit seems negligible. On top of that, rasterised graphics use all
kinds of smart techniques to improve performance. Ray tracing the whole
scene is a kind of brute-force, brick-and-mortar method. It should be used
selectively and not on each frame, IMO.

~~~
ihaveajob
From a fundamental point of view, the cost of ray tracing grows in proportion
to the number of pixels rendered times log(scene size), whereas rasterization
grows linearly with the scene size. So ray tracing enables much more complex
scenes at a fraction of the cost of rasterizing, which explains why it's being
used more and more in real-time rendering.

~~~
ccmonnett
For others interested in why ray tracing scales O(log N), this is covered in
an earlier video in the NVIDIA "Ray Tracing Essentials" series, specifically
this one:

[https://youtu.be/ynCxnR1i0QY?t=173](https://youtu.be/ynCxnR1i0QY?t=173)

It's timestamped to the discussion of why this is true, but the whole video
(like the series, IMO) is very informative, this one focused on "Rasterization
vs Ray Tracing".

~~~
sudosysgen
For someone who doesn't want to watch: the essence of it is that world space
is divided into a tree-like structure, which makes traversing the scene as
costly as traversing the tree, thus log(n) operations.
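
A toy sketch of that idea, assuming a simple 1-D bounding-interval hierarchy
(a hypothetical structure standing in for a real BVH over 3-D bounding boxes),
shows why a query touches only about log2(n) nodes:

```python
class Node:
    """A node in a toy bounding-volume hierarchy over 1-D 'objects'.
    Each node stores the interval bounding everything beneath it."""
    def __init__(self, lo, hi, left=None, right=None, obj=None):
        self.lo, self.hi = lo, hi
        self.left, self.right, self.obj = left, right, obj

def build(objects):
    # objects: sorted list of (lo, hi, name) intervals.
    if len(objects) == 1:
        lo, hi, name = objects[0]
        return Node(lo, hi, obj=name)
    mid = len(objects) // 2
    left, right = build(objects[:mid]), build(objects[mid:])
    return Node(min(left.lo, right.lo), max(left.hi, right.hi), left, right)

def query(node, x, visited):
    """Find the object whose interval contains x, skipping whole
    subtrees whose bounding interval misses x."""
    if x < node.lo or x > node.hi:
        return None                    # prune: bounds miss the query
    visited.append((node.lo, node.hi))
    if node.obj is not None:
        return node.obj
    return query(node.left, x, visited) or query(node.right, x, visited)

# 8 disjoint objects; a query descends log2(8) + 1 = 4 levels.
objs = [(i, i + 0.5, f"obj{i}") for i in range(8)]
root = build(objs)
visited = []
print(query(root, 3.25, visited), len(visited))  # obj3 4
```

Doubling the object count adds only one more level to descend, which is where
the log(n) per-ray cost comes from.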

------
fxtentacle
While this works OK-ish for static images, it is almost unusable for
animations. But for static images, blur+sharpen in Photoshop has worked OK-
ish, too, for many years.

So the practical benefit of this is negligible.

What we need is a denoising technique where the denoising artifacts move
convincingly with the features of the scene, so that you can use it for movies
and games.

Oh and for games, as long as this is NVIDIA-exclusive, developers have to
treat it as an optional add-on. For multiplayer games, that implies that ray
tracing may never show details (such as a reflection of an enemy) that would
give a strategic advantage.

Plus the real issue with contemporary game development is that consoles make a
majority of the revenue (due to less piracy) but they choke when you have 50k+
polygons on an animated character. And you'll be limited to 2 GB GPU RAM on
30% of your PC player base, because they use laptop GPUs.

In the end, then, you usually don't have enough detail to make ray-tracing
look good. It looks amazing for high-poly curved surfaces, such as those used
for offline-rendered cinema movies. But on a blocky realtime game model, ray-
tracing may also highlight artifacts.

Here's a ray-traced low poly bunny:
[https://i.imgur.com/MGotRC7.png](https://i.imgur.com/MGotRC7.png)

Notice how clearly you can see that it is low poly. In a rasterization engine,
one would "fix" this by blurring the edges with shaders and bending the
corners with normal maps.

So in a sense, ray-tracing is too honest to work well with current video game
models.

~~~
willis936
>Oh and for games, as long as this is NVIDIA-exclusive

It isn't. [0][1]

The APIs are intelligently laid out such that hardware-accelerated ray tracing
can be used through popular APIs regardless of GPU vendor, provided the vendor
implements their drivers correctly.

0\. [https://devblogs.microsoft.com/directx/announcing-microsoft-...](https://devblogs.microsoft.com/directx/announcing-microsoft-directx-raytracing/)

1\. [https://www.khronos.org/blog/ray-tracing-in-vulkan](https://www.khronos.org/blog/ray-tracing-in-vulkan)

~~~
fxtentacle
At the moment, there is no AMD GPU which supports the full ray-tracing spec.
So effectively, it is NVIDIA-only.

