
Ray Tracing Denoising
https://alain.xyz/blog/raytracing-denoising
======
jonas21
It's only briefly mentioned, but NVIDIA's recurrent denoising autoencoder is
pretty amazing.

[https://research.nvidia.com/publication/interactive-reconstr...](https://research.nvidia.com/publication/interactive-reconstruction-monte-carlo-image-sequences-using-recurrent-denoising)

Two Minute Papers did a nice explanation of it:

[https://www.youtube.com/watch?v=YjjTPV2pXY0](https://www.youtube.com/watch?v=YjjTPV2pXY0)

~~~
pastrami_panda
Yes, it's very nice. I think they went a bit too far with their deep learning
anti-aliasing solution DLSS, though. I read somewhere here that AMD potentially
had an algorithm that surpassed it in many metrics using no ML. Seems like
that thing about holding a hammer for too long, so everything starts looking
like nails.

~~~
missosoup
Also, DLSS was received rather poorly. The performance impact doesn't justify
the increase in perceived quality, and the artifacts when it fails are pretty
jarring.

~~~
wruza
I thought DLSS speeds up the rendering with RT ON.

~~~
missosoup
In all games to date, it's better both in terms of perf and aesthetics to run
with RTX disabled and TAA enabled. There's a bunch of blog posts and youtube
videos like [1] about this.

There's a bit of a meme going around that DLSS stands for 'doesn't look so
sharp'.

[1]
[https://www.youtube.com/watch?v=3DOGA2_GETQ](https://www.youtube.com/watch?v=3DOGA2_GETQ)

~~~
Erlich_Bachman
> and aesthetics to run with RTX disabled

Did you mean to say "run with DLSS disabled"? RTX contains all the new
features including ray tracing, which is not the same as DLSS. There are
several games where many people feel they are indeed much better aesthetics-
wise with RTX on, not off. At the very least it's a matter of preference, and
not "better" in general to have RTX off.

~~~
missosoup
I meant to say with RTX disabled. Turning on RTX (and let's call DLSS free at
this point) costs a performance penalty that's equivalent to meaningfully
upping resolution or AA settings without RTX. And people prefer the latter.
This might change as games start making better use of RTX functionality, but
this is where things are today.

For n=1 I've played Metro with RTX on lower settings and without RTX on higher
settings, and I prefer without. I think realtime raytracing came out a hw
generation too soon.

[https://au.ign.com/articles/2019/04/17/what-is-ray-tracing-a...](https://au.ign.com/articles/2019/04/17/what-is-ray-tracing-and-should-you-care)

[https://www.techradar.com/au/news/we-tested-ray-tracing-in-c...](https://www.techradar.com/au/news/we-tested-ray-tracing-in-control-on-pc-on-every-nvidia-rtx-super-card)

~~~
Erlich_Bachman
> And people prefer the latter

You prefer the latter. Many people prefer ray tracing on. In fact, the main
complaint online is about the cost of the cards; very few people seem to
contest that scenes where ray tracing is properly, artistically used have
superior aesthetic quality and superior realism. (Assuming that's what you
mean, since you keep using the term "RTX" and it's unclear whether you're
talking about ray tracing or DLSS etc.)

~~~
missosoup
And yet I cited 3 unrelated sources that all corroborate what I said with
detailed analysis, and you cite... opinion?

> assuming that's what you mean, since you keep using the term "RTX" and it's
> unclear what you talk about, whether it's ray tracing or DLSS etc.

I use the term the same way NVIDIA uses it. RTX is anything an RTX core
accelerates. Still confused? I think that might have been the intention of
their marketing team.

> very few people seem to contest that scenes where ray tracing is properly
> artistically used, have superior aesthetic quality

This is quantifiable bs. Ray tracing as a technique is superior to
rasterisation, but only with sufficient flops. And the current generation of
hardware does not yield that critical number. So we get 'ray tracing', but so
subdued and limited that existing approaches just flat out look better and
also perform better.

[https://www.youtube.com/watch?v=CuoER1DwYLY](https://www.youtube.com/watch?v=CuoER1DwYLY)

Or if you want a more approachable comparison of RTX vs. not-RTX, consider
Minecraft+RTX[1] vs. Minecraft+PTGI[2]

[1]
[https://www.youtube.com/watch?v=91kxRGeg9wQ](https://www.youtube.com/watch?v=91kxRGeg9wQ)

[2]
[https://www.youtube.com/watch?v=Y2WqX6Iu6cU](https://www.youtube.com/watch?v=Y2WqX6Iu6cU)

~~~
phonypc
You linked a performance analysis of RTX cards in Control, a general overview
of ray tracing and how it applies to gaming and some youtube video from almost
a year ago about DLSS not being implemented very well in one game (which has
since been much improved).

None of these corroborate the idea of RTX effects being aesthetically
inferior, or that this is a widely held opinion.

Consider watching these for an up-to-date take on the subject.

[https://www.youtube.com/watch?v=blbu0g9DAGA](https://www.youtube.com/watch?v=blbu0g9DAGA)

[https://www.youtube.com/watch?v=yG5NLl85pPo](https://www.youtube.com/watch?v=yG5NLl85pPo)

------
affyboi
Aw this didn't mention orthogonal array sampling, another sampling technique
aimed specifically at ensuring good distribution in higher dimensions:
[https://cs.dartmouth.edu/~wjarosz/publications/jarosz19ortho...](https://cs.dartmouth.edu/~wjarosz/publications/jarosz19orthogonal.html)

~~~
BubRoss
It actually does mention and link that paper.

~~~
affyboi
oh woops! didn't catch it

------
vokep
Can't wait until ray tracing is more mainstream. I know NVIDIA has their share
of uncool things but I really appreciate them making the RTX series, hopefully
we'll see more and more of this.

~~~
OCASM
With AMD bringing ray tracing to consoles and desktop next year, we won't have
to wait long.

------
criddell
At one time, writing a ray tracer was a rite of passage when getting into
computer graphics. I never did it, but this week I was trying to figure out
how to find the intersection point of a ray and an oblique cone, and all of
the resources I found were related to ray tracers.

I'm thinking that maybe I should actually write a ray tracer. Is that still a
worthwhile thing to do, or has the world moved on?

FWIW, I never solved the ray-oblique cone problem...

~~~
dahart
Still worthwhile in my opinion! But full disclosure, I happen to be working on
ray tracing problems.

Writing them is still really fun, it’s no less useful today for learning
things than it was 20 years ago. You can get amazing pictures with not very
much code, and the algorithms are really satisfying to understand & implement.

There are also still tons of low-hanging fruit. You’d think the easy problems
would be mined out by now, but they’re not. New developments are actively
happening with intersection primitives, color handling, sampling, direct
lighting, shadowing, the list goes on. If you want to do research, you don’t
have to dive that deep to find something unsolved that is solvable.

For an oblique cone intersection, I don’t know the right answer, but the
oblique cone is a skew transform of a regular cone, right? You might be able
to use a regular cone intersector, but pre-transform the ray by the inverse
skew transform?
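That inverse-transform trick can be sketched in a few lines. Here's a minimal pure-Python sketch, assuming the oblique cone is given as a linear map M applied to a canonical right cone (the function names and epsilon handling are mine, not from any particular renderer):

```python
import math

def intersect_canonical_cone(o, d, half_angle, height):
    # Smallest t > 0 where o + t*d hits x^2 + y^2 = (z * tan(half_angle))^2,
    # apex at the origin, clipped to 0 <= z <= height.
    k = math.tan(half_angle) ** 2
    a = d[0]*d[0] + d[1]*d[1] - k * d[2]*d[2]
    b = 2.0 * (o[0]*d[0] + o[1]*d[1] - k * o[2]*d[2])
    c = o[0]*o[0] + o[1]*o[1] - k * o[2]*o[2]
    if abs(a) < 1e-12:  # ray parallel to the cone surface: quadratic degenerates
        ts = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b*b - 4*a*c
        if disc < 0:
            return None
        ts = [(-b - math.sqrt(disc)) / (2*a), (-b + math.sqrt(disc)) / (2*a)]
    hits = [t for t in ts if t > 1e-9 and 0.0 <= o[2] + t*d[2] <= height]
    return min(hits) if hits else None

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def intersect_oblique_cone(o, d, M_inv, half_angle, height):
    # Oblique cone = canonical cone pushed through a linear map M (e.g. a shear).
    # Pre-transform the ray by M^-1 and reuse the canonical intersector.
    return intersect_canonical_cone(matvec(M_inv, o), matvec(M_inv, d),
                                    half_angle, height)
```

The reason this works without any fixup: a linear map satisfies M^-1(o + t*d) = M^-1 o + t * M^-1 d, so as long as you don't renormalize the transformed direction, the hit parameter t found in canonical space is the world-space t too.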

This resource is fantastic for finding intersection building blocks and code
examples:
[http://www.realtimerendering.com/intersections.html](http://www.realtimerendering.com/intersections.html)

------
Erlich_Bachman
For a practical demonstration of what denoising can do, I recommend Quake II
RTX. There is a dev mode with a bunch of rendering settings, many more than in
an average game. They were added because this game is more of a tech demo of
RTX at this point.

There is a switch where you can turn the denoising of the ray-traced output
on/off: it shows a tremendous difference, to the point where, looking at the
noisy image, it's hard to even imagine that the denoised version could be
extracted from it.

------
gnode
Something I've wondered is whether technology like this could eventually be
self-defeating for hardware manufacturers. Rather than the evolution of
graphics deriving from improved accuracy of the optical simulation fuelled by
advances in computational power, it may instead derive from optimising
subjective video quality, similarly to video codecs.

While accurately simulating optics is needfully computationally expensive and
gives special-purpose graphics hardware an advantage, it's not clear that
psychologically subjective high quality graphics (i.e. generating visuals
which are inaccurate but convincing to humans) has such a need.

~~~
gruez
>While accurately simulating optics is needfully computationally expensive and
gives special-purpose graphics hardware an advantage, it's not clear that
psychologically subjective high quality graphics (i.e. generating visuals
which are inaccurate but convincing to humans) has such a need.

What you're describing is rasterization, which has been the industry standard
(at least for games) for decades.

~~~
gnode
Techniques used to create realism with rasterization (e.g. normal mapping;
shadow mapping; screen-space anti-aliasing) are still simulations of optics,
just not entirely faithful ones.

Generating visuals with an autoencoder, albeit hinted by noisy physically-
based raytracing, is not an optical technique; detail is generated from a
visual statistical model, not an optical simulation.

~~~
magicalist
> _noisy physically-based raytracing...detail generated from a visual
> statistical model_

That is an optical simulation :)

~~~
gnode
The raytracing is, but you don't see the result of the raytracing, you see the
output of a neural network inventing detail based on higher definition
training data. It's like seeing some blurry dots through a microscope, then
drawing a sketch of detailed cells, based on your memory of pictures you've
seen. The microscope is an optical system, but the sketch is the result of
memory and style transfer, not simulation of optics. Hypothetically, you could
have no understanding of the behaviour of light in producing the detailed
sketch.

~~~
thfuran
I think the success of deep learning is quite unfortunate. There are a lot of
areas where "throw an ANN at it" has become a go-to even though they're
basically inscrutable blackboxes with minimal theoretical guarantees.

------
elihu
So far, what I've read about denoising is always in the context of doing image
post-processing, but it seems to me that some of these techniques could be
used just as well to identify areas of the image that the denoiser is most
uncertain about, so that you can trace more rays in those directions.
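That's essentially adaptive sampling driven by an uncertainty estimate. A toy 1-D sketch of the idea, where a hypothetical `shade()` stands in for tracing one ray (in a real renderer the uncertainty signal could come from the denoiser itself rather than raw sample variance):

```python
import random
import statistics

def shade(x, y):
    # Stand-in for tracing one ray through pixel (x, y); here just noise
    # whose spread grows with x, so some pixels are "harder" than others.
    return random.gauss(0.5, 0.02 + 0.3 * x)

def adaptive_render(width, base_spp, extra_budget):
    # Pass 1: a few samples everywhere, recording per-pixel variance.
    samples = {x: [shade(x, 0) for _ in range(base_spp)] for x in range(width)}
    var = {x: statistics.variance(samples[x]) for x in range(width)}
    # Pass 2: spend the extra ray budget on the highest-variance pixels --
    # the same move as asking the denoiser where it is least certain.
    for x in sorted(var, key=var.get, reverse=True)[:extra_budget]:
        samples[x].extend(shade(x, 0) for _ in range(base_spp))
    return {x: sum(s) / len(s) for x, s in samples.items()}
```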

~~~
cardiffspaceman
Sure, if you want to reduce error at some cost, you can use a noise metric to
identify where you should send more rays. The premise of denoising is that
it's cheaper and you've already spent enough time on the analytical
algorithms. Also, there is a chance that the noise/variance is due to a high
variance feature which (a) would have been fine to leave out and (b) causes a
cascade of "noise-driven" ray tracing.

------
stupidcar
What's fascinating to me about this is that it sounds like future renderers
may end up working very much like we think the brain does. There is a virtual
world, but very little raw data about that world is used directly, just a
small sample, and the rest of the image is filled in by a neural model that is
able to infer how the whole scene should look based on its a priori
understanding of how things like light and depth work.

------
pavlov
Great post but disappointingly few images. It would have been really
interesting to see these techniques applied on a standard scene with
before/after comparisons.

------
rbkillea
Shirley published a paper in 1991 showing that low discrepancy samplers worked
well in a ray-tracing context. So I wouldn't say that's particularly new.

~~~
BubRoss
I'm not sure what exactly you are responding to, but low discrepancy sampling
is not at all what this page is about. There have been a lot of papers on many
different techniques with various upsides and drawbacks when it comes to
reducing noise.

Comparing this overview to one of the most basic techniques that is used
everywhere and is a given is like reading an article on a modern car engine
and dismissing it because you saw someone light some gas on fire 30 years ago.

~~~
rbkillea
I'm sorry you took such offence to my comment. I guess I should have quoted
the statement that I was replying to within the overview section: "Recently,
the use of low discrepancy sampling [Jarosz et al. 2019] and tileable blue
noise [Benyoub 2019] has been used by Unity Technologies, Marmoset Toolbag and
NVIDIA in real time ray tracers."

~~~
BubRoss
Those papers are about specific low discrepancy sampling patterns, their ease
of use, their speed, their flexibility and the scalability of their properties
into higher dimensions. Papers written by knowledgeable researchers in 2019
were not used in all the things you listed.

I understand you know what low discrepancy samples are, but equating the very
first demonstration that random sampling wasn't ideal for ray tracing to the
state of the art that has evolved over three decades of research is ludicrous.

I don't know why you are desperate to be dismissive but it has no basis
whatsoever in reality.

~~~
rrss
whoa, calm down. The post makes it sound like low-discrepancy sampling is a
recent development, and 'rbkillea is pointing out that it is 30 years old.

That's it, no need to attribute malice anywhere. I think you are reading _way_
too much into 'rbkillea's comments.

~~~
BubRoss
There is nothing about low discrepancy sampling being a recent development.
This article is about recent research, and suggesting that anyone involved
would imply one of the most trivial aspects of rendering is somehow new is
total nonsense.

There is a recent paper about generalizing n-rooks sampling to higher
dimensions, which seems to have been misunderstood by you and others. It was
written by researchers who already have dozens of high-profile papers on many
different topics.
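For anyone following along, the classic n-rooks (Latin hypercube) pattern being discussed fits in a few lines. This is the textbook toy version, not the construction from the 2019 paper:

```python
import random

def n_rooks(n, dims=2):
    # Classic n-rooks / Latin hypercube sampling: each axis is split into n
    # strata and each stratum is used exactly once per axis, so the point set
    # projects to a perfect 1-D stratification on every dimension.
    perms = [random.sample(range(n), n) for _ in range(dims)]
    return [tuple((perms[d][i] + random.random()) / n for d in range(dims))
            for i in range(n)]
```

The "rooks" name comes from the 2-D case: like n rooks on a chessboard, no two samples share a row or a column.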

