
NVIDIA Marbles at Night real-time raytracing demo - waffle_ss
https://youtu.be/NgcYLIvlp_k
======
brundolf
Very cool, but what I'd like to get a straight answer on is whether or not
_all_ lights/materials in this scene are ray-traced. Some things like the
marbles' reflections obviously are, but there are lots of tricks that could be
played to minimize the amount of other raytracing in the scene. On the other
hand, if a scene with this high of a poly-count is 100% raytraced, that sets a
dramatically higher bar than what's come before (the best that I've seen
previously was the Quake 2 demo).

Edit: To be clear on what I mean, for those who haven't been following this
stuff, most "real-time raytracing" that's started to crop up in real games
over the past couple years has been limited to only a subset of
lights/materials. A particularly shiny shield, a campfire, etc. And then the
rest of the scene is rendered the traditional way. Even with Nvidia's
dedicated hardware we can't (yet) just slap raytracing on an entire game and
call it a day. But this demo makes it look like we may be much closer to that
goal than I'd thought.

~~~
pixel_fcker
It’s all path traced (with denoising and DLSS upsampling I think).

The poly count is much less impressive than the number of different materials
and number of light sources.

~~~
brundolf
> The poly count is much less impressive than the number of different
> materials and number of light sources.

Is it? The basic tracing algorithms I'm familiar with scale linearly with the
number of polygons (minus whatever can be culled out), and don't really care
about the number of materials (other than for memory usage I guess) or light
sources (because the rays come from the camera). I do know that for edge cases
like refraction some more advanced algorithms will do partial forward-tracing
from the light sources, so maybe that's what you're referring to. Though I
didn't notice any refraction in this demo (it wasn't a big focus, at least).

~~~
pixel_fcker
All raytracing algorithms rely on some spatial acceleration structure
(usually a BVH), which gets you O(log N) per ray instead of O(N). Increasing
poly count only gets interesting when you figure out how to compress truly
huge scenes into a limited memory footprint.
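The subtree-pruning that makes this O(log N) is easy to sketch. Here's a toy
Python BVH (my own illustration, nothing like a production GPU builder, which
would use a surface-area heuristic and a flattened node array):

```python
class AABB:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # min/max corners, (x, y, z) tuples

    def union(self, other):
        return AABB(tuple(map(min, self.lo, other.lo)),
                    tuple(map(max, self.hi, other.hi)))

    def hit(self, origin, inv_dir):
        # Slab test: intersect the ray's t-interval with each axis slab.
        tmin, tmax = 0.0, float("inf")
        for o, inv, lo, hi in zip(origin, inv_dir, self.lo, self.hi):
            t0, t1 = (lo - o) * inv, (hi - o) * inv
            if t0 > t1:
                t0, t1 = t1, t0
            tmin, tmax = max(tmin, t0), min(tmax, t1)
        return tmin <= tmax


class Node:
    def __init__(self, box, left=None, right=None, prim=None):
        self.box, self.left, self.right, self.prim = box, left, right, prim


def build(prims):
    # prims: list of (AABB, primitive_id). Median split on x-centroid --
    # a toy stand-in for a real surface-area-heuristic builder.
    if len(prims) == 1:
        return Node(prims[0][0], prim=prims[0][1])
    prims = sorted(prims, key=lambda p: p[0].lo[0] + p[0].hi[0])
    mid = len(prims) // 2
    left, right = build(prims[:mid]), build(prims[mid:])
    return Node(left.box.union(right.box), left, right)


def intersect(node, origin, inv_dir, hits):
    if not node.box.hit(origin, inv_dir):
        return  # whole subtree culled with one box test
    if node.prim is not None:
        hits.append(node.prim)
        return
    intersect(node.left, origin, inv_dir, hits)
    intersect(node.right, origin, inv_dir, hits)


# Demo: 8 unit boxes along x. A ray down the x-axis visits every leaf; a ray
# above the scene is rejected at the root with a single box test.
boxes = [(AABB((float(i), 0.0, 0.0), (float(i) + 1.0, 1.0, 1.0)), i)
         for i in range(8)]
root = build(boxes)
hits = []
intersect(root, (-1.0, 0.5, 0.5), (1.0, 1e9, 1e9), hits)  # dir = +x
print(sorted(hits))  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

A real traversal would also track the nearest hit distance and use it to prune
farther nodes; this sketch just collects every leaf whose bounds the ray
enters.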

Lots of different materials are difficult for GPUs because of divergence - if
every ray spawned from a surface scattering event hits a different material
you’ve lost all parallelism.
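A toy way to see the divergence problem (my own sketch, not anything from the
demo): GPU warps execute in lockstep, so every material switch between
neighboring rays is a potential serialization point. One classic mitigation is
to bucket rays by material before shading so adjacent threads run the same
shader:

```python
from collections import defaultdict

def count_material_switches(rays):
    # Proxy for divergence cost: each switch between consecutive rays is a
    # point where a lockstep warp would have to serialize shader execution.
    return sum(1 for a, b in zip(rays, rays[1:]) if a["mat"] != b["mat"])

def bucket_by_material(rays):
    # Regroup rays into one coherent batch per material before shading.
    buckets = defaultdict(list)
    for r in rays:
        buckets[r["mat"]].append(r)
    return [r for mat in sorted(buckets) for r in buckets[mat]]

rays = [{"id": i, "mat": i % 4} for i in range(16)]  # 4 materials, interleaved
print(count_material_switches(rays))                       # → 15 (worst case)
print(count_material_switches(bucket_by_material(rays)))   # → 3
```

Production path tracers do something similar in spirit on the GPU, sorting ray
batches by material or shader key so that neighboring threads execute the same
code.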

Lots of lights: evaluating them all for a given surface scattering event is
obviously an O(N) problem (no big deal for 10 lights, but what about
thousands?), so you want to choose some small subset of “important” lights to
consider. Making that choice is hard because the correct choice depends on the
product of the incident radiance from the light, the BSDF at the surface, and
the visibility function. There’s a lot of interesting research being done on
this at the moment (path guiding by learning the incident light field or its
product with the BSDF, new sampling techniques, spatio-directional
acceleration structures).
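A heavily simplified version of picking “important” lights, just to make the
idea concrete (my own toy sketch, not one of the real techniques): sample one
light per shading event with probability proportional to a cheap contribution
estimate, and return that probability so the estimator can divide it out and
stay unbiased. This deliberately ignores the BSDF and visibility terms, which
is exactly the hard part:

```python
import random

def contribution_estimate(point, light):
    # Crude importance: power over squared distance. A real system would
    # also need the BSDF and visibility factors mentioned above.
    d2 = sum((p - l) ** 2 for p, l in zip(point, light["pos"]))
    return light["power"] / max(d2, 1e-6)

def sample_light(point, lights, rng=random):
    weights = [contribution_estimate(point, l) for l in lights]
    total = sum(weights)
    u = rng.random() * total
    for light, w in zip(lights, weights):
        u -= w
        if u <= 0:
            return light, w / total  # (chosen light, its pdf)
    return lights[-1], weights[-1] / total

lights = [{"pos": (0.0, 5.0, 0.0), "power": 100.0},
          {"pos": (50.0, 5.0, 0.0), "power": 1.0}]
light, pdf = sample_light((0.0, 0.0, 0.0), lights)
# The nearby bright light is picked with overwhelming probability, and the
# returned pdf lets the estimator divide out the sampling bias.
```

Real solutions (light BVHs, reservoir-based resampling, learned guiding) build
far better estimates of that product, but the divide-by-pdf structure is the
same.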

~~~
brundolf
Got it, that makes sense

------
RedBeetDeadpool
I get that this is really meant to be a technical demo, but in a way it's more
art than most modern art.

------
air7
This is absolutely mind-blowing. Imagine this connected to a VR headset.

~~~
danudey
Given the seemingly low framerate already, rendering this twice over for VR
could be nausea-inducing. But it appears to be rendered at 4K, so 2x 1080p
should be more practical.
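The pixel arithmetic behind that, for anyone checking: two 1080p eye buffers
come to exactly half the pixels of one 4K frame.

```python
four_k = 3840 * 2160            # 8,294,400 pixels per frame
stereo_1080p = 2 * 1920 * 1080  # 4,147,200 pixels across both eyes
print(stereo_1080p / four_k)    # → 0.5
```

Though VR headsets typically demand 90 Hz or more, which eats back into that
margin.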

~~~
jonny_wonny
Might be possible with foveated rendering
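A sketch of that idea with hypothetical numbers: concentrate rays where the
eye is actually looking and let per-pixel sample counts fall off with
eccentricity, since the periphery can't resolve the detail anyway.

```python
import math

def samples_per_pixel(px, gaze, full=4, floor=1, radius=300.0):
    # Linear falloff from `full` spp at the gaze point down to `floor` spp
    # beyond `radius` pixels of eccentricity (all numbers are made up).
    dist = math.hypot(px[0] - gaze[0], px[1] - gaze[1])
    t = min(dist / radius, 1.0)
    return round(full + (floor - full) * t)

print(samples_per_pixel((960, 540), (960, 540)))  # → 4 at the fovea
print(samples_per_pixel((0, 0), (960, 540)))      # → 1 in the periphery
```

Combined with eye tracking, this kind of falloff can cut the total ray budget
substantially for the same perceived quality.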

