
Nvidia Reinvents Computer Graphics with Turing Architecture - tosh
https://nvidianews.nvidia.com/news/nvidia-reinvents-computer-graphics-with-turing-architecture
======
elihu
> The Turing architecture is armed with dedicated ray-tracing processors
> called RT Cores, which accelerate the computation of how light and sound
> travel in 3D environments at up to 10 GigaRays a second. Turing accelerates
> real-time ray tracing operations by up to 25x that of the previous Pascal
> generation, and GPU nodes can be used for final-frame rendering for film
> effects at more than 30x the speed of CPU nodes.

Interesting. If one were to ray trace a scene at 1920x1080 and 60 fps, tracing
100 rays per pixel (for soft shadows, global illumination,
reflection/refraction, and so on), that comes out to 12,441,600,000 rays per
second. So, "10 GigaRays a second" seems to be in about the right ballpark for
interactive, high-quality rendering.
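The back-of-the-envelope arithmetic above is just a product of the assumed
figures (1080p, 100 rays per pixel, 60 fps):

```python
# Ray budget for real-time ray tracing at the assumed settings:
# 1920x1080 resolution, 100 rays per pixel, 60 frames per second.
width, height = 1920, 1080
rays_per_pixel = 100
fps = 60

rays_per_second = width * height * rays_per_pixel * fps
print(f"{rays_per_second:,} rays/s")  # 12,441,600,000 -- ~12.4 GigaRays
```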

Of course, the computational cost per ray is hugely dependent on the scene and
the memory access patterns (which tend to be kind of random), and the number
of bounding boxes a particular ray has to traverse (could be a few or it could
be hundreds).
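For a sense of what "traversing a bounding box" costs, here's a minimal
sketch of the standard ray/AABB slab test, which is the check a ray runs
against each bounding box along its path through the acceleration structure.
This is a generic textbook version, not Nvidia's actual RT Core
implementation:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?

    origin   -- ray origin (x, y, z)
    inv_dir  -- precomputed 1/direction per axis (avoids divides per box)
    box_min, box_max -- the AABB's corner points (x, y, z)
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        # Intersect this axis's slab interval with the running interval.
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far  # all three slab intervals overlap => hit
```

A ray that has to run this test hundreds of times, against boxes scattered
across memory, is exactly the incoherent-access case mentioned above.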

One semi-popular rays-per-second benchmark used to be to place a Stanford
bunny in a particular location and generate random rays around the bunny,
testing each one for intersection. That way everyone is comparing their rays-
per-second scores in an apples-to-apples comparison. I don't know what the
most popular benchmark is these days.
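A benchmark in that spirit is easy to sketch: fire a batch of random rays at
a fixed object and time the intersection tests. A unit sphere stands in for
the bunny mesh here, since the mesh isn't included; the shape of the harness
is the point, not the numbers it produces:

```python
import random
import time

def ray_hits_sphere(origin, direction, radius=1.0):
    """Hit test against a sphere of given radius centered at the origin.

    Solves |o + t*d|^2 = r^2 for t; a hit needs a real root (disc >= 0)
    with the nearer root in front of the ray origin.
    """
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - 4.0 * a * c
    return disc >= 0.0 and (-b - disc ** 0.5) / (2.0 * a) > 0.0

def benchmark(n_rays=100_000, seed=42):
    """Return (rays per second, hit count) for n_rays random rays."""
    rng = random.Random(seed)
    rays = []
    for _ in range(n_rays):
        origin = tuple(rng.uniform(-5.0, 5.0) for _ in range(3))
        target = tuple(rng.uniform(-1.0, 1.0) for _ in range(3))
        direction = tuple(t - o for t, o in zip(target, origin))
        rays.append((origin, direction))
    start = time.perf_counter()
    hits = sum(ray_hits_sphere(o, d) for o, d in rays)
    elapsed = time.perf_counter() - start
    return n_rays / elapsed, hits
```

Pinning the object and the ray distribution is what makes the scores
apples-to-apples across implementations.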

I'm also curious if Nvidia has hardware acceleration for the process of
rebuilding the acceleration structure when objects in the scene move around.
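The cheap case for dynamic scenes is a "refit": when objects move but the
tree topology stays the same, you recompute each node's box bottom-up from
its children instead of rebuilding the whole tree. A minimal software sketch
(hypothetical node layout; whether Turing accelerates this step in hardware
isn't stated in the announcement):

```python
def union_aabb(a, b):
    """Smallest AABB enclosing two AABBs, each ((min x,y,z), (max x,y,z))."""
    (a_min, a_max), (b_min, b_max) = a, b
    return (tuple(map(min, a_min, b_min)), tuple(map(max, a_max, b_max)))

class Node:
    def __init__(self, box=None, left=None, right=None):
        self.box = box                       # ((min x,y,z), (max x,y,z))
        self.left, self.right = left, right  # both None for a leaf

def refit(node):
    """Recompute this subtree's boxes bottom-up; return the root box.

    Leaves are assumed to already hold their moved object's new box.
    """
    if node.left is None:
        return node.box
    node.box = union_aabb(refit(node.left), refit(node.right))
    return node.box
```

A refit degrades tree quality as objects drift apart, so real systems
periodically fall back to a full rebuild -- which is why hardware support
for that path would be interesting.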

------
sctb
Previous thread:
[https://news.ycombinator.com/item?id=17754445](https://news.ycombinator.com/item?id=17754445).

