
Real-Time Ray Tracing Demo - mariuz
https://www.unrealengine.com/en-US/blog/technology-sneak-peek-real-time-ray-tracing-with-unreal-engine
======
pasta
Some context: those real-time renders are done on a $150,000 rig.

Of course, when you are a movie production company this will be no problem. But
I think most gamers still have to wait a little.

~~~
polskibus
Apparently you can do this with 4x Titan V, which is $12k total.

citation:
[https://twitter.com/BrianKaris/status/977224198148931584](https://twitter.com/BrianKaris/status/977224198148931584)

~~~
ksec
Interesting. He mentioned it was not optimised at all.

We should be able to get close to 4x the transistor count of Volta @ TSMC 7nm,
sometime in 2020.

But 1080p at 24fps is only enough for movies; gaming needs much higher than
that.

~~~
jakear
It's not really enough for movies anymore. 4K is growing in popularity
dramatically, and I for one am entirely sick of 24 Hz.

------
jessewmc
> using the latest GPUs and machine learning techniques

I have no expertise in either graphics or machine learning, but is this just a
buzzword for its own sake, or is there actually any way that machine learning
is applicable to a mathematical problem like rendering?

Again, from my position of relative ignorance, it seems like graphics is
largely a well defined problem space that needs mathematical ingenuity and
bigger/better hardware thrown at it, rather than building a black box function
by training it on sample inputs and outputs (what would that even look like?)

~~~
meheleventyone
For raytracing it's been used to de-noise images:
[https://blogs.nvidia.com/blog/2017/05/10/ai-for-ray-
tracing/](https://blogs.nvidia.com/blog/2017/05/10/ai-for-ray-tracing/)

~~~
John_KZ
If you look closely, at around 0:22 it causes some artifacts in the rifle's
red light matrix thingy. It could be regular compression artifacts, but they
look like denoising artifacts to me.

~~~
faragon
Could it be just an intentional focus effect?

------
ramzyo
An anecdotal observation: there seem to be many ray tracing-related articles
and ShowHNs that make the front page. For someone who isn't in the computer
graphics space, what's the importance of improving existing ray tracers or the
novelty in writing one's own? Is writing one a particularly challenging thing
to do? Is improving existing ray tracers the equivalent of chip manufacturers
increasing CPU performance in the chip world? Just trying to understand this a
bit better.

~~~
SomeRando111
I think they're just cool.

First of all, "ray tracing" means at least two completely different things in
the world of computer graphics:

MEANING ONE: GRAPHICS HACK

The old meaning of "ray tracing" is a hack: you take your screen, you figure
out the projection into the world that represents the "camera", you trace out
from each final pixel in the image to see what geometry it hits first,
reflect, and allow for a certain amount of bounces before you decide on a
color for the pixel based on the surface material properties you interacted
with and where you got to a light.

The driving intuition here is that bouncing this "ray" off geometry has some
surface similarity to how light actually works in the real world (except
backwards: we don't shoot photons out of our eyes, of course). However, it's
not really any more "physically-based" or physics-accurate than traditional
rasterization, which is just loads of hacks: I'm a triangle, I'm angled like
so towards a light, I should be this color, except I'm bumpy, so let's simulate
that by pretending I'm actually angled a bit off that angle here...

This particular hack does one or two things really well that normal
rasterization does not: mirrored surfaces and translucent materials with a
high index of refraction. This is why 99% of ray tracer demos include a bunch
of mirrored spheres floating over a fountain on a marble floor. Ray tracing,
the hack, _does not_ solve many other problems that ignoring the physics of
lighting leaves out: caustics (those effects you see on, say, the table under
a glass of water when it catches the light), soft shadows, and ambient
occlusion (the tendency for things that are hidden by other things to not
receive as much illumination, the way the joint where the wall meets the
ceiling is a little darker than the wall).

Writing this type of ray tracer is actually really simple and easy, which is
part of the reason lots of people are interested in ray tracers, I'd wager.
It's an intro computer graphics project. It also provides ample opportunity to
hack on the performance because it's embarrassingly parallelizable and there
are all sorts of intuitive algorithmic enhancements available (octree
subdivision, etc.)
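
As a rough sketch of what that intro-project ray tracer looks like (the sphere
scene, pinhole camera and point light below are made up for illustration, and
the recursive reflection bounces are left out to keep it short):

```python
# One ray per pixel, nearest sphere hit, one shadow ray, simple Lambert shading.
import math

SPHERES = [  # (center, radius, rgb color)
    ((0.0, 0.0, -3.0), 1.0, (1.0, 0.2, 0.2)),
    ((1.5, 0.5, -4.0), 0.5, (0.2, 0.2, 1.0)),
]
LIGHT = (5.0, 5.0, 0.0)  # point light, assumed to lie outside all geometry

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    length = math.sqrt(dot(a, a))
    return tuple(x / length for x in a)

def hit_sphere(orig, d, center, radius):
    """Nearest positive ray parameter t for a normalized direction d, or None."""
    oc = sub(orig, center)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def trace(orig, d):
    """Find the closest hit and shade it with a single shadow ray to the light."""
    best = None
    for center, radius, color in SPHERES:
        t = hit_sphere(orig, d, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, color)
    if best is None:
        return (0.1, 0.1, 0.1)  # background
    t, center, color = best
    p = tuple(o + t * di for o, di in zip(orig, d))
    n = norm(sub(p, center))
    to_light = norm(sub(LIGHT, p))
    # Shadow ray: is any geometry between the hit point and the light?
    shadowed = any(hit_sphere(p, to_light, c, r) for c, r, _ in SPHERES)
    diffuse = 0.0 if shadowed else max(dot(n, to_light), 0.0)
    return tuple(ch * (0.1 + 0.9 * diffuse) for ch in color)

# The projection from screen pixels into the world: a pinhole camera at the origin.
W, H = 64, 48
image = [trace((0.0, 0.0, 0.0),
               norm(((x + 0.5) / W - 0.5, 0.5 - (y + 0.5) / H, -1.0)))
         for y in range(H) for x in range(W)]
```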

A lot of posts about this type of ray tracer get onto HN because the two
things it is good at are cool, and given how well the parallelization
opportunities fit with modern hardware development, it is moving into the
range of "shiftable from offline to real time", which is cool. The second big
reason there's always random ray tracer stuff on HN is that a ray tracer has
one of the best cool-effect-to-code-volume ratios, so it shows up in intros
and demos all the time. It's a great little project where you get something
super neat super fast, but you have loads of runway to make things better with
satisfying increments.

MEANING TWO: ACTUAL GRAPHICS SOLUTION

The "real way" to determine the appropriate illumination for every pixel on a
rendered image is to start with all the lights, characterize the photons they
emit, emit a zillion, trace them around the room bouncing off of objects, and
see which ones ultimately hit the camera plane and write those colors. This
will produce a photorealistic (or heat-camera realistic) rendering of a scene.
You are simulating the actual physics of illumination here.

Of course, this is computationally insane, so there are approaches to
dramatically reduce the computation at the expense of accuracy. The family
that concerns us here are "path tracing" algorithms (e.g. "Metropolis Light
Transport" is a monte carlo approach to integrating the "rendering equation",
the equation that determines how things should look under lighting
conditions). There are other ways to start with the rendering equation and
reduce the computational complexity (e.g. radiosity) but this is the one that
is sometimes called "ray-tracing".
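
For reference, the rendering equation those methods are trying to integrate is
usually written (in its hemispherical form) as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i
```

Outgoing radiance at a point is what the point emits plus all incoming
radiance weighted by the material's BRDF f_r and the cosine of the incidence
angle; path tracers estimate that integral with Monte Carlo samples of the
incoming directions.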

These methods handle all the things I mentioned that hack ray tracing does not handle:
soft shadows, ambient occlusion, caustics, etc.

I think this kind of stuff gets to the top of HN because it's awesome? My more
serious answer might be that a lot of real-time rendering code is layered with
hack upon hack to handle adding bits of detail to a rendered scene to make it
more photorealistic. There's nothing particularly physical about bump maps,
environment maps, or a zillion other real time shader tricks. They're all cool
but they're hacks. The idea of being able to make a path-tracing-type renderer
that runs at realtime speeds would mean that we wouldn't have to spend so much
time hacking in things like prebaked shadow volumes or what have you in any
realtime renderer (e.g. a game engine) and consequently our engines might be
much more generic, instead of carefully tuned shader-by-shader to the specific
types of effects we want to create. I'm kind of doubtful of this (I don't work
in the games or computer graphics industries, though, so I'm extremely not an
expert) because the "look" now is so much a look: people like bloom and lens
flare. Film doesn't look "real", but when you make a movie that looks more
accurate, people complain it looks "fake": you need the flicker.

~~~
eutectic
There's nothing wrong with starting the paths from the eye; the laws of
physics are completely time-symmetric, so you can get equivalent results
either way.

~~~
kybernetikos
It's been a long time since I studied this, and I've forgotten most of it, but
I think the key problem is that you don't know in advance what the ray density
should be at the eye. When you start the rays at the eye, you typically emit a
fairly even density of rays and see what they do in the scene, but this won't
give you correct results for some scenes, where paths emitted from light
sources should end up more concentrated in some parts of the eye/image than in
others.

There are other hacks too: the typical start-at-the-eye technique doesn't
bounce the paths around the scene until they hit a light; it just does a
couple of bounces, and at each surface it takes an estimate of how lit it is
by looking at the lights that are visible from the intersection. I guess you
_could_ simulate the rays bouncing until they hit a light source, but at that
point, why not just do the simulation from the light source as in the physical
world?

What you really want to do at a surface intersection when you're going from
the eye is an integration over all the possible paths light could take to hit
that point and reflect along the path you took from the eye. For specular
highlights and perfect reflections, this is reasonable, but in other cases
there are an infeasibly large number of possible incident paths that could
come out along the path you took.
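
A sketch of that "look at the lights visible from the intersection" step
(often called next event estimation); the scene/light interfaces and the
Lambertian BRDF below are illustrative assumptions, not any particular API:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def direct_lighting(scene, point, normal, lights, albedo):
    """Sum the contribution of every light that is visible from `point`."""
    total = (0.0, 0.0, 0.0)
    for light in lights:
        to_light = sub(light.position, point)          # assumed light attribute
        dist = math.sqrt(dot(to_light, to_light))
        w_i = tuple(c / dist for c in to_light)
        # Shadow ray: skip this light if geometry blocks the segment to it.
        if scene.occluded(point, light.position):      # assumed scene query
            continue
        cos_theta = max(dot(normal, w_i), 0.0)
        falloff = light.intensity / (dist * dist)      # inverse-square falloff
        brdf = tuple(a / math.pi for a in albedo)      # Lambertian BRDF
        total = tuple(t + b * falloff * cos_theta for t, b in zip(total, brdf))
    return total
```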

------
sprash
In comparison to rasterization, ray tracing does a perfectly good job of doing
correct reflections.

However, from a psychovisual standpoint, correct reflections are the least
important thing to look for. Just use a dedicated cube map for the scene with
environment mapping, and focus on the stuff that really matters and that
everybody immediately notices: high-resolution textures, proper anisotropic
filtering, and "HDR" lighting.
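
For the curious, the core of that environment-mapping trick is just a
reflection-vector lookup into the prebaked cube map; `sample_cubemap` below
stands in for whatever texture fetch the engine actually provides:

```python
def reflect(view, normal):
    """Mirror the incoming view direction about the surface normal."""
    d = 2.0 * sum(v * n for v, n in zip(view, normal))
    return tuple(v - d * n for v, n in zip(view, normal))

def environment_reflection(view, normal, sample_cubemap):
    r = reflect(view, normal)
    # The cube map was rendered once from a single point in the scene, so the
    # result is only correct for that point -- the "hack" part of the trick.
    return sample_cubemap(r)
```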

~~~
dragontamer
Ray traced shadows are noticeably better than rasterized shadows.

In practice, ray-tracing will never fully take over rasterization. At best, a
standardized ray-tracing API will perform a few calculations which are
superior in the ray-tracing style.

I.e. mirrors, ambient occlusion, and shadow generation. Everything else
(texturing, among other things) will likely remain rasterization.

~~~
elihu
Why wouldn't some form of ray tracing or path tracing eventually take over
once we have the necessary hardware and software and standard APIs to do it at
high resolution and high framerates? What features does polygon rasterization
give us that can't be replicated any other way?

~~~
Retra
If you can do ray tracing fast, you can do rasterization faster. And so if the
ray tracing is not critical, you're wasting time computing it.

~~~
elihu
I don't think it's a waste of time. Ray tracing puts you on the path to
real-time global illumination, which I think everyone is going to want if it's in
reach. It also makes a lot of non-graphics tasks easier, like determining
line-of-sight visibility between arbitrary points in the scene or doing
collision detection.

Besides, computational power isn't something we necessarily need to conserve
for its own sake. We stopped using Gouraud shading when the hardware became
fast enough that we didn't need it anymore. Polygon rasterization will stick
around for a long time I think, but I don't think it will always be an
essential part of everyone's real-time rendering technology stack.

[http://www.paulgraham.com/hundred.html](http://www.paulgraham.com/hundred.html)

> I can already tell you what's going to happen to all those extra cycles that
> faster hardware is going to give us in the next hundred years. They're
> nearly all going to be wasted.

> I learned to program when computer power was scarce. I can remember taking
> all the spaces out of my Basic programs so they would fit into the memory of
> a 4K TRS-80. The thought of all this stupendously inefficient software
> burning up cycles doing the same thing over and over seems kind of gross to
> me. But I think my intuitions here are wrong. I'm like someone who grew up
> poor, and can't bear to spend money even for something important, like going
> to the doctor.

~~~
Retra
"Real time rendering" mostly means games, and performance is a very very easy
bottleneck to hit for games. Because it is easy to make things more fun and
interesting by scaling them up: more entities, more interactions, more
options, etc.

So I believe you are quite wrong. It is trivial to scale up a simulation to
the point where you're at a performance bottleneck. The only reason we don't
scale things up today is because our computers are far too slow to handle it.

And we stopped using Gouraud shading because it is horrendously ugly. We'd
still be using it if it looked half as good as rasterization does.

~~~
elihu
One of the surprising characteristics of ray tracers is that if you've got a
reasonably efficient acceleration structure, they're relatively insensitive to
scene complexity. Say you're rendering ten thousand triangles at a certain
framerate, and then you bump the resolution of your meshes up so that you've
got a hundred thousand triangles. The framerate drops a little bit, like maybe
20 percent or so, but it's nowhere near a linear slowdown.

You can always slow the rendering times way down by adding a lot of reflection
and refraction, but those are things you can't really do in a general and
physically-correct way in a polygon rasterizer.
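
A sketch of why that works: with a bounding volume hierarchy, a ray can reject
whole subtrees whose boxes it misses, so each ray visits roughly O(log n)
nodes instead of testing all n triangles. The node layout and the
`intersect_triangle` helper below are assumptions, not any particular library:

```python
def hit_aabb(orig, inv_dir, box_min, box_max):
    """Slab test: does the ray hit this axis-aligned bounding box?"""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(orig, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far

def closest_hit(root, orig, direction, intersect_triangle):
    inv_dir = tuple(1.0 / d if d != 0.0 else 1e30 for d in direction)
    best = None
    stack = [root]
    while stack:
        node = stack.pop()
        if not hit_aabb(orig, inv_dir, node.box_min, node.box_max):
            continue                        # the entire subtree is skipped here
        if node.is_leaf:
            for tri in node.triangles:
                t = intersect_triangle(orig, direction, tri)
                if t is not None and (best is None or t < best):
                    best = t
        else:
            stack.extend((node.left, node.right))
    return best
```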

------
electricslpnsld
> However, for live-action film and architectural visualization projects where
> photorealism is valued over performance, ray tracing gives an extra level of
> fidelity that is difficult, if not impossible, to achieve with
> rasterization.

Don't essentially all modern production renderers use path tracing? Mental Ray
was the last 'major' renderer I know of that used ray tracing, and it was
discontinued a few years back.

~~~
petermcneeley
These terms tend to be used interchangeably. In the context of your original
quote I think path tracing is the more accurate term. Even Arnold Renderer is
described as a raytracer. (
[https://en.wikipedia.org/wiki/Arnold_(software)](https://en.wikipedia.org/wiki/Arnold_\(software\))
)

~~~
electricslpnsld
Poking around there does seem to be a fair amount of overloading of the term!
Even Arnold's website jumps between using ray tracer [0] and path tracer [1].

[0] [https://www.solidangle.com/arnold/](https://www.solidangle.com/arnold/)
[1] [https://www.solidangle.com/about/](https://www.solidangle.com/about/)

~~~
dahart
I like to see "path tracing" as a higher level rendering algorithm built on
top of ray tracing, and "ray tracing" as a visibility query in a scene using a
straight line. Ray tracing is more of a computational primitive that can be
used for direct lighting, or for reflections, or for global illumination, or
if you want, even for simulating audio.

Note that "path tracing" is also overloaded, some people use it to mean any
ray tracing that does global illumination, but the term also refers to a
stricter definition which is an algorithm that integrates lighting in
n-dimensional path space. A path tracer under that latter definition doesn't
do any branching along a path, instead it builds complete paths from camera to
light to calculate the contribution to a pixel. This is where the name came
from.
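
As a sketch of that stricter definition, each sample extends one unbranched
path, vertex by vertex, from the camera; the `scene.intersect`, `hit.emission`,
`hit.sample_bsdf` and `hit.position` interfaces below are illustrative
assumptions:

```python
import random

def trace_path(scene, ray_origin, ray_dir, max_bounces=8):
    radiance = (0.0, 0.0, 0.0)
    throughput = (1.0, 1.0, 1.0)
    for _ in range(max_bounces):
        hit = scene.intersect(ray_origin, ray_dir)
        if hit is None:
            break
        if hit.emission is not None:            # the path reached a light
            radiance = tuple(r + t * e
                             for r, t, e in zip(radiance, throughput, hit.emission))
        # Choose ONE continuation direction; weight = BSDF * cos(theta) / pdf.
        new_dir, weight = hit.sample_bsdf(ray_dir)
        throughput = tuple(t * w for t, w in zip(throughput, weight))
        # Russian roulette: probabilistically terminate low-energy paths.
        p_continue = min(max(throughput), 0.95)
        if p_continue <= 0.0 or random.random() > p_continue:
            break
        throughput = tuple(t / p_continue for t in throughput)
        ray_origin, ray_dir = hit.position, new_dir
    return radiance
```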

------
sounds
> The Reflections GDC demo ran on a DGX Station equipped with four Volta GPUs
> using NVIDIA RTX technology. For now, you will need similar horsepower to
> leverage real-time ray tracing. In the future, we can rely on Moore’s law to
> bring down the cost and complexity of how this technology is packaged and
> delivered.

While GPUs are getting the full benefits of Dennard scaling, much more so than
Intel CPUs a la Gordon Moore, everyone is feeling the process pain of 10nm,
7nm, and beyond.

It stands to reason that the major advances in real time ray tracing will be
evenly split between more efficient designs and smaller transistor sizes.

It seems foolish in 2018 to "rely on Moore's law" to bring a 4 x Volta GPU
requirement to something more palatable for the mass market.

~~~
namibj
Who says that we can't just drop the cost of making large 14nm FinFET dies and
get some sort of stacking, possibly with low-tech, i.e. single-patterning,
silicon interposers? The cooling/power density is not a fundamental problem as
long as we stay under about 20-50W/cm^2 (200-500mW/mm^2). At that heat flux
we'd need 1-5mm spacing between the hot side and the other side (I do not know
of experiments that cover two closely spaced hot surfaces, as you would have
in slightly spaced, vertically stacked dies), and we would just pump coolant
with a boiling point 20K under the die's surface temperature through this gap.
On the other side we'd get a mixture of vapor and liquid coolant, and the
vapor could, at least at temperatures that a human can endure, be condensed
with a small radiator and a fan, or even a larger, fanless radiator.

The primary reason for such exotic packaging/stacking would be that chips need
way less power to transmit over such short distances compared to longer-reach
systems: in a comparison between DDR4 and HMC [0], HMC uses about a third the
power of DDR4, and that at much higher clocks. Also compare the AMD Zen IFOP
intra-package links, which use 2 pJ/bit for the transceiver [1].

This should be possible to extend to an edge-mounted package (where the
processor dies have contacts on one or two edges, and are soldered at a right
angle to a suitable interconnection/mounting plate), though apparently no one
has really tried it. The downside would be that one does not have as many pins
on the processor dies as is currently normal, but I doubt that would be a
significant problem, considering the integration benefits.

A possible alternative, in case of significantly improved yield, would be
wafer-scale integration: using the space outside of the normal reticle limit
for routing larger-feature-size mesh interconnects and handling the necessary
routing around this later. That would result in non-trivial software, though,
as it would have to adapt to a given processor's routing layout. TSMC is
apparently able to manufacture, with reasonable yield, individual 'cells' of
slightly over 800mm^2 on 300mm wafers. This comes out at about 90 dies per
wafer, of which some will be dead. With a power density of 50W/cm^2 that would
be 3600W, but the upper limit, with significant efforts (ways to increase the
flow velocity on the surface) to increase the maximum cooling capacity, would
be around 100-300W/cm^2, which would put this at the TDP of an average 42U
rack.

The weird part would probably be some kind of reverse flip-chip packaging,
where contact modules would be bonded to the functioning dies on the wafer and
later fitted with some sort of cable, or they could be miniature optical
modules. These would also need to provide the power. Cooling would be done on
the back side, but the front would likely need some structural support to hold
the static/dynamic pressure of the coolant without flexing/breaking.

[0]: [https://www.semiwiki.com/forum/attachments/content/attachmen...](https://www.semiwiki.com/forum/attachments/content/attachments/8472d1378287921-hmc-performances.jpg)
from [https://www.semiwiki.com/forum/content/2731-did-you-miss-cad...](https://www.semiwiki.com/forum/content/2731-did-you-miss-cadence-s-memcon-q.html)

[1]: [https://en.wikichip.org/wiki/amd/infinity_fabric#IFOP](https://en.wikichip.org/wiki/amd/infinity_fabric#IFOP)

------
shmerl
_> Now, leveraging the efforts of several technology companies, support for
Microsoft's DXR API and ray-traced area light shadows will be available in the
main branch of Unreal Engine 4 by the end of 2018._

MS only? No, thanks.

~~~
naikrovek
How many Nvidia-only features do you use? Nvidia today is just as evil as MS
was 20 years ago; they just don't have the market share Microsoft did, so no
one notices how unbearably evil and unnecessarily proprietary and exclusionary
they are. Just like MS was.

~~~
shmerl
I don't, I have AMD :) And I agree, Nvidia lock-in is as bad as others.

------
mmcconnell1618
Epic is also providing the rendering engine for the new Millennium Falcon ride
at Disney's Star Wars expansion. This real-time ray tracing demo was probably
part of the development process for the ride.

[https://wdwnt.com/2018/03/first-on-ride-image-revealed-of-
mi...](https://wdwnt.com/2018/03/first-on-ride-image-revealed-of-millennium-
falcon-attraction-for-star-wars-galaxys-edge-tech-details-discussed/)

------
hyperpallium
How do you do diffuse lighting with ray tracing? Wouldn't you need an
unreasonable number of rays to model it accurately?

~~~
dahart
Depends. The diffuse direct lighting from a point light only requires one ray,
same as with rasterization. Diffuse bounce lighting may need more rays. Area
lights may need more rays.

Since both of those things are lower frequency signals, they often don’t need
to be re-computed independently for every pixel, so people have various
algorithms and caching schemes that drastically reduce the number of rays
needed. All of these names do somewhat similar kinds of caching of bounce
lighting: photon mapping, irradiance caching, light maps, light baking.
Additionally, there are tricks to get decent estimates of shadows from area
lights in fewer samples. The demo-scene ray marching guys usually do a great
job with one ray (not ray traced, but compatible with ray tracing and very
fast.)

Sometimes though, people do use an unreasonable number of rays to render
global illumination, area lights & diffuse lighting.
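
To make those ray counts concrete: a point light needs a single shadow ray per
shading point, while an area light is sampled several times to estimate how
much of it is visible (soft shadows). The `scene.occluded` and
`area_light.sample_point` calls below are assumed interfaces, not any
particular API:

```python
import random

def point_light_visibility(scene, point, light_pos):
    """One shadow ray: the point is either fully lit or fully shadowed."""
    return 0.0 if scene.occluded(point, light_pos) else 1.0

def area_light_visibility(scene, point, area_light, n_samples=16):
    """Average visibility over random points on the light's surface."""
    visible = 0
    for _ in range(n_samples):
        sample = area_light.sample_point(random.random(), random.random())
        if not scene.occluded(point, sample):
            visible += 1
    return visible / n_samples
```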

------
peterchon
I lost it at the starwars elevator music.

~~~
always_good
I always chuckled at how imperial armor is so plasticky, and they've only
doubled down on it since the original trilogy.

This ray tracing demo just about caricatures it.

~~~
bufferoverflow
Some modern armor is made out of UHMWPE plastic.

[https://en.wikipedia.org/wiki/Ultra-high-molecular-
weight_po...](https://en.wikipedia.org/wiki/Ultra-high-molecular-
weight_polyethylene)

Armor tested:

[https://www.youtube.com/watch?v=UsjOV7tcGRI](https://www.youtube.com/watch?v=UsjOV7tcGRI)

------
bitL
NVidia, please don't tease us, release Volta-based gaming GPUs already!

------
elihu
There are a lot of inaccuracies in the article.

> "Rasterization starts with a particular pixel and asks, 'What color should
> this pixel be?' "

Nope. Traditional polygon rasterization just works its way through a drawlist
and rasterizes each polygon into a frame buffer (checking against a depth
buffer so that things far away don't get drawn in front of things that are
nearer). It's only at the end of the drawing process that we can inquire what
color a particular pixel is.
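
Roughly, that loop looks like the sketch below, with triangle setup condensed
into an assumed `rasterize_triangle` helper that yields candidate pixels:

```python
# Walk the draw list polygon by polygon, and only write a pixel when it is
# nearer than what the depth buffer already holds.
def render(draw_list, width, height, rasterize_triangle):
    depth_buffer = [[float("inf")] * width for _ in range(height)]
    frame_buffer = [[(0, 0, 0)] * width for _ in range(height)]
    for triangle in draw_list:                 # draw-list order, not depth order
        for x, y, depth, color in rasterize_triangle(triangle, width, height):
            if depth < depth_buffer[y][x]:     # depth test: nearer fragment wins
                depth_buffer[y][x] = depth
                frame_buffer[y][x] = color
    # Only now, after the whole draw list has been processed, is the final
    # color of any particular pixel known.
    return frame_buffer
```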

> "Ray tracing works from the viewing angle and light sources and asks, 'What
> is the light doing?'"

No, ray tracing traces rays out from the eye and figures out what they hit.
Path tracing traces rays out from the eye and samples a bunch of random rays
when they hit something, to simulate global illumination, soft shadows, and
similar effects. Photon mapping literally simulates photons by tracing rays
randomly out from the light sources and depositing their hit locations in a
data structure. (That's probably the most literal match for the description of
figuring out "what the light is doing".)

> "Ray tracing works by tracing the path of a light ray as it bounces around a
> scene. Each time a ray bounces, it mimics real-life light by depositing
> color from earlier objects it has struck, and also loses intensity. This
> depositing of color makes the sharp reflections and subtle, realistic color
> variations that, for certain types of materials and effects, can only be
> achieved with ray tracing."

This is a pretty accurate description of photon mapping. Ray tracing and path
tracing work in reverse, by tracing rays from the eye. Since path tracing and
photon mapping fundamentally work by tracing rays (i.e. doing ray-intersection
tests between rays and objects), it's not completely unreasonable to refer to
a path tracer or photon mapping renderer as a ray tracer, but not all ray
tracers work the way the article describes.

> "Because it mimics light’s real behavior, ray tracing also excels at
> producing area shadows and ambient occlusion. "

You can do area shadows and ambient occlusion in a ray tracer if you're
willing to do a lot of random ray sampling, but it's not inherently a
characteristic of ray tracing.

> "Until now, ray tracing was implemented only in offline rendering due to its
> intense computational demands. Scenes computed with ray tracing can take
> many minutes to hours to compute a single frame, twenty-four of which are
> needed to fill a single second of a film animation."

This may be an accurate description of path tracing with enough samples to
drive the noise down to a level that's imperceptible, but ordinary ray tracing
of non-pathological scenes doesn't usually take that long these days on modern
hardware.

People have been doing real-time interactive ray tracing for a long time (I
recall running Outbound on a Core 2 Duo laptop about 10 years ago. Framerate
was choppy and resolution wasn't great, but it worked). Brigade is a more
recent real-time path-tracing rendering engine, and it's been around for years
as well.

That isn't to say that their demo isn't an impressive technical achievement.

~~~
namibj
Once we have sufficiently good HMDs (maybe Oculus 5?), we could hook one up to
a supercomputer running a version of Luxrender, as that should be capable of
providing the necessary physical realism to get over the uncanny valley. Of
course this is not feasible for constant usage, but more of a technology demo.
I believe we have the necessary technology to get this working within a year,
provided the Luxrender guys get some engineers with experience in clusterizing
existing software. This could be the visual part of a holodeck.

~~~
skohan
HMDs are a pretty good solution to VR, but there are still a number of
unsolved problems which can't be addressed by higher pixel density, faster
refresh, and more powerful GPUs.

For instance, how do you account for the focal adjustments your eye makes when
you look at objects at different distances in a virtual scene when you are
staring at a screen only a short distance from your face?

------
mtgx
Will they support Vulkan/Khronos' alternative, too?

~~~
bitL
Is there any? I thought RTX was just a technology demo by NVidia right now, as
is DXR...

~~~
corysama
Nvidia: [http://on-
demand.gputechconf.com/gtc/2018/presentation/s8521...](http://on-
demand.gputechconf.com/gtc/2018/presentation/s8521-advanced-graphics-
extensions-for-vulkan.pdf)

AMD: [https://gpuopen.com/announcing-real-time-ray-
tracing/](https://gpuopen.com/announcing-real-time-ray-tracing/)

[https://gpuopen.com/gdc-2018-presentation-real-time-ray-
trac...](https://gpuopen.com/gdc-2018-presentation-real-time-ray-tracing-
techniques-integration-existing-renderers/)

