
Real-Time Global Illumination by Precomputed Local Reconstruction - ingve
https://users.aalto.fi/~silvena4/Projects/RTGI/index.html
======
m12k
As far as I can tell this is mostly the same capability as the Enlighten
middleware used by e.g. Unity and Frostbite (i.e. recent Battlefield titles)
with some small improvements on top (e.g. the opaque occluder approximation at
the end). A demo for reference:
[https://www.youtube.com/watch?v=ARorKHRTI80](https://www.youtube.com/watch?v=ARorKHRTI80)

Is there some improvement in the technique beyond that - e.g. some clever
speedup across the board?

~~~
Agentlien
I feel I should point out that Frostbite is used by a lot more than
Battlefield, including Anthem, Dragon Age: Inquisition, the latest FIFA,
Mirror's Edge Catalyst, and the Need for Speed games by Ghost Games (which is
where I work).

~~~
mstade
And don't forget about Star Wars: Battlefront and the upcoming Battlefront 2.
Ridiculously good looking games.

------
sjtrny
You know it's a good paper when you reach the end and ask "is that it?" and
yet no one else has thought of it before.

Side note: local-global approximation seems to do well in graphics and visual
tasks. For example in the field of alpha matting the state of the art for a
while was KNN Matting which sampled locally and globally. Most methods since
then have taken a similar approach.

------
sorenjan
The article mentions that they used an Nvidia Titan X Pascal, and that it took
< 5 ms to compute the GI. 5 ms is still a large part of the time budget for
each frame, and most users have a slower GPU than that.
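
As a rough back-of-the-envelope check, assuming a 60 FPS target (the article
doesn't state one):

    # Hypothetical numbers: a 60 FPS frame budget vs. the reported GI cost.
    frame_budget_ms = 1000.0 / 60.0   # ~16.7 ms per frame at 60 FPS
    gi_cost_ms = 5.0                  # upper bound quoted in the article
    print(f"GI uses ~{gi_cost_ms / frame_budget_ms:.0%} of the frame budget")
    # -> roughly 30%, leaving ~11.7 ms per frame for everything else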

~~~
sclangdon
I remember buying a new GPU every time a new edition of Quake came out. Those
were the days...

~~~
virgil_disgr4ce
Has anything changed, other than the particular game?

~~~
klodolph
Yes. Quake 2 was December 1997, Quake 3 was December 1999. In that span, the
game went from "use a GPU and you'll get colored lights and a higher
resolution" to "use a GPU or else you don't get to play the game at all".

GPU power seems to be increasing as fast as it always was, but the things we
can do with that extra power are less impressive. So if you're playing games,
it's not so big a deal any more if your graphics card is five years old.

~~~
seabee
Up to a point; obsolescence happens when games require graphics APIs that
didn't exist five years ago.

~~~
klodolph
There aren't many games that fall into that category. For example, look at
Direct3D 12. You can count on your fingers the number of games that require
D3D 12, and it's supported by graphics cards made in 2011 (although not all of
them).

Just look at the new API features we've been getting: they aren't really the
kind of things you'd rewrite your engine for, with the exception of Vulkan /
D3D 12 / Metal / etc. But Vulkan / D3D 12 are really just API changes that
don't reflect underlying hardware changes.

On top of that, a large percentage of games are made on top of engine
middleware with good support for mediocre hardware (like Unreal and Unity,
both of which run on phones if you want).

~~~
theandrewbailey
Not to mention that lots of neat effects can be produced by modern shaders
without needing further API support.

------
bicubic
Does this support dynamic moving lights? That's the 'wow' moment that's going
to feel like a move to next generation graphics - when all/most lights are
fully dynamic and contribute to GI.

~~~
wlesieutre
Haven't read the paper yet, but it looks like yes. The caption at the top says
"Real-time global illumination rendered using our method in 3.9ms with fully
dynamic lights, cameras and diffuse surface materials."

It's a couple of years old now, but Crassin's voxel cone tracing paper is a
neat one: [http://research.nvidia.com/publication/interactive-
indirect-...](http://research.nvidia.com/publication/interactive-indirect-
illumination-using-voxel-cone-tracing)

------
rayuela
Link to the paper (pdf):

[http://research.nvidia.com/sites/default/files/pubs/2017-02_...](http://research.nvidia.com/sites/default/files/pubs/2017-02_Real-Time-Global-Illumination/light-field-probes-final.pdf)

------
londons_explore
The approximate dynamic occluders are the most impressive part of the demo.

Games etc. really need that stuff in real time.

~~~
ndh2
Games need more. The occluders from the paper completely absorb light. A more
realistic occluder would also reflect light.

~~~
dualogy
> _The occluders from the paper completely absorb light. A more realistic
> occluder would also reflect light._

My understanding is that "approximate occluders" aren't the visible geometry
but stand-in replacements (very coarse proxies for much finer-grained visible
geometry) used during the GI pass(es), probably even geometric primitives
(sphere, box, torus, etc.) that, indeed, simply approximate occlusion.

Physically based _reflections_ across _all_ geometry have already worked
brilliantly well in real time for the last few years.
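
Purely as an illustration of the proxy-occluder idea (not the paper's actual
method): a common analytic approximation treats a single sphere as the stand-in
and darkens indirect light based on roughly how much of the hemisphere it
covers. Everything below (function name, positions, radius) is made up for the
sketch.

    import numpy as np

    def sphere_occlusion(p, n, center, radius):
        """Rough analytic occlusion of point p (with normal n) by a proxy sphere.

        A coarse stand-in like this can shadow indirect light without the GI
        pass ever touching the fine visible geometry it represents.
        """
        to_sphere = center - p
        d2 = np.dot(to_sphere, to_sphere)
        cos_term = max(np.dot(n, to_sphere) / np.sqrt(d2), 0.0)
        return min(radius * radius / d2 * cos_term, 1.0)  # 0 = clear, 1 = blocked

    # Example: a unit-radius proxy sphere hovering two units above a floor point.
    ao = sphere_occlusion(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                          np.array([0.0, 2.0, 0.0]), 1.0)
    indirect = (1.0 - ao) * np.array([0.2, 0.2, 0.25])  # attenuate an ambient/GI term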

------
olegkikin
That is incredible.

Finally, realistic-looking GI in real time.

------
tobyhinloopen
It took me a minute to appreciate it; I didn't know what to look for. But wow!

~~~
virgil_disgr4ce
What should I look for?

~~~
theandrewbailey
Light bouncing around, lighting up things that aren't in direct light.

~~~
Geee
This kind of lighting (global illumination) is typically "baked in", i.e.
lights don't move around.

------
kevindqc
Anyone have links to papers describing how GI works in Unreal and Unity?

~~~
djmips
Enlighten is used in UE4 and Unity, as I understand it. You can read about
Enlighten in Frostbite circa 2010 here:

[http://advances.realtimerendering.com/s2010/Martin-Einarsson...](http://advances.realtimerendering.com/s2010/Martin-Einarsson-RadiosityArchitecture\(SIGGRAPH%202010%20Advanced%20RealTime%20Rendering%20Course\).pdf)

~~~
yuhe00
UE4 doesn't use Enlighten out of the box. It uses its own system called
Lightmass, which generates very nice static lightmaps, but is not that great
for dynamic GI (it has some basic support through what's called "indirect
lighting caches"
[https://docs.unrealengine.com/latest/INT/Engine/Rendering/Li...](https://docs.unrealengine.com/latest/INT/Engine/Rendering/LightingAndShadows/IndirectLightingCache/)).
However, you can get Enlighten as a licensed plugin.

------
agorg_louk
Does anybody know where the Gallery scene is from?

------
sillysaurus3
Isn't it frustrating? All of this effort, and it looks nothing like real
camcorder footage.

That's not dismissive -- no one has ever made any program that outputs a
string of images indistinguishable from a real camcorder. It's just that hard.

I think whatever the next leap forward looks like, it will come from a
nontraditional approach. Something strange, like powering your real-time
lighting model by an actual camcorder -- set it up, point it at a real-world
scene, then write a program that analyzes the way the light and color behaves
in the camcorder's ground truth input. Then you'd somehow extrapolate that
behavior across the rest of your scene.

That last step sounds a lot like "Just add magic," but we have deep learning
pipelines now. You could train it against your camcorder's input feed. Neural
nets tend to work well when you have a reliable model, and we have the perfect
one. So more precisely, you'd train your neural net against the camera's input
video stream: at each generation, the program would try to paint your scene
using whatever it thinks is the best guess for how the colors should look.
Then you move your camcorder around, capturing how the colors actually looked,
giving the pipeline enough data to correct itself. Rinse and repeat a few
thousand times.
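
A very rough sketch of what that correction loop might look like (every name
and number here is made up; this assumes a PyTorch-style setup where the net
regresses the camcorder's frames, which is just one reading of the idea):

    import torch
    import torch.nn as nn

    # Stand-in network: maps some per-pixel scene/pose features to RGB colors.
    model = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 3, 3, padding=1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    for step in range(10_000):                        # "rinse and repeat a few thousand times"
        scene_features = torch.rand(1, 6, 256, 256)   # placeholder for the virtual scene + pose
        captured_frame = torch.rand(1, 3, 256, 256)   # placeholder for the camcorder's frame
        best_guess = model(scene_features)            # the net paints its best guess
        loss = loss_fn(best_guess, captured_frame)    # compare against ground truth
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                              # correct itself, repeat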

The key to realism, and the central problem, is that colors affect colors
around them. The way colors blend and fade across a wall has to be _exactly_
right. There's no room for deviation from real life. Our visual systems have
been tuned for a billion years to notice that.

There are all kinds of issues with this idea: the real-world scene would need
to be identical to the virtual scene, at least to start. The program would
need to know the camera's orientation in order to figure out how to
backproject the real-life illumination data onto the virtual scene. But at the
end of it, you should wind up with a scene whose colors behave identically to
real life.

It seems like a promising approach because it gets rid of the whole idea of
diffuse/ambient/specular maps, which don't correspond to reality anyway. My
favorite example: What does it mean to multiply a light's RGB color by a
diffuse texture's RGB value? Nothing! It's a completely meaningless operation
which _happens_ to approximate reality quite well. There are huge advantages
with that approach, like the flexibility of letting an artist create textures.
But if the goal is _precise, exact_ realism as defined by your camcorder, then
we might be able to mimic nature directly.
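
(To be concrete about what that operation is: it's just a componentwise
multiply, as in this minimal sketch with made-up numbers.)

    # Multiply a light color by a diffuse albedo sample, channel by channel,
    # the way a typical shader does. Nothing physical corresponds to
    # "red of the light times red of the texture" -- it just looks right.
    light_rgb = (1.0, 0.9, 0.8)    # warm-ish light
    albedo_rgb = (0.5, 0.2, 0.2)   # reddish diffuse texture sample
    shaded = tuple(l * a for l, a in zip(light_rgb, albedo_rgb))
    # -> approximately (0.5, 0.18, 0.16)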

(Those dynamic occluders looked incredibly cool, by the way!)

~~~
Xcelerate
We have spectral renderers that simulate light transport extremely accurately
and in fact _are_ indistinguishable from a photograph (see
[http://www.graphics.cornell.edu/online/box/compare.html](http://www.graphics.cornell.edu/online/box/compare.html))

The problem isn’t solving light transport itself (the rendering equation has
been known for years, and is asymptotically exactly solved using unbiased
numerical integration techniques); it’s modeling the interaction of light with
complex physical materials and doing so quickly that poses the challenge.
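
For reference, this is the rendering equation in question (Kajiya, 1986); path
tracers estimate the integral with unbiased Monte Carlo sampling:

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

i.e. outgoing radiance is emitted radiance plus incoming radiance integrated
over the hemisphere, weighted by the BRDF and the cosine term.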

~~~
sillysaurus3
Note the important distinction: no one, anywhere, has ever created a real-time
video simulation indistinguishable from the output of a real camcorder.
Real-time video is significantly different from still frames. That's why video
compression uses completely different algorithms than JPEG compression, for
example.

Our visual systems are tuned to notice problems with real-time outputs,
whereas a still frame is ambiguous. There are all kinds of transformations you
can do to a perfect still frame where it still seems perfect afterwards: you
can tweak the contrast, brightness, or hue, throw a softpass filter on it, etc.,
and a human observer still won't notice a big difference. In other words,
there are many ways to "cheat"! Even though we're doing operations that don't
correspond to how nature behaves, a human observer is still fooled by them.

Yet the moment that you string a bunch of still frames together into a video,
that human observer will pounce on you immediately and call it out as fake.
I'm not sure why motion makes such a massive difference, but it does.

To put it another way, if it were possible to make a real-time video whose
ground truth was identical to camcorder footage, we'd see it in Hollywood. But
no one has been able to; Avatar was the best we could do.

I think it should be possible to train a neural net to paint the scene,
similar to a human painter. DaVinci didn't make diffuse/ambient/specular
textures; he simply painted the scene according to how it would look in real
life.

And in real life, color changes are gradual. It's crucial that the colors
change in _exactly the right way_ , but the changes themselves are rarely
discontinuous. That means a neural net might be able to catch on pretty
quickly.

Bots are already doing similar work:

[https://www.reddit.com/r/aww/comments/505zzr/colour_run/d71g...](https://www.reddit.com/r/aww/comments/505zzr/colour_run/d71grax/?context=3)

Before: [https://i.imgur.com/EqMLNFo.jpg](https://i.imgur.com/EqMLNFo.jpg)

After: [https://i.imgur.com/omYmn7Q.jpg](https://i.imgur.com/omYmn7Q.jpg)

If it's possible to clean a puppy, it might be possible to extrapolate how
light should behave in the general case.

~~~
modeless
Hollywood is not optimizing for perfect realism. They are operating under many
constraints such as budget (yes, even on Avatar they had budgets), schedule,
and the need for directorial control (which often conflicts with realism). It
would absolutely be possible to produce prerendered videos of mostly static
scenes using today's technology that would be indistinguishable from reality,
although the expense would be high.

~~~
sillysaurus3
_It would absolutely be possible to produce prerendered videos of mostly
static scenes using today's technology that would be indistinguishable from
reality, although the expense would be high._

And yet, no one has. :)

It'd be a fun trophy to claim.

My theory is that it's not merely a matter of computing power. We have massive
horsepower now, but our algorithms are still wrong at a fundamental level. We
still approach it by trying to simulate the underlying physics of light: the
transport algorithms, radiance sampling, and so on. But between the RGB
textures that have nothing to do with nature and the physics approximations
that we're forced to settle with, something ends up lost in translation.

No matter what we try, that human observer ends up looking at a final product
that looks nothing like a real camcorder video.

Back during ~iPhone 4 days, someone forged some pictures of the "next gen
iPhone." It was a hoax, but a lot of people were fooled at the time.
Remarkably, it was a computer rendering. The reason it looked real is that
he took a picture of the rendering using a real camera, which muddied up the
colors to the point that you couldn't tell it was a rendering.

The idea here would be similar: analyze the actual pixels coming out of a real
camcorder and try to mimic the nuances as closely as possible. In the best
case, it wouldn't be a physics simulation at all: the program might be able to
guess how the scene should look based on partial data and past experience.

~~~
dahart
> We still approach it by trying to simulate the underlying physics of light:
> the transport algorithms, radiance sampling, and so on. But between the RGB
> textures that have nothing to do with nature and the physics approximations
> that we're forced to settle with, something ends up lost in translation.

You keep talking about rendering being the problem, yet you acknowledge
that photos can be indistinguishable from reality and that motion is the hard
problem. Do you not see the disconnect in your own argument? Light simulation
isn't the issue, motion simulation is. Movement and model and texture
complexity all have a long way to go.

The people practicing graphics in research and in production know what the
problems are, they are aware of the approximations they are making, they know
what they need to make graphics more realistic, and they are making conscious
choices of where to spend their limited budgets. The big problems just aren't
due to light transport or radiance sampling; your pet theory seems ignorant of
what's going on in CG practice today. If you would just talk to (and listen
to) some production people and researchers...

You're still ignoring the fact that realism in CG is increasing. There's a
clear trend, and we are closing the gap on the details that make CG look
unrealistic. The standards are higher now than last year, which was higher
than the year before. Realism has always increased every year, and yet at no
time was the increase due to a fundamental change in multiplying colors
together.

~~~
sillysaurus3
_Light simulation isn't the issue, motion simulation is. Movement and model
and texture complexity all have a long way to go._

FWIW I completely agree, and this is a key observation. DaVinci devoted
several chapters to the problem of motion in art, and it will always be with
us.

That said,

 _You're still ignoring the fact that realism in CG is increasing. There's a
clear trend, and we are closing the gap on the details that make CG look
unrealistic. The standards are higher now than last year, which was higher
than the year before. Realism has always increased every year, and yet at no
time was the increase due to a fundamental change in multiplying colors
together._

This is the same argument that's always trotted out. Graphics are improving,
but if you diff 2018 to 2013, it's nothing like 2013 to 2008. The fundamental
leaps we've been accustomed to seeing are simply not happening anymore. The
rate of progress is very clearly slowing down.

It depends which axis you measure, of course. We're able to render more things
each year, which is nice. But the visual quality from a fundamental
perspective is more or less the same as it was a few years ago.

The quality issues stem largely from light transport -- the colors are all
wrong! If you compare them to a photo, you'll see that we don't end up
remotely close. You can see this vividly in the YT link above (the Unreal
engine walkthrough). If you try to picture yourself _in_ the video, you'll get
a strange feeling of being in a candy world, or a shrinkwrapped house.

That's certainly a promising axis to explore, and there are hundreds of papers
published each year solely about light simulation.

~~~
dahart
>> Light simulation isn't the issue, motion simulation is. Movement and model
and texture complexity all have a long way to go.

> FWIW I completely agree, and this is a key observation. DaVinci devoted
> several chapters to the problem of motion in art, and it will always be with
> us.

Now we are getting somewhere! Let's talk about those things instead of
rendering! Which parts of motion are killing realism? Fluids and rigid body
dynamics are pretty good these days. Facial animation is still in the uncanny
valley. Why?

> Graphics are improving, but if you diff 2018 to 2013, it's nothing like 2013
> to 2008. The fundamental leaps we've been accustomed to seeing are simply
> not happening anymore. The rate of progress is very clearly slowing down.

The rate of progress of _rendering_ is slowing down, which goes right to my
point that rendering isn't the main problem anymore; it's approaching good
enough for realism. Movement and model and texture complexity progress is
increasing. Papers on fluid simulation and facial animation and foliage and
texture synthesis and multi-dimensional textures are on the rise.

> The quality issues stem largely from light transport -- the colors are all
> wrong!

No. The way we handle colors and light transport is fine; those pieces of
human knowledge are ready for the jump to realism. You've already acknowledged
that by refusing to consider still photos, because they already look
realistic. One bad example from a game engine doesn't prove anything. I can
show you lots of bad examples from game engines.

You would have a stronger argument if you gathered the very best examples on
earth and we talked about those. Pointing at known bad examples and claiming
that they say something about the state of the art makes me feel like you
either don't know what the state of the art is or you're taking cheap shots.

