
Real-Time Point-Based Global Illumination - bane
http://www.aduprat.com/portfolio/?page=articles%2FPBGI
======
rollulus
A bit more information: with global illumination (GI), a point is shaded by
considering the light arriving at that point from all directions. That is the
integral over Omega in the rendering equation [1]. It is expensive to compute,
especially since it recurses to arbitrary depth: the incoming light may itself
have been reflected off other surfaces. The author uses pre-computation and
keeps the geometry static, which is quite a common approach.

[1]:
[https://en.wikipedia.org/wiki/Rendering_equation](https://en.wikipedia.org/wiki/Rendering_equation)
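
To make the cost concrete, here is a minimal sketch (my own, not the article's method) of a recursive Monte Carlo estimate of that hemisphere integral. The `scene` object with its `trace`, `brdf`, and `emitted` members is a hypothetical stand-in; the point is that every shaded point needs many direction samples, and each sample needs the same estimate again at whatever it hits.

```python
import math
import random

def sample_hemisphere(normal):
    """Pick a uniformly random direction on the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        n2 = sum(c * c for c in d)
        if 0 < n2 <= 1:
            d = tuple(c / math.sqrt(n2) for c in d)
            if sum(a * b for a, b in zip(d, normal)) > 0:
                return d

def radiance(point, normal, scene, depth, samples=16):
    """Estimate L_o = L_e + integral over Omega of f_r * L_i * cos(theta)."""
    if depth == 0:
        return scene.emitted(point)
    total = 0.0
    for _ in range(samples):
        wi = sample_hemisphere(normal)
        hit = scene.trace(point, wi)        # what surface reflected light toward us? (miss handling omitted)
        li = radiance(hit.point, hit.normal, scene, depth - 1, samples)  # the recursion that makes GI expensive
        cos_theta = sum(a * b for a, b in zip(wi, normal))
        pdf = 1.0 / (2.0 * math.pi)         # uniform hemisphere pdf
        total += scene.brdf(point, wi) * li * cos_theta / pdf
    return scene.emitted(point) + total / samples
```

With 16 samples per bounce and 3 bounces, that is already 16^3 = 4096 traced rays per shaded point, which is the cost pre-computation tries to avoid.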

~~~
conceit
> It is the integral over Omega in the rendering equation [1]. This is
> expensive to compute.

The approach would be just exhaustive summation, I suppose. Would symbolic
integration be equally expensive for simple enough geometry? Simplified
approximations of the geometry, e.g. bounding boxes, would probably not look
real enough. I get that there are all kinds of approaches.

I've heard the Monte Carlo method is used to limit the search space to a random subset.
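
For what it's worth, here is a toy comparison (my own, unrelated to the article) of exhaustive summation versus Monte Carlo on the same 1D integral; the Monte Carlo version only visits a random subset of the domain but still converges as the sample count grows.

```python
import random

def f(x):
    return x * x          # exact integral over [0, 1] is 1/3

n = 1000
riemann = sum(f((i + 0.5) / n) for i in range(n)) / n        # exhaustive, evenly spaced samples
monte_carlo = sum(f(random.random()) for _ in range(n)) / n  # random subset of the domain
print(riemann, monte_carlo)
```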

~~~
dahart
Symbolic integration, more often called "analytic integration" (as opposed
to numeric integration or sampling), is usually much more expensive. But the
benefit is that you can get a much more accurate result, if you need one.
There is no known analytic / symbolic integral for arbitrary geometry like
curved surfaces, but I believe it can be done for purely polygonal scenes.

A lot of work has been done on using analytic integration techniques, as well
as higher quality numeric approximations, for global illumination, in
particular with radiosity methods.
[https://en.m.wikipedia.org/wiki/Radiosity_(computer_graphics...](https://en.m.wikipedia.org/wiki/Radiosity_\(computer_graphics\))

[http://www.mi.uni-koeln.de/c/mirror/www.cs.curtin.edu.au/uni...](http://www.mi.uni-koeln.de/c/mirror/www.cs.curtin.edu.au/units/cg351-551/notes/restricted/lect13e1.html)

[http://dl.acm.org/citation.cfm?id=74367](http://dl.acm.org/citation.cfm?id=74367)
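
As a rough sketch of the radiosity formulation mentioned above (my own illustration, not code from either reference): each patch's radiosity satisfies B_i = E_i + rho_i * sum_j F_ij * B_j, and once the form factors F_ij are known (computed analytically or numerically from the scene's polygons) the system can be solved by simple iteration. The numbers below are made up.

```python
reflectance = [0.7, 0.5, 0.3]             # rho_i for three patches (hypothetical values)
emission    = [1.0, 0.0, 0.0]             # only patch 0 emits light
form_factor = [[0.0, 0.3, 0.2],           # F[i][j]: fraction of energy leaving i that reaches j
               [0.3, 0.0, 0.4],
               [0.2, 0.4, 0.0]]

radiosity = emission[:]                   # initial guess: direct emission only
for _ in range(50):                       # iterate the bounces until they converge
    radiosity = [
        emission[i] + reflectance[i] * sum(form_factor[i][j] * radiosity[j] for j in range(3))
        for i in range(3)
    ]
print(radiosity)                          # steady-state radiosity per patch
```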

------
imaginenore
UE4 demonstrated it as well:

[https://www.youtube.com/watch?v=VHbHOQ1NRuw](https://www.youtube.com/watch?v=VHbHOQ1NRuw)

And so did Unity:

[https://www.youtube.com/watch?v=ouJNRJ2uPmY](https://www.youtube.com/watch?v=ouJNRJ2uPmY)

~~~
bhouston
Both of the above demos are done with the commercial product Enlighten. Would
be great to have an open source alternative to it that people can customize.
:)

~~~
CyberDildonics
This isn't that, though. The geometry is static and the interaction between
points is precomputed. The cone-tracing-into-voxels approach ends up being
much more reasonable since:

1. The color is quantized into evenly spaced segments (i.e. a 3-dimensional
image).

2. Because it is a 3-dimensional image, the GPU's filtering hardware can do
efficient lookups over a volume using mip-mapping techniques.

The approach was originally described in a SIGGRAPH paper and there have been
a lot of variations (which is usually another sign that something works well).
Someone could create an implementation on their own; it isn't much of a
stretch. You just rasterize into a voxel image, then do a few cone traces at
every pixel you shade, guided by the BRDF (a good one to look at would be
GGX).
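
For anyone curious what that looks like, here is a very rough CPU sketch (my own, with NumPy standing in for the GPU's 3D texture filtering) of one cone trace through a mip-mapped voxel grid. The grid contents, cone aperture, and nearest-voxel lookup are all simplifications; a real implementation rasterizes the scene into the volume and samples it with hardware trilinear filtering.

```python
import numpy as np

def build_mips(voxels):
    """Average 2x2x2 blocks repeatedly, like the mip levels of a 3D texture."""
    mips = [voxels]
    while mips[-1].shape[0] > 1:
        v = mips[-1]
        n = v.shape[0] // 2
        mips.append(v.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5)))
    return mips

def trace_cone(mips, origin, direction, aperture, max_dist=1.0, steps=32):
    """March along the cone, sampling coarser mip levels as the cone footprint grows."""
    accum, occlusion, t = 0.0, 0.0, 0.01
    for _ in range(steps):
        radius = max(t * aperture, 1.0 / mips[0].shape[0])
        level = int(np.clip(np.log2(radius * mips[0].shape[0]), 0, len(mips) - 1))
        p = origin + t * direction
        if np.any(p < 0.0) or np.any(p >= 1.0) or t > max_dist:
            break
        grid = mips[level]
        idx = tuple((p * grid.shape[0]).astype(int))
        sample = float(grid[idx])               # nearest-voxel lookup instead of hardware filtering
        accum += (1.0 - occlusion) * sample     # front-to-back accumulation
        occlusion = min(1.0, occlusion + sample)
        t += radius                             # step size grows with the cone radius
    return accum

voxels = np.zeros((32, 32, 32))
voxels[20:24, 20:24, 20:24] = 1.0               # a bright blob standing in for lit geometry
mips = build_mips(voxels)
print(trace_cone(mips, np.array([0.1, 0.1, 0.1]), np.array([0.577, 0.577, 0.577]), aperture=0.3))
```

In a real renderer you would fire a handful of such cones per shaded pixel, with directions and apertures chosen from the BRDF lobe (e.g. a wide cone for diffuse, a narrow one along the reflection direction for glossy GGX-like lobes).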

