
Rendering Worlds with Two Triangles on the GPU [pdf] - muyyatin
http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf
======
algorias
Ah, the classic presentation that got me started in the demoscene a couple of
years back.

The Google cache version of the pdf doesn't include any images, so I put up a
copy over here:

[https://dl.dropboxusercontent.com/u/2173295/rwwtt.pdf](https://dl.dropboxusercontent.com/u/2173295/rwwtt.pdf)

~~~
vanderZwan
Thanks for the mirror.

Know of any novel techniques/rehashing of old techniques that have been
developed since?

~~~
algorias
Well, for very small intros (4k/8k) distance fields are still hard to beat due
to their compactness. A couple of examples not based on distfields, off the
top of my head:

[http://www.pouet.net/prod.php?which=62027](http://www.pouet.net/prod.php?which=62027)
(No idea what this is, but it's great)

[http://www.pouet.net/prod.php?which=62974](http://www.pouet.net/prod.php?which=62974)
(reverse fluid simulation)

[http://www.pouet.net/prod.php?which=59613](http://www.pouet.net/prod.php?which=59613)
(particles)

For 64k intros and size-unlimited demos the possibilities are too many to
list. Procedural mesh generation is a classic approach that has seen great
use recently:

[http://www.pouet.net/prod.php?which=61204](http://www.pouet.net/prod.php?which=61204)
(if you follow 1 link in this comment make it this one)

~~~
vanderZwan
Awesome, thanks!

What kind of mathemagical trickery is that reverse fluid simulation?!

~~~
algorias
Seven held a short seminar about it; it's the first one in this video:

[http://www.youtube.com/watch?v=DQ8eB_FORLo](http://www.youtube.com/watch?v=DQ8eB_FORLo)

------
userbinator
I think the most elegant thing about this method is that it describes a scene
in terms of its basic mathematical 3D objects and transformations on them
(list here:
[http://www.iquilezles.org/www/articles/distfunctions/distfun...](http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm)
) and then exploits the massive parallelism of the GPU for rendering all the
pixels.

Here's a demo of someone playing around with it, complete with a Slisesix-
inspired scene:
[http://www.rpenalva.com/blog/?p=254](http://www.rpenalva.com/blog/?p=254)

This set of slides is also related:
[http://www.iquilezles.org/www/material/function2009/function...](http://www.iquilezles.org/www/material/function2009/function2009.pdf)

------
rogerallen
And if you want to try things out yourself, iq has created a playground for
you here: [https://www.shadertoy.com/](https://www.shadertoy.com/)

~~~
z303
and one similar to Elevated
[https://www.shadertoy.com/view/4slGD4](https://www.shadertoy.com/view/4slGD4)

~~~
sitkack
It is disingenuous for the author to not cite Elevated.

------
DanBC
I love these.

People might enjoy noodling around the Geisswerks pages, which have many code
snippets around ray tracing, graphics demos, and so on.

[http://www.geisswerks.com/](http://www.geisswerks.com/)

------
sp332
You can download it from here
[http://www.pouet.net/prod.php?which=51074](http://www.pouet.net/prod.php?which=51074)
It's been updated to run more reliably (edit: on Vista) but I can't find a
version that will run on Win7.

Edit: I found a similar one on Shadertoy
[https://www.shadertoy.com/view/lsf3zr](https://www.shadertoy.com/view/lsf3zr)

~~~
foxhill
i just ran it on my mac successfully with wine..!

------
hughes
I was working with distance fields back in 2008, and the idea of inverting the
process blew my mind.

I had no idea Iñigo Quilez's image was produced this way and I'm so glad I had
the chance to see how it was made.

Thanks for posting!!

------
yzzxy
Is the demoscene a good place to get into graphics programming? The prevalence
of older methods leads me to believe one could learn in a progression similar
to that of today's graphics gurus, moving from simpler old methods (with their
performance and size optimization) to modern techniques.

------
DanAndersen
This is a really impressive presentation -- after looking on from afar at the
seemingly magical works of the demoscene, this finally helped me understand a
little bit of how the magic happens. I've only got a bit of GLSL experience so
far but now I want to learn a lot more.

------
thisjepisje
Could someone explain to me what the _"two triangles that cover the entire
screen area"_ have to do with anything?

~~~
greggman
Basically you draw a single quad (2 triangles) covering the entire screen
using OpenGL (or DirectX).

A pixel shader is run when rendering each pixel of the quad. Its only inputs
are often `time` and `resolution`.

At least in GLSL there's a built-in variable, `gl_FragCoord`, which provides
the position of the pixel currently being drawn. So for example the pixel
at the bottom left is gl_FragCoord = vec2(0,0). The one directly to the right
of that is gl_FragCoord = vec2(1,0).

Given you're also passed the resolution, you can get a value that goes from 0
to 1 over the screen with

    vec2 zeroToOne = gl_FragCoord.xy / resolution;

If you were to dump that value directly to the screen you'd get a red gradient
going from black to red left to right, and a green gradient going from black
to green bottom to top. See
[http://glsl.heroku.com/e#18516.0](http://glsl.heroku.com/e#18516.0)

Now it's up to you to use more creative math: given just gl_FragCoord,
resolution, and time, write a function that generates an image.
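
In CPU-side C, that contract looks roughly like this (the names here are my
own; this mimics the gradient example above, with red growing left to right
and green bottom to top):

```c
#include <math.h>

typedef struct { double r, g, b; } Color;

/* A purely procedural image: every pixel's color is a function of its
   coordinates and the resolution (a real demo would also take time). */
static Color shade(double frag_x, double frag_y,
                   double res_x, double res_y) {
    double u = frag_x / res_x;   /* 0..1 across the screen */
    double v = frag_y / res_y;   /* 0..1 up the screen */
    Color c = { u, v, 0.0 };
    return c;
}
```

Run that once per pixel over the whole frame and you have the gradient; the
GPU's trick is simply running all of those calls in parallel.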

You can play with that in your browser here,
[http://glsl.heroku.com](http://glsl.heroku.com) and here
[http://shadertoy.com](http://shadertoy.com)

~~~
thisjepisje
So the whole point of using a shader is that it's the GPU that's doing all the
work?

~~~
NickPollard
Yes, that's what this trick is for.

In most standard 3D graphics, the CPU passes a description of the scene as
polygons to the GPU, which then runs two[1] shader steps - vertex and
fragment[2] shading. The vertex shader works at the level of triangle
vertices, effectively translating and transforming them, and then the
fragment shader colors in each individual pixel.

So for a standard scene, the CPU tells the GPU: 'Right, we've got a room, with
some pillars, and a monster, and a few lights, positioned like this', and then
the GPU calculates what that looks like.

What Inigo is doing is different: the CPU only knows there are two triangles -
a quad covering the screen - so it just tells the GPU to draw a flat
rectangle. The vertex shader does nothing but pass the flat rectangle through.
However, because the fragment shader can run arbitrary logic, rather than just
painting the quad with a solid color or a texture, it runs its own simulation
that draws an entire scene.

\----

[1] More these days with Geometry shaders, but that's another topic

[2] Sometimes called a pixel shader, although really that's incorrect -
Fragment is a more accurate term

