
Breakdown of a Simple Ray Tracer - rhema
http://mrl.nyu.edu/~perlin/raytrace1_breakdown/
======
ThatOtherPerson
This is pretty cool. I've written a couple of toy ray tracers, but always in a
language that runs on a CPU, so it's interesting to see how you would do it on
a GPU. And I noticed that this is from _the_ Ken Perlin, which is cool - I'm
mostly just used to seeing his name as the "Perlin" in Perlin noise.

There's one thing I'm curious about - does anyone know why he's taking the
square root of the color to produce the final pixel color?

~~~
Impossible
He's assuming lighting is in linear space and doing gamma correction at the
end of the shader. Sqrt is a faster approximation of pow(c,1.0/2.2).

I first saw this trick written up in this article by the co-creator of
Shadertoy:
[http://www.iquilezles.org/www/articles/outdoorslighting/outdoorslighting.htm](http://www.iquilezles.org/www/articles/outdoorslighting/outdoorslighting.htm)

For the reasons why linear space lighting is important, read
[http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html](http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html).
The gist of it is that gamma space lighting tends to look unnatural and blown
out, and it becomes more of a problem the more math you do in the lighting
(adding specular, multiple lights, etc.).
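
For illustration, a minimal sketch of that trick at the end of a fragment
shader (assuming the variable color holds linear-space RGB accumulated by the
lighting code; the function names are mine, not the article's):

    
    
      vec3 linearToGammaExact(vec3 c) { return pow(c, vec3(1.0 / 2.2)); }
      vec3 linearToGammaFast(vec3 c)  { return sqrt(c); }  // c^0.5 vs. c^0.4545
      
      // final line of the shader, after all lighting math in linear space:
      // gl_FragColor = vec4(linearToGammaFast(color), 1.0);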

~~~
darkmighty
A small note: linear space lighting is important because the lighting
equations are linear with respect to the physical quantities (unless you
specifically introduce materials with non-linear response, which are quite
rare). So simply

    
    
      intensity(pixel) = intensity(pixel illuminated by source 1) + intensity(pixel illuminated by source 2) + ...
    

It's _perception_ that is non-linear, so you can apply it as a single non-
linear step at the end instead of always calculating

    
    
      perceived_intensity(pixel) = pow(pow(perceived_intensity(pixel illuminated by source 1), x) + pow(perceived_intensity(pixel illuminated by source 2), x) + ..., 1/x)
    

(x here being the gamma exponent, roughly 2.2.)
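
In shader terms (a hypothetical sketch, not the article's code; shade and the
surface/light variables are made up for illustration), this linearity means
you can just accumulate contributions and do the non-linear step once:

    
    
      vec3 color = vec3(0.0);
      color += shade(surface, light1);      // each term is a linear intensity,
      color += shade(surface, light2);      // so plain addition is physically correct
      color = pow(color, vec3(1.0 / 2.2));  // one perceptual encode at the very end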

~~~
yorwba
> It's _perception_ that is non-linear,

Since perception happens when you look at the image on your screen, applying
an explicit "perception" step in rendering is actually counterproductive:
those nonlinearities would compound.

Gamma correction is used to get better contrast resolution at the brightness
levels where the eye is most sensitive when you have to compress your color
values down to 8 bits per channel. The display then inverts this to produce
ordinary linear-space intensities.
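
Concretely (a sketch assuming a plain 2.2 gamma): the renderer encodes before
quantizing to 8 bits, and the display decodes back to linear light:

    
    
      float stored = pow(lin, 1.0 / 2.2);  // lin is a linear-light value in 0..1;
                                           // this spends more of the 256 code values
                                           // on the darks, where the eye
                                           // distinguishes more levels
      float shown = pow(stored, 2.2);      // the display's inverse, back to linear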

~~~
tripzilch
There's more to the story than that, even. There's also the part where old
CRT monitors had a nonlinearity of similar exponent (or maybe its inverse, I
can't find the details of it quickly on mobile). Newer display technologies
emulate this nonlinearity so they stay compatible with the same signals /
data. (Someone correct me if I'm wrong, I feel like I am not being 100% exact
here.)

But the point that remains today is not the bits (as shaders work with floats
internally), nor the response curve of a CRT (because almost nobody uses
those any more), but the fact that the physics calculations of light operate
on linear quantities (proportional to an amount of photons), so you had
better do those in linear space.

Then at the end you must convert to perceptual space, which can be done in a
number of ways. I'm not sure how much of a win replacing one pow(color,
1.0/2.2) with a sqrt is, at the end of a fragment shader. Especially since
pow(color, 1.0/2.2) is already a quick approximation by itself; there are
much fancier curves to convert to perceptual space (that don't desaturate the
darks as much, for instance).
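
One such fancier curve is the official sRGB encoding, a piecewise function
with a linear segment near black (a sketch; the constants come from the sRGB
spec):

    
    
      vec3 linearToSRGB(vec3 c) {
          vec3 lo = 12.92 * c;                           // linear toe near black
          vec3 hi = 1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055;
          return mix(hi, lo, step(c, vec3(0.0031308)));  // lo per channel where c is small
      }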

~~~
foolrush
Accurate. The 2.2 and various hacks compensate for the baked-in hardware
response.

Although perception is nonlinear, the nonlinearity here specifically
addresses the hardware of standard sRGB displays. After all, as noted above,
why would you apply a nonlinear correction for perception when the display
already has one baked in?

What is missed by many is that standard LCD technology, despite being
inherently linear, has that low-level hardware nonlinearity baked in.
Typically it is a flat 2.2 power function, although other displays or modes
may use differing transfer functions.

The irony is that the nonlinear 2.2 function adopted into most imaging
results in a closer-to-linear signal, which then gets further nonlinear
adjustment from tonemapping, SMPTE ST 2084, or other such transforms applied
for technical or aesthetic reasons. In the case of raytracing, a mere 2.2
adjustment is woefully inadequate.

PS: The CIE 1931 model, from which the RGB encoding model is effectively
derived, used _visual_ energy. That is, it doesn't model "reality" so much as
the psychophysical byproduct that happens in the brain. Luckily, the base
model, XYZ, is linear, with some extremely nuanced caveats. As a result,
raytracing using RGB tristimulus models sort of works.

------
AceJohnny2
I loved Ken Perlin's Java applets years ago. Sadly, the demise of Java
applets in the browser means they now require jumping through a few hoops to
get running.

[http://mrl.nyu.edu/~perlin/](http://mrl.nyu.edu/~perlin/)

Tangentially, there's always a bit of heartache when I use the text input in
Steam's Big Picture mode, as I wish Perlin's innovative input modes had seen
wider usage (see the "pen input" section of the above link). Of course, I
understand why not: they face the same hurdle of teaching people a new input
layout that dooms all non-traditional input methods.

~~~
Crespyl
That's really interesting, I hadn't seen his input method stuff before.

Reminds me a lot of MessagEase, which I use on every touch system that
supports it.

------
kowdermeister
This is the sort of tutorial that doesn't make any sense if you are not
already knowledgeable in math, shaders, and rendering basics. At first I was
hopeful that it would finally walk me through the basics of a renderer and
explain the concepts behind it, but I was let down :)

I have so many questions regardless :) The struggle of a rookie goes as
follows:

SECTION A

- why on earth do you take the square root of c? Is there a sane reason to do
this?

SECTION B

- why normalize the vector? I'd love to see a detailed explanation.

- why is vPos.xy the first two parameters?

- is vPos.xy a constant value, or is it evaluated per pixel as the shader
runs?

SECTION D

- V -= S.xyz; // what's going on here? Why use the -= operator? Is this
something to do with the way shaders operate?

- float B = 2. * dot(V, W); // why take the dot product?

- float C = dot(V, V) - S.w * S.w; // why take it again and subtract a
square?

- return B * B < 4. * C ? 0. : 1.; // Why 4.? No other value seems to work.

(See the sketch at the end of this comment for what this section computes.)

SECTION E

\- "Improve raytrace to return distances of near and far roots." OK I admit
that I'm totally lost at this point. Asking line-by-line explanation doesn't
really help :)

SECTION H

- YAY, sin and cos!!! Finally I can mess around with something :)
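
(For what it's worth, SECTION D is the standard ray-sphere intersection test.
A commented sketch of the same math, with the variable roles inferred from
the snippet: the ray is origin + t*W with W normalized, S.xyz is the sphere
center, and S.w its radius.)

    
    
      // Substituting the ray into |P - S.xyz|^2 == S.w^2 gives a quadratic in t:
      //   t*t + 2*dot(V,W)*t + (dot(V,V) - S.w*S.w) == 0
      float hitSphere(vec3 origin, vec3 W, vec4 S) {
          vec3 V = origin - S.xyz;          // same effect as V -= S.xyz: move the
                                            // origin relative to the sphere center
          float B = 2. * dot(V, W);         // the quadratic's linear coefficient
          float C = dot(V, V) - S.w * S.w;  // its constant coefficient: |V|^2 - r^2
          // The discriminant is B*B - 4*A*C, and normalizing W makes A == 1,
          // which is where the 4. comes from. A negative discriminant (B*B < 4.*C)
          // means no real roots, i.e. the ray misses the sphere.
          return B * B < 4. * C ? 0. : 1.;
      }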

------
tnecniv
Very cool! This is my favorite toy raytracer example:

[http://www.kevinbeason.com/smallpt/](http://www.kevinbeason.com/smallpt/)

------
jwatte
We used to have a sign in the office: "Absolutely no chrome spheres over
checkerboard planes."

(First day it went up, I wrote a ray tracer from scratch in C in two hours :-)

------
melling
I've got some ray tracing links on GitHub:

[https://github.com/melling/ComputerGraphics/blob/master/ray_tracing.org](https://github.com/melling/ComputerGraphics/blob/master/ray_tracing.org)

------
imaginenore
Here's a more interesting one that runs on WebGL, has various materials and
surfaces:

[http://madebyevan.com/webgl-path-tracing/](http://madebyevan.com/webgl-path-tracing/)

and another, simpler one:

[http://hoxxep.github.io/webgl-ray-tracing-demo/](http://hoxxep.github.io/webgl-ray-tracing-demo/)

------
phkahler
It still blows my mind that people have to define Vec3 and Vec4 types. These
should be native to modern programming languages, as should Vec2. Modern CPUs
have vector registers that can support these sizes, and GCC has intrinsics
for them that can be passed by value (even returned from a function, IIRC),
added and subtracted, or multiplied by a scalar, and yet no language that I'm
aware of has them as built-in types.

Vectors are not just for parallel computations; these small-sized ones
deserve to be a proper data type.

~~~
tgb
You're not aware of OpenCL, GLSL, or CUDA then!
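
In GLSL, for instance, vec2/vec3/vec4 are built-in value types with
componentwise arithmetic (a trivial sketch):

    
    
      vec3 a = vec3(1.0, 2.0, 3.0);
      vec3 b = vec3(0.5);    // broadcast constructor
      vec3 c = 2.0 * a + b;  // scalar multiply and vector add, by value
      float d = dot(a, b);   // built-in dot product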

------
legodt
This is a really cool demonstration; I love the code breakdowns. For a less
technical but broader-scale description of ray tracing, Disney made an
excellent video about their use of ray tracing in movies. Check it out here:

[https://youtu.be/frLwRLS_ZR0](https://youtu.be/frLwRLS_ZR0)

------
Doerge
Expected to see a simple ray tracer break down on some edge case. Saw a simple
ray tracer, simply ray trace. Was quite disappointed. Why is this posted? What
value does it add?

~~~
iak8god
This explains and demonstrates, piece by piece, how a ray tracer works; it's
a "breakdown" in the sense of "an explanatory analysis", not in the sense of
"a failure."

Good for you that you already know how ray tracers work, and so possibly this
adds no value for you. I already knew how they work and I thought it was a
nice clear demonstration.

~~~
Doerge
I see that it is a _nice_ demo, but I don't see how this ends up at #1 here.

~~~
iak8god
Other people found it interesting. It's pointless to complain about content
that doesn't strike your fancy. Just upvote stuff that does...

