This is pretty cool. I've written a couple of toy ray tracers, but always in a language that runs on a CPU, so it's interesting to see how you would do it on a GPU. And I noticed that this is from the Ken Perlin, which is cool - I'm mostly just used to seeing his name as the "Perlin" in Perlin noise.
There's one thing I'm curious about - does anyone know why he's taking the square root of the color to produce the final pixel color?
For reasons why linear-space lighting is important, read (http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html). The gist of it is that gamma-space lighting tends to look unnatural and blown out, and it becomes more of a problem the more math you do in the lighting (adding specular, multiple lights, etc.)
A small note, linear space lighting is important because the lighting equations are linear equations with respect to physical quantities (unless you specifically introduce materials with non-linear response, which are quite rare). So simply
intensity(pixel)=intensity(pixel illuminated by source 1)+intensity(pixel illuminated by source 2)+...
It's perception that is non-linear, so you can separate it as a last non-linear step instead of always calculating
perceived_intensity(pixel) = pow(pow(perceived_intensity(pixel illuminated by source 1), x) + pow(perceived_intensity(pixel illuminated by source 2), x) + ..., 1/x)
Since perception happens when you look at the image on your screen, applying an explicit "perception" step in rendering is actually counterproductive, since those nonlinearities would compound.
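To make that concrete, here's a minimal fragment-shader sketch of the idea (all the uniform/varying names are placeholders I made up, not from the demo): do every bit of lighting math on linear values, and apply the single nonlinear display/gamma step once, right before writing the pixel.

    precision mediump float;

    uniform vec3 uAlbedo;                 // surface color, linear
    uniform vec3 uLightCol1, uLightCol2;  // light colors, linear
    uniform vec3 uLightDir1, uLightDir2;  // normalized directions toward the lights
    varying vec3 vNormal;

    void main() {
        vec3 n = normalize(vNormal);

        // Add up every light's contribution in linear space...
        vec3 c = vec3(0.);
        c += uAlbedo * uLightCol1 * max(dot(n, uLightDir1), 0.);
        c += uAlbedo * uLightCol2 * max(dot(n, uLightDir2), 0.);

        // ...and apply the nonlinear perceptual/display step exactly once, at the end.
        gl_FragColor = vec4(pow(c, vec3(1. / 2.2)), 1.);
    }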
Gamma correction is used to allow for better contrast resolution for the brightness levels where the eye is most sensitive, when you have to compress your color values down to 8 bits per channel. The display then inverts this to produce ordinary linear space intensities.
There's more to the story than that, even. There's also the part where old CRT monitors had a nonlinear response with roughly that exponent (about 2.2, i.e. the inverse of the encoding curve), and newer display technologies emulate this nonlinearity so they stay compatible with the same signals / data. (Someone correct me if I'm wrong, I feel like I am not being 100% exact here.)
But the point that remains today is not the bits (as shaders work with floats internally), nor the response curve of a CRT (because almost nobody uses those any more), but the fact that the physics calculations of light operate on linear quantities (proportional to an amount of photons), so you had better do those in linear space.
Then at the end you must convert to perceptual space, which can be done in a number of ways. I'm not sure how much of a win replacing one pow(color, 1./2.2) with a sqrt is, at the end of a fragment shader. Especially when the pow(color, 1./2.2) is already a quick approximation by itself; there are much fancier curves to convert to perceptual space (that don't desaturate the darks as much, for instance).
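For reference, roughly what those options look like in GLSL (my own helper names; the piecewise function is the actual sRGB encoding curve, the other two are the usual shortcuts):

    // Exact sRGB encoding (the piecewise curve), for comparison:
    vec3 linearToSRGB(vec3 c) {
        vec3 lo = 12.92 * c;
        vec3 hi = 1.055 * pow(c, vec3(1. / 2.4)) - 0.055;
        return mix(lo, hi, step(vec3(0.0031308), c));
    }

    // The two common shortcuts:
    vec3 gamma22(vec3 c) { return pow(c, vec3(1. / 2.2)); }  // close approximation
    vec3 gamma20(vec3 c) { return sqrt(c); }                 // what the demo uses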
Accurate. The 2.2 and various hacks compensate for the response baked into the hardware.
Although perception is nonlinear, the nonlinearity here specifically addresses the hardware of standard sRGB displays. After all, as cited above, why would you apply a nonlinear correction for perception when the display already has that baked in?
What many miss is that standard LCD technology, despite being inherently linear, has that low-level hardware nonlinearity baked in. Typically it is a flat 2.2 power function, although other displays or modes may use different transfer functions.
The irony is that the nonlinear 2.2 function adopted in most imaging yields a signal that is closer to perceptually linear, and that signal ends up further nonlinear via tonemapping, SMPTE ST 2084, or other such adjustments made for technical or aesthetic reasons. In the case of raytracing, a bare 2.2 adjustment is woefully insufficient.
PS: The CIE 1931 model, from which the RGB encoding model is effectively derived, is based on visual energy. That is, it doesn't model "reality" so much as the psychophysical response that happens in the brain. Luckily, the base model, XYZ, is linear, with some extremely nuanced caveats. As a result, raytracing with RGB tristimulus models sort of works.
Why should anybody care about speed in this demo? That sqrt just serves to confuse. Which IMO is consistent with that terrible book Perlin wrote (co-wrote?) years ago.
The sqrt() function adjusts for the output curve of your display (usually sRGB nowadays, but in the past various gamma values have been used). The strips on the left should have an approximately linear gradient between light and dark, whereas the rightmost gradient will be too dark.
The question of why your display doesn't just output linear colors is more interesting. Small differences in dark colors are more easily perceived than differences in lighter colors, so it's useful to spend more encoding space on the low end. With more bits, you could use linear colors throughout, but all color data would take up proportionally more memory. It ends up more efficient to just decode from sRGB, do your computation, and encode into sRGB again each time.
Modern graphics APIs can automate this sRGB coding for you by letting you specify an sRGB format for textures: each time you read from an sRGB texture, the system will decode the color into linear space, and all writes will automatically encode back into sRGB.
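Without the sRGB texture/renderbuffer formats, the equivalent manual round trip in a fragment shader looks roughly like this (texture and varying names are made up, and I'm using the rough 2.2 power rather than the exact sRGB curve):

    precision mediump float;

    uniform sampler2D uTex;   // holds sRGB-encoded colors (e.g. a decoded JPEG)
    varying vec2 vUV;

    void main() {
        // Decode from sRGB to linear before doing any math...
        vec3 linearColor = pow(texture2D(uTex, vUV).rgb, vec3(2.2));

        // ...do the lighting / blending here, in linear space...
        vec3 result = linearColor * 0.5;   // stand-in for the real computation

        // ...then encode back to sRGB for display.
        gl_FragColor = vec4(pow(result, vec3(1. / 2.2)), 1.);
    }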
WebGL + GLSL should already adjust this for you, giving a linear colorspace. Notice that 'unadjusted' actually looks correct (at least on my browser, Chrome @ Linux).
This would be in violation of the standard (in a way that would cause a lot of things to render wrong -- anybody who was rendering correctly would now be wrong!), and also it isn't practical in most cases.
It's a common misconception that this is taken care of for you by <platform X>, but it basically never is, and I'm positive WebGL is no exception.
It is an attribute of a properly defined encoding model. Further, there are two forms of linear, display linear and scene linear, which are essential to grasp for rendering approaches.
I never said so. I said it gives >> a << linear colorspace. There are many such color spaces. Some are linear in different aspects (better at close or far away colors), and some only preserve some aspects (linear in brightness, or chroma, etc). I didn't intend to fully go into color theory.
I'm not even certain how well defined the color space is, and how properly linear it is. I do know that it accounts for gamma correction though (at least on my machine).
Linear is not a colour space; a properly defined colour space, as per the ISO specification, requires:
- A well defined transfer function.
- Properly described primary light chromaticities.
- Properly defined white point colour.
The issue is with your claim that "there are many such color spaces". Linear specifically relates to the first point, and even then it doesn't say whether one is speaking of display linear or scene linear.
The other points relate to perceptually uniform discussions, which would be a misuse of the term linear.
I agree that this is not a suitable forum for color theory. “Linear”, however, is a much-confused subject, worthy of explanation.
Unadjusted looks correct for me in Chrome and Firefox in Fedora. I have an nvidia card so I'm not using Wayland, I'm not sure if that would make a difference.
Square rooting the color is a trick I have used for ages, but I have not frequently seen it elsewhere. It does pretty much the opposite of multiplying the color by itself, i.e. it decreases the contrast and produces a "softer" result that is still in the 0-1 range.
In general, you can produce a variety of interesting color-space transformations with simple math on the current fragment, as opposed to more complex methods that rely on sampling output, like HDR processing.
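A few toy examples of that kind of per-fragment math, as plain GLSL helpers (my own names, nothing from the demo):

    // A few cheap per-fragment adjustments; c is the shaded color in [0, 1].
    vec3 softer(vec3 c) { return sqrt(c); }   // lifts the darks, lowers contrast
    vec3 harder(vec3 c) { return c * c; }     // crushes the darks, raises contrast
    vec3 desaturate(vec3 c, float amt) {      // blend toward the pixel's own luma
        float luma = dot(c, vec3(0.299, 0.587, 0.114));
        return mix(c, vec3(luma), amt);
    }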
Gamma correction is achieved by raising the color to a power; for encoding, that power is 1/gamma. So sqrt is the same as gamma correction with gamma = 2.0. On typical monitors, gamma is 2.2, so this is close, and maybe faster if the GPU has optimizations for fast square root.
I thought about that, but some of the results I found on Google suggest that output from WebGL shaders is supposed to be in a linear color space [0] [1]. Then again, some of the comments here [2] suggest that you do have to do gamma correction manually.
Incidentally, I didn't know to do gamma correction until I came across [3], which explained why the output from my raytracers always looked a little off ;)
If you load a texture, e.g. from a JPEG, and use that in a shader, the data is already gamma-encoded and stays that way when loaded, so if you're just displaying it you need no correction. If you are generating colors in the shader, you will need to correct for gamma.
GL does have the capability to handle linear color space buffers correctly, but you have to enable SRGB, and initialize the framebuffer correctly with SRGB color space, and I'm not sure if GLES (WebGL) can do this.
What happens if you need to alter the color of a texture that's been loaded? For example, if you want to do diffuse shading on a sphere with a texture mapped to it. Do you first need to convert the texture pixel back to linear space, apply the shading, and then correct for gamma?
I believe that, in WebGL, if you sample the texture in a shader the resulting color is in a linear space. And output colors from a fragment shader are meant to be linear as well.
I realize now that I can't fully answer your question, because I'm unsure whether WebGL implementations are supposed to respect color correction information in the image file format. But I'm pretty sure that the color space in WebGL "shader land" is always linear.
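In case the texture isn't being decoded for you (i.e. sampling gives you the raw gamma-encoded values), a sketch of the decode, shade, re-encode order you describe would be something like this; all the names are placeholders, and the 2.2 power is the usual approximation rather than the exact sRGB curve:

    precision mediump float;

    uniform sampler2D uTex;   // the JPEG, i.e. sRGB/gamma-encoded texels
    uniform vec3 uLightDir;   // normalized, pointing toward the light
    varying vec2 vUV;
    varying vec3 vNormal;

    void main() {
        // 1. decode the texel back to linear
        vec3 albedo = pow(texture2D(uTex, vUV).rgb, vec3(2.2));

        // 2. shade in linear space
        float diffuse = max(dot(normalize(vNormal), uLightDir), 0.);
        vec3 lit = albedo * diffuse;

        // 3. re-encode for the display
        gl_FragColor = vec4(pow(lit, vec3(1. / 2.2)), 1.);
    }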
I have absolutely no experience with WebGL, so I have no idea. Sqrt would diminish intensity in highlights, if nothing else. So, in that way it would seem to be an approximation of gamma correction. I'd have to see the result (can't on tablet).
It's not that those demos are obsolete, just that browsers and Oracle have screwed up Java so much that access to this kind of resource is being lost. :-(
Try it without `sqrt(c)`! It brightens the darker values, a bit like adding ambient lighting, so you can still see the sphere when the light is added in. Without it, the dark side of the sphere fades into the background.
If you're interested in GPU ray tracing, I played with nVidia's OptiX a few years ago. It was a fun (and pretty easy, IIRC) way to do it on a GPU (nVidia anyway).
Square root of color sounds like gamma encoding. He had a linear color, and wants to store it as sRGB; raising it to 1/2.2 is "correct" but sqrt is close enough.
I loved Ken Perlin's Java applets years ago. Sadly, the demise of Java applets in the browser means you now have to jump through a few hoops to get them running.
Tangentially, there's always a bit of heartache when I use the text input in Steam's Big Picture mode, as I wish Perlin's innovative input modes saw wider usage (see the "pen input" section of the above link). Of course, I understand why not: it faces the same hurdle of teaching people a new input layout that dooms all non-traditional input methods.
This is the sort of tutorial that doesn't make any sense if you are not already knowledgeable about math, shaders, and rendering basics. At first I was hopeful that it would finally walk me through the basics of a renderer and explain the concepts behind it, but I was let down :)
I have so many questions regardless :) The struggle of a rookie goes as follows:
SECTION A
- why on earth do you take the square root of c? Is there a sane reason to do this?
SECTION B
- why normalize the vector? I'd love to see a detailed explanation.
- why is vPos.xy the first two parameters?
- is vPos.xy a constant value or is it evaluating as the shader script is executed pixel by pixel?
SECTION D
- V -= S.xyz; // what's going on here? Why use the -= operator? Is this something to do with the way shaders operate?
- float B = 2. * dot(V, W); // why take the dot product?
- float C = dot(V, V) - S.w * S.w; // Why take it again and subtract a square?
- return B*B < 4.*C ? 0. : 1.; // Why 4.? No other value seems to work
(I take a stab at working this section out in a sketch at the end of this comment.)
SECTION E
- "Improve raytrace to return distances of near and far roots." OK I admit that I'm totally lost at this point. Asking line-by-line explanation doesn't really help :)
SECTION H
- YAY, sin and cos!!! Finally I can mess around with something :)
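Edit: after staring at SECTION D some more, here's my best guess at the math, in case it helps another rookie (please correct me if I got it wrong). I think it also answers my own SECTION B questions: W gets normalized so that dot(W, W) = 1 and t behaves like an actual distance, and vPos.xy isn't a constant, it's interpolated so it takes a different value for each pixel.

    // SECTION D, as I understand it, with these assumptions:
    //   V = ray origin, W = normalized ray direction,
    //   S.xyz = sphere center, S.w = sphere radius.
    // A point along the ray is V + t*W, and it hits the sphere when
    //   |V + t*W - S.xyz|^2 = S.w^2.
    // "V -= S.xyz" just shifts the ray so the sphere sits at the origin.
    // Expanding the square gives
    //   dot(W,W)*t^2 + 2*dot(V,W)*t + dot(V,V) - S.w*S.w = 0,
    // and because W is normalized, dot(W,W) = 1. So with
    //   B = 2*dot(V,W)  and  C = dot(V,V) - S.w*S.w
    // it's just t^2 + B*t + C = 0, whose discriminant is B*B - 4.*C.
    // That's where the 4 comes from (the quadratic formula), which is why
    // no other value works; B*B < 4.*C means no real roots, i.e. a miss.
    float raySphere(vec3 V, vec3 W, vec4 S) {   // my own name, not the demo's
        V -= S.xyz;
        float B = 2. * dot(V, W);
        float C = dot(V, V) - S.w * S.w;
        float D = B * B - 4. * C;
        if (D < 0.) return -1.;                 // miss
        return (-B - sqrt(D)) / 2.;             // near root; far root is (-B + sqrt(D)) / 2.
    }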
It still blows my mind that people have to define Vec3 and Vec4 types. These should be native to modern programming languages, as should Vec2. Modern CPUs have vector registers that can support these sizes, and GCC has intrinsics for them that can be passed by value (even returned from a function IIRC), added and subtracted or multiplied by a scalar, and yet no language that I'm aware of has them as built-in types.
Vectors are not just for parallel computations, these small sized ones deserve to be a proper data type.
This is a really cool demonstration, love the code breakdowns. For a less technical, but broader scale description of ray tracing, Disney made an excellent video describing their use of ray tracing in movies. Check it out here:
Expected to see a simple ray tracer break down on some edge case. Saw a simple ray tracer, simply ray trace. Was quite disappointed. Why is this posted? What value does it add?
This explains and demonstrates, piece by piece, how a ray tracer works; it's a "breakdown" in the sense of "an explanatory analysis", not in the sense of "a failure."
Good for you that you already know how ray tracers work, and so possibly this adds no value for you. I already knew how they work and I thought it was a nice clear demonstration.