For anyone suffering from terribly long load times or freezing while the shader compiles: this is caused by the ANGLE wrapper that Firefox and Chrome use to translate WebGL calls to Direct3D 9. You can get better results by disabling it: http://www.geeks3d.com/20130611/webgl-how-to-enable-native-o...
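For reference, the relevant switches from that era (hedged from memory, so double-check the linked article; pref and flag names change between versions):

    Firefox: about:config -> set webgl.disable-angle to true (older builds used webgl.prefer-native-gl), then restart
    Chrome:  launch with the --use-gl=desktop command-line flag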
I think the rationale for using ANGLE is to avoid driver bugs, but in my experience it causes far more damage than the problems it solves. Practically any page with a non-trivial shader freezes Firefox for up to a minute, or sometimes even longer, on my machine.
I just checked http://www.browserleaks.com/webgl on Win 7: Firefox 30's ANGLE uses Direct3D 9, whereas Chrome Canary (v38, best performance) uses ANGLE with Direct3D 11.
Firefox might upgrade this sometime, but Aurora (Firefox 32) still uses Direct3D 9.
My understanding was that ANGLE was created so that DX9, 10, or 11 could be used underneath for rendering while doing OpenGL -> DX translation, the main reason being unstable GL drivers.
Nowadays, NVIDIA provides GLES2/3 APIs, but I'm not sure about AMD/Intel.
You should check whether your drivers and browsers are up to date. Most WebGL pages work just fine these days, although some (e.g. many Shadertoy shaders) tend to use overly expensive shaders that can be slow on older integrated Intel GPUs or on mobile GPUs in general.
Great work. I love seeing how the renderer quickly draws the basic details and then, ever more slowly over time, fills in the finer points. I know that's pretty much how ray tracing works, but it's still cool to see.
I'm learning about rasterization-based computer graphics at the moment, and it somehow bothers me that all of it might be obsolete in a couple of years when we can do path tracing in real time. It's such a fast-moving field.
I don't think raytracing will kill off rasterization for the crucial "first bounce" step anytime soon, if we're talking about increasingly detailed, ever-content-richer real-time 60 FPS game renderers. Usually, as soon as RT gets semi-real-time for simplistic scenes at a rather-too-low resolution (without AA), consumers move on to double the previous standard resolution (four times the per-pixel workload), such as the move from FHD to retina/HiDPI, and we're back to square one. Especially if, say, you find you need four split screens, stereoscopic, at 4K, with full shading fidelity.
Rasterization is just too cool a hack for avoiding the tracing of primary rays. In fact, it's all the hardware and software hacks that have evolved around rasterization over two decades, combined, that make it so, not rasterization by itself. Various GPU- and CPU-based culling methods, early and hierarchical Z-buffers: today's real-time rendering pipeline is an amazing combination of cool techniques that keeps getting better. Raytracing advocates say "pff, with real raytracing we wouldn't need all those dirty hacks", but for one, most of them come with the GPU or are well known and intuitive to implement, and to get anywhere near real time, raytracers need to implement another set of even more (and less comprehensible, intuitive, or hardware-accelerated) hacks of their own.
All shading stages are also designed around rasterization and have matured over the last decade. It's good fun to write a pixel-shader-based raytracer, but that alone doesn't make it the better fit for current- or next-gen gamedev at all.
The oft-anticipated "hybrid" approach, however, is finally arriving. Once you have a "depth-aware screen space" (whether voxelized or N layers), you can shoot specialized and rather simple rays in post-process to create diverse outstanding effects, from much smoother water surfaces to, of course, "screen-space raytraced reflections" (SSRR).
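To make "rather simple rays" concrete, here's a rough sketch of the SSRR idea. The uniform names (uProj, uDepthTex) are placeholders I made up, and a real implementation needs smarter stepping and thickness handling; this just shows the shape of the technique:

    // GLSL helper (would live inside a JS shader-source string in WebGL):
    // march the reflected ray in view space and test it against a depth buffer.
    const ssrrSnippet = `
      uniform mat4 uProj;           // projection matrix (placeholder name)
      uniform sampler2D uDepthTex;  // linear view-space depth (placeholder name)

      // Returns the hit UV, or vec2(-1.0) if the ray leaves the screen or misses.
      vec2 traceScreenSpaceRay(vec3 originVS, vec3 dirVS) {
        vec3 p = originVS;
        for (int i = 0; i < 32; i++) {
          p += dirVS * 0.1;                         // fixed step; real code refines this
          vec4 clip = uProj * vec4(p, 1.0);
          vec2 uv = clip.xy / clip.w * 0.5 + 0.5;
          if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) break;
          float sceneDepth = texture2D(uDepthTex, uv).r;
          if (-p.z > sceneDepth + 0.01) return uv;  // ray passed behind geometry: hit
        }
        return vec2(-1.0);
      }
    `;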
When we are able to trace multiple rays with multiple bounces per pixel in real-time, would the first bounce really carry that much weight compared to the subsequent bounces that it would justify a separate rendering technique?
He's referring to demos that have been around for years, and I've been following the space too. His argument doesn't really refute any of my points. Knowing how rasterizers work is never going to hurt you; quite the opposite. If you're serious about computer graphics, you'll really want to have internalized both raytracing basics and rasterization basics. If I didn't know how triangle meshes are fed into the GPU and processed into the final, fully lit and shaded output pixels for any one of my 94 favourite games, I'd feel pretty uneasy.

Rasterization will always be orders of magnitude faster by necessity: that means when raytracing can finally render ca. 2004 scene complexity (think GTA:SA -- hint, it still can't, even at quarter resolution), rasterization can render ca. 2016 (or 2018, or whenever) scene complexity at full resolution. Guess what the folks at Rockstar, Ubisoft, and Crytek want to do? Throw more details, more content variation, more procedurally generated or perturbed geometry at the screen at 60+ FPS. Sure, Brigade can reach almost (barely) 30 FPS at some low resolution in a small, limited, preprocessed scene, and then there's no room left for any physics, AI, animated crowds, etc. at all. They're doing important work and one day the big payoff will come, but for the next 10 years, knowing how our rasterizers work will not be wasted at all.
After seeing the Brigade Engine and LuxRender path-tracing demos, I think you are 100% correct to be worried, and beyond that, you should avoid wasting your time as much as possible. Actually, when you say "learning", that leads me to believe you are enrolled in some course, under some curriculum, at some school.
I don't believe that any individual school, professor or course can really be up-to-date with the most leading edge practices or even theory, especially in high technology.
I think that the only really interesting area now in computer graphics is in doing path-tracing in hardware.
There is a massive culture that is still interested in other things, but I think they are behind the times.
Sooner or later NVIDIA or ATI/AMD will put native path-tracing capabilities into their graphics cards (not like OptiX, where you program the existing architecture for it, but circuits/architecture truly optimized for path tracing a scene given a set of geometry, materials, and lights).
At that point all of the rasterization and lighting calculation tricks will be obsolete.
Probably this will be buried because there is an enormous amount of effort going into those old-fashioned approaches, but oh well. I have to say it.
Continue learning that stuff: the algorithms will be useful, and you may find yourself working on an embedded device without the CPU power to do path/ray tracing.
It works well on OS X 10.10 with an HD4000 in Firefox, Chrome, and even in Safari. The "framebuffer not complete" warning looks like you're using an extension that isn't supported everywhere, probably OES_texture_float or WEBGL_depth_texture?
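If it helps anyone debug, here's a minimal sketch (not the demo's actual code) of checking whether those optional extensions are available:

    // Query optional WebGL 1 extensions before relying on them.
    const gl = document.createElement('canvas').getContext('webgl');
    for (const name of ['OES_texture_float', 'WEBGL_depth_texture']) {
      // getExtension() returns null when the extension is unsupported.
      console.log(name, gl && gl.getExtension(name) ? 'supported' : 'missing');
    }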
I don't think the author put much effort into control logic like keeping motion from clipping through objects. It's more a showcase of how quickly his raytracing algorithm can render images. That was the real takeaway for me: having written a raytracer in C++, it's amazing to see this running so quickly, and in the browser too.
This is a damn impressive demo, but it's not really "in the browser". The browser is the shell, but absolutely all of the heavy lifting is being done on the GPU. No matter what you're using to push shaders over to the GPU, you're really looking at the same performance (so long as that's all that's happening -- it's obviously easy to do horrible things and slow down WebGL or D3D or whatever you happen to be using for a graphics API).
Author here! It's very true that the GPU does the heavy lifting, although I'm not sure exactly where an "in the browser" line could be drawn (conceptually any page uses the browser as an execution engine and library).
It's a bit trickier due to restrictions in WebGL (which is OpenGL ES based) compared to the desktop (e.g. the lack of bitwise operators makes it a pain to get randomness that doesn't bias the result), but it's basically the same.
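For the curious, the usual workaround is a float-only hash. A hedged sketch (not necessarily what this renderer uses) of the kind of thing that stands in for bitwise tricks in a WebGL 1 shader:

    // Classic float-only hash, carried in the usual JS shader-source string.
    // GLSL ES 1.0 has no integer/bitwise ops, so it's all float math; the
    // quality is mediocre next to a proper integer hash, but it's the common trick.
    const randSnippet = `
      float rand(vec2 seed) {
        return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
      }
    `;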
Could you please list literature/papers that you found especially useful while making the renderer? I plan to do the same thing for educational purposes.
Please let me know if you have any questions (see my email at http://jonathan-olson.com/about), and please feel free to use my code however you like (things I wrote are MIT-licensed, but I use CC-BY and CC-BY-NC HDR images).
If you're interested in ray/path tracing or photorealistic rendering at all I would seriously recommend Physically Based Rendering[1]. It's probably one of the most satisfying book purchases I've made. The authors go through every aspect of implementing a quality path tracer: image sampling, surface shading models, statistics and integration methods, intersection testing and acceleration and more. It's an absolute treasure trove of information. (Be prepared to do a lot of math.)