I especially like the up-to-date examples for how to write WebGL in shorthand and the common practices examples. The explanations are phenomenal too. I guess what I'm trying to say is, this is likely the best WebGL reference I've seen yet.
My only suggestion is that the author name himself and date the content (at least month/year)! Both can be inferred from the GitHub link, but they belong on the page itself.
So does anyone here have links to similar documentation for the above?
Started reading the tutorials... and I have to say the quality is questionable. From very big things like the claim that the API is fundamentally 2D (really?) down to small things like doing the pixel-space to clip-space conversion in an inexperienced way (it's one 2D vector multiply or a matrix multiply... surely?).
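For reference, the conversion being discussed can be done as a single multiply-add per axis. A minimal sketch (the function name and `width`/`height` parameters are my own, not from the tutorial):

```javascript
// Convert canvas pixel coordinates to WebGL clip space (-1..+1, Y flipped).
function pixelToClip(x, y, width, height) {
  return [
    (x / width) * 2 - 1,   // 0..width  -> -1..+1
    (y / height) * -2 + 1, // 0..height -> +1..-1 (Y grows down in pixels)
  ];
}

// Top-left pixel of a 400x300 canvas maps to clip-space (-1, 1):
console.log(pixelToClip(0, 0, 400, 300));     // [-1, 1]
console.log(pixelToClip(400, 300, 400, 300)); // [1, -1]
```

The same scale-and-offset can of course be folded into a matrix and applied in the vertex shader.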
He is clearly aware of some of this stuff though (he mentions the magic perspective w divide in a later tutorial), so maybe it's intended and I am just being over-critical, since I understand how this stuff works and disagree with the approach taken to introduce it...
Saying that, he seems to have confused a scene graph with a transform hierarchy. Oh dear...
TL;DR: OpenGL 1.0 is a 3D library because you can give it 3D data and it will draw a 3D scene with no 3D knowledge on your part. WebGL is not a 3D library because you have to supply the 3D knowledge.
Arguably it would have been better to say WebGL is a rasterization library that has some features that make it good for rasterizing 3D. If you think about it, though, it is, by no reasonable definition, a 3D library.
As for any other issues, patches and suggestions are welcome.
As for the 2D rasterization vs. 3D math question, I've bumped into your philosophy online before and never commented, but it did touch a nerve with me, as you suggest. I'm not entirely sure why, but it's as much an emotional argument as a logical one.
After I thought about it for a while, I concluded that you have a valuable and interesting point. But every time I read it I keep wondering a couple of things, so since you're here I figured I'd just ask.
First, why deny so absolutely that WebGL is 3D? I've been feeling like your point might have more traction with graphics programmers if it left the question open rather than claiming to close the book. Even though you're right that you need to do your own math, WebGL still accepts 3D data, and it does 3D operations on that data. WebGL enables 3D graphics, and denying that it does is inherently problematic, isn't it? The argument that the amount of prior knowledge necessary to use a library should dictate the category of a library also seems pretty subjective to me, and in my experience doesn't generally hold - not many libraries I use are truly black-box libraries that don't require prior knowledge. My personal use of OpenGL has always involved me doing a lot of 3D math myself, just like my personal use of OpenCV requires a lot of prior image processing and computer vision knowledge.
Second, why try to convince the internet that WebGL is 2D and not 3D, as opposed to, say, the Khronos Group? If all the official sources of documentation call WebGL a 3D library, then isn't this argument probably going to be futile and never-ending?
The 3D perspective page http://webglfundamentals.org/webgl/lessons/webgl-3d-perspect... clearly describes the rasterization phase as having 3D: the automatic perspective divide and the triangle clipping to a 3D space. Clip space being 3D seems very strange if WebGL is only a 2D API.
My suggested patch would be to rephrase it for the beginner. "WebGL is largely a 3D rasterizing API and not a fire-and-forget scene rendering solution."
If WebGL were only 2D, it would be trivial to implement 3D without shaders in <canvas> -- but that is not true: canvas cannot rasterize 3D triangles.
It is called depth testing. There is a depth buffer (or z-buffer) which contains a depth value for each pixel. When a pixel is about to be written, its depth is calculated and compared to the value in the depth buffer; if it is closer to the camera, it is drawn and the depth buffer value is updated, otherwise the pixel is discarded.
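The logic described above can be sketched as a toy software version (this is an illustration, not the actual GL API; the comparison matches the common `gl.LESS` convention where smaller depth means closer):

```javascript
// Toy software sketch of a depth test: write the fragment only if it is
// closer than what the depth buffer already holds at that pixel.
function depthTestWrite(depthBuffer, colorBuffer, index, depth, color) {
  if (depth < depthBuffer[index]) { // incoming fragment is closer
    depthBuffer[index] = depth;     // update the stored depth
    colorBuffer[index] = color;     // write the pixel
    return true;                    // fragment passed
  }
  return false;                     // fragment discarded
}

const depthBuf = new Float32Array(4).fill(1.0); // cleared to the far plane
const colorBuf = new Array(4).fill(0x000000);

depthTestWrite(depthBuf, colorBuf, 0, 0.5, 0xff0000); // passes, pixel is red
depthTestWrite(depthBuf, colorBuf, 0, 0.8, 0x00ff00); // farther: discarded
```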
You should be careful with bold claims, especially if you don't have a great depth of knowledge (everyone should believe this...).
I commented on the post.
(Don't let me nay-say too much. You have done a really wonderful thing here. That is what matters most.)
Also, the idea that old desktop OpenGL is still valid grates, and it encourages people to use it... it's something we would have liked to see die 5-10 years ago, when it was already out of date.
It's also the case that all of the 'desired' functionality for 3D which is 'missing' is equally applicable to 2D, and so equally missing from the API. There are no rotations 'out of the box', no aspect-ratio correction, no texture mapping, no anything. This is what we expect from a low-level API: the minimum set of features required to make the hardware do the job - not an encyclopaedic library of derivative functionality (that is the place of a higher-level library than the hardware interface).
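To make the point concrete: even a plain 2D rotation is absent from the API, so you write the math yourself (or upload it as a uniform matrix to your own shader) either way. A minimal sketch:

```javascript
// 2D rotation is not provided "out of the box" any more than 3D is:
// rotate a point about the origin by `angle` radians, by hand.
function rotate2d([x, y], angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [x * c - y * s, x * s + y * c];
}

console.log(rotate2d([1, 0], Math.PI / 2)); // ≈ [0, 1]
```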
I think the author has confused it being a low-level API with it being 2D. These are not the same. It's just as devoid of features regardless of what space you are rendering in... I can really understand it if you're coming from a web background, where the stack is gigantic and "low-level" is an utterly foreign concept.
(Also, in my defence, I didn't go into great detail, since I commented on the post itself to avoid polluting HN with a conversation thread.)
It is completely up to the user whether the position data for the triangles is given in 2D coordinates from the start or in 3D coordinates and then transformed to 2D coordinates in the vertex shader.
The fact that it makes it fast to do the math for 3D does not make it a 3D API.
Also, please don't confuse OpenGL circa 1999 with modern OpenGL. The features removed in ES were considered bad practice 10-15 years ago... I do disagree with removing default shaders (our beautiful flexibility is now relegated to mere boilerplate), but I'm sure the OpenGL ARB had a good reason behind that choice (reducing driver responsibility, reducing points of failure, etc.).
Also, what is never up to the user is the perspective divide. If a naive user is not aware of this particular, very 3D-specific gem of the hardware/API, he will end up with subtle bugs and maybe even struggle to fix them due to incomplete knowledge...
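For readers who haven't met it: after the vertex shader outputs a clip-space position (x, y, z, w), the hardware divides the other components by w. A sketch of the arithmetic (the function name is mine):

```javascript
// The perspective divide: clip space (x, y, z, w) -> NDC (x/w, y/w, z/w).
// WebGL performs this automatically after the vertex shader runs.
function perspectiveDivide([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// With a typical perspective projection, w ends up proportional to depth,
// so distant points shrink toward the center of the screen:
console.log(perspectiveDivide([2, 2, 2, 2])); // [1, 1, 1]      (near)
console.log(perspectiveDivide([2, 2, 8, 8])); // [0.25, 0.25, 1] (far)
```

This divide is what produces foreshortening, and it is why it is described as a 3D-specific feature of the pipeline.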
It's got nothing to do with performance.
Also, what's the final verdict on Fast InvSqrt()? Is the optimal magic number Q3's 0x5f3759df, Chris's 0x5f375a86, or your 0x5F375A7F?
The differences are tiny though, and there are other measures that can be used (which will result in different constants).
In any case, on most modern hardware it is not relevant, since there is some lookup-based instruction that is both more accurate and faster.
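For anyone curious, here is the classic trick ported to JavaScript, using a shared buffer to reinterpret float bits as an integer. The constant shown is Quake III's 0x5f3759df; as noted above, other constants trade accuracy slightly differently:

```javascript
// Fast inverse square root: bit-level initial guess plus one
// Newton-Raphson refinement step (single-precision, like the original).
const buf = new ArrayBuffer(4);
const f32 = new Float32Array(buf);
const u32 = new Uint32Array(buf);

function fastInvSqrt(x) {
  f32[0] = x;
  u32[0] = 0x5f3759df - (u32[0] >> 1); // magic bit-level initial guess
  let y = f32[0];
  y = y * (1.5 - 0.5 * x * y * y);     // one Newton-Raphson step
  return y;
}

// Within a fraction of a percent of the true value:
console.log(fastInvSqrt(4));   // ≈ 0.499
console.log(1 / Math.sqrt(4)); // 0.5
```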
The problems here aren't enormous (a smart programmer will work them out for himself), but they are misleading. I think there is some confusion just because this is a low-level API, essentially a hardware abstraction layer, but the user is expecting high-level features like a complete maths library. (Maybe?)
For one thing, WebGL is not a userland "library", it's an API layer on top of device driver implementations of the OpenGL spec. It's not designed to be something that "makes 3D easy"; it's not even a "library" in the npm/jQuery sense of the word.
I would not recommend any tutorials written by an author whose reasoning about OpenGL is so fundamentally flawed. You should really re-think what you're claiming here.
Good that it really covers WebGL, and not Three.js, as several "WebGL" books and tutorials do. (As I wrote two days ago: https://news.ycombinator.com/item?id=9320571 )
If you are doing a game or a 3D application in the browser, WebGL would obviously be the way to go. But apart from that, where could it be used? Could websites and web applications benefit from using WebGL? If so, in what way?
Would it improve or worsen battery performance?
PS: Sorry for all the noob questions, but I am genuinely curious about the implications of WebGL.
Only the latest version of iOS supports it; on Android it is a joke, with Chrome blacklisting many handsets; and Windows Phone only partially supports it.
You can re-enable it via developer flags, but no normal user is going to know about it.
You can see (somewhat skewed) stats on http://webglstats.com/
Interestingly iOS is ahead of Android in tablets, but behind on phones.
Note, the "Frustum" example linked on the 3D perspective page is not working on Firefox (screenshot comparing FF and Chrome).
 - http://webglfundamentals.org/webgl/frustum-diagram.html
 - http://webglfundamentals.org/webgl/lessons/webgl-3d-perspect...
 - http://nacr.us/media/pics/screenshots/screenshot--18-21-58.p...
Are you kidding? Most programmers with really no graphics experience are along the lines of "wtf is a shader?!" or "wtf is clip space?!"... This gives absolutely no clear explanation of what those things are; even the fundamentals section assumes you already know the fundamentals...
Second, WebGL has builtin depth buffers, vertices are specified with 3+ coordinates, and most of the shader math operates on things obviously in three dimensions... so, saying it's 2D is incredibly misleading.
If I download a png library I expect it to decode and/or encode PNGs not just be a zlib library and some pointers to some docs on the png format.
WebGL is not a 3D library by any possible definition, as it does not provide 3D. It's a rasterization library at best.
Why does that matter? Because if you want a 3D library, as in a library that does 3D for you, then you want something else like three.js (http://threejs.org). Knowing that WebGL is not a 3D library, and that you're going to have to write your own from scratch if you use it directly, seems like an important distinction.
What you're calling "3D" is just utility functions to transform something into clip space. The reason they got rid of those in OpenGL ES (and WebGL is based on ES) is that most of them were useless outside of demos. pushMatrix/popMatrix/translate/rotate/scale are simple to use, but as soon as you need to do something like interpolate a camera between two rotations, you end up doing your own vector math and using glLoadMatrix anyway.
The author is (imho) correct in pointing this out from the perspective of having produced a tutorial for newcomers to this scene. So far, my observation is that those who wish to argue with him about this point seem to want to generalize it into a ball of mystery, as they are more expert and know the territory better. That may well be the case, but use the rear-view mirror and look at this from a total newbie perspective. You can't just load Wavefront .obj files into a WebGL instance and expect to get a bouncing ball. You'll need a 3D engine, or library, to do that. There's room in this discussion to fill that void, for the newbie, by using terminology that allows a better understanding than, perhaps, those who understand it all already permit/allow/agree to.
The distinction seems to have a locus around the data formats - either of the files being loaded, or at the parameter/buffer-object layer. There is quite some territory between an .OBJ vector (for example) and some other vector format that needs to be loaded into buffers and passed over to the hardware device through a driver layer. Where does the physics go? What is a shader for and why bother with it in this context? These questions get answered by having firm terminology: WebGL is not a 3D library, because a 3D library would have some sort of blackbox'ed object loader, maybe a bit of physics/collision-detection too, and so on .. and it may well be an engine which uses WebGL simply as one of a series of front-ends, through a driver layer, successfully too - they're out there.
It's the opposite, this stuff is already hard, there's no need to make it more complicated by introducing a distinction that's misleading.
> WebGL is not a 3D library, because a 3D library would have some sort of blackbox'ed object loader, maybe a bit of physics/collision-detection too, and so on ..
That's a pretty arbitrary cutoff, but all the same: you're describing a 3D engine, not a library. Besides, why would you want all that in one library? I'd rather have my mesh loading and my physics be separate libraries, so I can have more choices and keep my includes small.
The author is just making it clear that there's no magic here. It's all just 2D points on a screen eventually.
It also isn't a fully 3D API either, otherwise we wouldn't have to be so concerned about the order in which we draw transparent objects. This is the result of using a 2D drawing surface plus a depth buffer.
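To illustrate why draw order matters: standard "over" alpha blending is order-dependent, so translucent objects are typically sorted back-to-front before being drawn, since the depth buffer alone can't resolve them. A per-channel sketch (names and sample values are illustrative):

```javascript
// The "over" blend operator is order-dependent: blending A over B does
// not give the same result as B over A, hence the sorting requirement.
function blendOver(dst, src, alpha) {
  return src * alpha + dst * (1 - alpha);
}

// Sort translucent draws by camera-space depth, farthest first:
const draws = [
  { name: 'glass', depth: 2.0, color: 0.8, alpha: 0.5 },
  { name: 'smoke', depth: 5.0, color: 0.2, alpha: 0.5 },
];
draws.sort((a, b) => b.depth - a.depth); // back-to-front

let pixel = 0.0; // background
for (const d of draws) pixel = blendOver(pixel, d.color, d.alpha);
console.log(pixel); // smoke first, then glass on top: ≈ 0.45
```

Reversing the order gives a different pixel value, which is exactly the artifact you see when transparent geometry is drawn unsorted.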
That said, it might be too subtle a point to open with. Especially since you are showing code, the understanding of which requires material that they haven't covered yet.
All of the 3D operations and 3D-relevant functionality break the argument. The depth buffer, for instance, and even more so the perspective transformation hack (1/w), have limited utility outside of 3D...
Thank you for sharing this!
Although I have to admit that the NeHe tutorials were quite good.
Personally I think the biggest problem with modern 3D APIs is learning the shading languages and how to separate work across them.
Just giving semi-commented code samples isn't enough when a student doesn't know what an attribute is.
It's not complete by any means, but will get there eventually, hopefully. :)