Second, WebGL has built-in depth buffers, vertices are specified with three or more coordinates, and most of the shader math operates on things that are obviously three-dimensional... so saying it's 2D is incredibly misleading.
If I download a PNG library I expect it to decode and/or encode PNGs, not just be a zlib library and some pointers to docs on the PNG format.
WebGL is not a 3D library by any possible definition, as it does not provide 3D. It's a rasterization library at best.
Why does that matter? Because if you want a 3D library, as in a library that does 3D, then you want something else, like three.js (http://threejs.org). Knowing that WebGL is not a 3D library, and that you're going to have to write your own from scratch if you use it directly, seems like an important distinction.
What you're calling "3D" is just utility functions that transform something into clip space. The reason they were removed in OpenGL ES (and WebGL is based on ES) is that most of them were useless outside of demos. pushMatrix/popMatrix/translate/rotate/scale are simple to use, but as soon as you need to do something like interpolate a camera between two rotations, you end up doing your own vector math and using glLoadMatrix anyway.
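For the curious, those removed helpers are easy to rebuild yourself. Here's a rough sketch of a matrix stack in plain JavaScript (4x4 column-major matrices, the layout WebGL expects; the names like `MatrixStack` are made up for illustration, not any real API):

```javascript
// 4x4 identity matrix as a flat, column-major 16-element array.
function mat4Identity() {
  return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
}

// result = a * b, both column-major.
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

// Translation matrix: the offset lives in the last column (indices 12..14).
function mat4Translation(x, y, z) {
  return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  x, y, z, 1];
}

// A fixed-function-style matrix stack: push saves the current transform,
// pop restores it, translate post-multiplies onto the top of the stack.
class MatrixStack {
  constructor() { this.stack = [mat4Identity()]; }
  current() { return this.stack[this.stack.length - 1]; }
  push() { this.stack.push(this.current().slice()); }
  pop() { this.stack.pop(); }
  translate(x, y, z) {
    this.stack[this.stack.length - 1] =
      mat4Multiply(this.current(), mat4Translation(x, y, z));
  }
}
```

You'd pass `stack.current()` to a shader as a uniform; rotate/scale fall out the same way with their own matrix constructors. The point stands, though: the moment you need slerp between two camera orientations, this stack does nothing for you.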
The author is (imho) correct to point this out, from the perspective of having written a tutorial for newcomers to this scene. So far my observation is that those who wish to argue with him on this point are more expert and know the territory better, and so tend to generalize it into a ball of mystery. That may well be the case, but look in the rear-view mirror and consider this from a total newbie's perspective. You can't just load a Wavefront .obj into a WebGL instance and expect to get a bouncing ball; you'll need a 3D engine, or library, to do that. There's room in this discussion to fill that void for the newbie by using terminology that gives a better understanding than, perhaps, those who already understand it all would permit.
The distinction seems to have its locus around the data formats, either of the files being loaded or at the parameter/buffer-object layer. There is quite some territory between an .obj vector (for example) and whatever vector format needs to be loaded into buffers and passed over to the hardware through a driver layer. Where does the physics go? What is a shader for, and why bother with it in this context? These questions get answered by having firm terminology: WebGL is not a 3D library, because a 3D library would have some sort of blackbox'ed object loader, maybe a bit of physics/collision-detection too, and so on. And it may well be an engine that uses WebGL as just one of several front-ends, through a driver layer, successfully; they're out there.
It's the opposite: this stuff is already hard; there's no need to make it more complicated by introducing a distinction that's misleading.
> WebGL is not a 3D library, because a 3D library would have some sort of blackbox'ed object loader, maybe a bit of physics/collision-detection too, and so on ..
That's a pretty arbitrary cutoff, but all the same: you're describing a 3D engine, not a library. Besides, why would you want all that in one library? I'd rather have my mesh loading and my physics be separate libraries, so I have more choices and can keep my includes small.
The author is just making it clear that there's no magic here. It's all just 2D points on a screen eventually.
It isn't a fully 3D API either; otherwise we wouldn't have to be so concerned about the order in which we draw transparent objects. That's the result of using a 2D drawing surface plus a depth buffer.
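Concretely, that draw-order concern means sorting transparent objects back-to-front yourself before issuing draw calls, since the depth buffer alone can't blend correctly. A sketch, assuming a made-up `{position: [x, y, z]}` object shape (not any real WebGL API):

```javascript
// Squared distance between two 3-element points; avoids a needless sqrt
// since we only compare distances.
function distanceSquared(a, b) {
  const dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
  return dx * dx + dy * dy + dz * dz;
}

// Return a copy of `objects` sorted back-to-front relative to the camera:
// the farthest object comes first so nearer transparent surfaces blend
// over it correctly.
function sortBackToFront(objects, cameraPos) {
  return objects.slice().sort(
    (a, b) => distanceSquared(b.position, cameraPos) -
              distanceSquared(a.position, cameraPos)
  );
}
```

In a real renderer you'd typically sort by view-space depth rather than raw distance, and it still breaks down for intersecting geometry, which is exactly why this stays the caller's problem rather than the API's.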
That said, it might be too subtle a point to open with, especially since you're showing code whose understanding requires material that hasn't been covered yet.
All of the 3D operations and 3D-relevant functionality break the argument. The depth buffer, for instance, and even more so the perspective transformation hack (1/w), have limited utility outside of 3D...
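To illustrate the 1/w point: a perspective projection matrix stashes -z into w, and the divide by w after the vertex shader is what makes distant geometry converge toward the center of the screen. A rough sketch in plain JavaScript (column-major, OpenGL-style clip space; illustrative only, not tied to any real library):

```javascript
// Standard OpenGL-style perspective projection matrix (column-major).
// fovY is the vertical field of view in radians.
function perspective(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) / (near - far), -1, // this -1 copies -z into w
    0, 0, (2 * far * near) / (near - far), 0,
  ];
}

// Transform a point [x, y, z] (implicit w = 1) to clip space, then do the
// perspective divide by w to get normalized device coordinates.
function project(m, p) {
  const x = m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12];
  const y = m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13];
  const z = m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14];
  const w = m[3] * p[0] + m[7] * p[1] + m[11] * p[2] + m[15];
  return [x / w, y / w, z / w];
}
```

With a 90° vertical fov, a point at x = 1 lands at NDC x = 0.5 when it's 2 units away but only 0.1 when it's 10 units away: same lateral offset, smaller on screen. The GPU performs that divide for you, which is hard to call a 2D feature.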