This should really be titled "Introduction to WebGL". The GL programming model is very different from what's actually going on under the hood. GL is at best a faint reminder of how GPUs worked in the '90s.
This is by now also over a decade old, but still gives a much better idea of how current GPUs work:
https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...
FWIW, OpenGL only mapped to how the hardware worked for a particular vendor (Nvidia) and for a very short time (~GeForce 2, IIRC).
OpenGL was always a higher-level API; it just provides functionality instead of being a framework (so you had to make your own). Even on SGI hardware a lot of features were done only in software, and at that time most PC cards could barely do more than rasterization and transformations (and some not even that).
Even that wasn't really OpenGL anymore, even though some internal registers matched OpenGL directly.
The only HW that was truly OpenGL was Silicon Graphics' IRIS workstations. IrisGL was released as an open standard, with some minor changes, under the name OpenGL.
For me, I had to go deep into NVidia docs & sample code before it really clicked. And there's still a dearth of online materials for webgl 2.0 compute & webgpu. But it's still early ;)
That's pretty nice. I focused on the practical parts in my article, but diving into the rasterization process itself is also very interesting. Thanks for sharing!
Some years ago, I came across a website that interactively demonstrated how a (software) 3D engine rendered rotating gears similar to glxgears, in a step-by-step fashion. You could set the speed of the rendering process from a running animation down to each and every triangle being rasterized, with depth testing and everything. I still look for the site from time to time, but it seems to have vanished from the internet.
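In case it helps anyone picture what such a renderer is doing per pixel, here is a rough sketch (mine, not from the article or that site) of the core of a software rasterizer: an edge-function coverage test plus a depth test. Buffer layout and names are made up for illustration.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Vertex { float x, y, z; };  // screen-space position + depth

    // Twice the signed area of triangle (a, b, c); positive for
    // counter-clockwise winding in this coordinate convention.
    static float edge(float ax, float ay, float bx, float by, float cx, float cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Rasterize one triangle into a w*h color buffer with a depth buffer.
    static void rasterize(const Vertex& v0, const Vertex& v1, const Vertex& v2,
                          std::vector<uint32_t>& color, std::vector<float>& depth,
                          int w, int h, uint32_t rgba) {
        float area = edge(v0.x, v0.y, v1.x, v1.y, v2.x, v2.y);
        if (area <= 0.0f) return;  // back-facing or degenerate; skip

        // Bounding box of the triangle, clamped to the framebuffer.
        int minX = std::max(0,     (int)std::floor(std::min({v0.x, v1.x, v2.x})));
        int maxX = std::min(w - 1, (int)std::ceil (std::max({v0.x, v1.x, v2.x})));
        int minY = std::max(0,     (int)std::floor(std::min({v0.y, v1.y, v2.y})));
        int maxY = std::min(h - 1, (int)std::ceil (std::max({v0.y, v1.y, v2.y})));

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                float px = x + 0.5f, py = y + 0.5f;  // pixel center
                // Barycentric weights via edge functions; all non-negative
                // exactly when the pixel center is inside the triangle.
                float w0 = edge(v1.x, v1.y, v2.x, v2.y, px, py) / area;
                float w1 = edge(v2.x, v2.y, v0.x, v0.y, px, py) / area;
                float w2 = edge(v0.x, v0.y, v1.x, v1.y, px, py) / area;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;

                // Interpolate depth and apply the depth test.
                float z = w0 * v0.z + w1 * v1.z + w2 * v2.z;
                int idx = y * w + x;
                if (z < depth[idx]) {
                    depth[idx] = z;
                    color[idx] = rgba;
                }
            }
        }
    }

    int main() {
        const int w = 64, h = 64;
        std::vector<uint32_t> color(w * h, 0u);
        std::vector<float> depth(w * h, 1.0f);  // 1.0 = far plane
        rasterize({10.f, 10.f, 0.5f}, {50.f, 15.f, 0.5f}, {30.f, 55.f, 0.5f},
                  color, depth, w, h, 0xffffffffu);
        return 0;
    }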
The WebGL tutorial has the advantage of running everywhere, while WebGPU will take years to become widely available; even assuming it does get released this year across all major browsers, it requires a Vulkan/DX 12 class card anyway.
I don't see any reason for WebGPU outside of the browser; we already have middleware engines that can take advantage of the latest Vulkan/DirectX 12/Metal/LibGNM/NVN features without being castrated in capabilities.
Compiling to WASM is popular nowadays. It's enticing to be able to write your app against native WebGPU and then, with almost no code changes, compile it for the web browser.
WebGL is a JavaScript interface to OpenGL ES, with some limitations. If you only want the core feature set (no extensions), it is pretty much the same API, and with emscripten it is possible to write portable C, C++ or Rust code against the OpenGL ES API and have it work not only natively but also in the browser via WebGL.
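For a feel of what that portability looks like in practice, here's a minimal sketch (mine, not from the article) of a clear-screen loop restricted to the GLES 2.0 / WebGL 1 subset, built on GLFW since emscripten ships a port of it; window size, clear color and build flags are arbitrary/approximate.

    // Native: link against GLFW and GL. Browser: something like
    //   emcc main.cpp -sUSE_GLFW=3 -o index.html   (flags approximate)
    #include <GLFW/glfw3.h>
    #ifdef __EMSCRIPTEN__
    #include <emscripten/emscripten.h>
    #endif

    static GLFWwindow* g_window = nullptr;

    // One frame; only calls from the GLES 2.0 / WebGL 1 subset are used.
    static void frame() {
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(g_window);
        glfwPollEvents();
    }

    int main() {
        if (!glfwInit()) return 1;
        g_window = glfwCreateWindow(640, 480, "portable GL ES subset", nullptr, nullptr);
        if (!g_window) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(g_window);

    #ifdef __EMSCRIPTEN__
        // The browser owns the event loop, so register a per-frame callback
        // instead of looping ourselves.
        emscripten_set_main_loop(frame, 0, 1);
    #else
        while (!glfwWindowShouldClose(g_window)) frame();
        glfwTerminate();
    #endif
        return 0;
    }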
OpenGL ES is almost a subset of OpenGL, though there are also some ES-exclusive features. With some care it's possible to write code that works on both OpenGL ES (and WebGL if you want) and OpenGL.
Annoyingly there isn't a name for the common subset that works in both APIs.
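One concrete annoyance in that unnamed common subset is that the #version line differs between GLSL ES and desktop GLSL. A small sketch of the usual workaround (the names and the emscripten check are just illustrative): keep the shader body in the overlap and prepend a per-target preamble at runtime.

    #include <cstdio>
    #include <string>

    // Per-target preamble; the body below sticks to the overlap of
    // GLSL ES 3.00 (WebGL 2 / GLES 3) and desktop GLSL 3.30.
    #ifdef __EMSCRIPTEN__
    static const char* kPreamble = "#version 300 es\nprecision mediump float;\n";
    #else
    static const char* kPreamble = "#version 330 core\n";
    #endif

    static const char* kFragmentBody = R"(
    out vec4 fragColor;
    void main() {
        fragColor = vec4(1.0, 0.5, 0.2, 1.0);
    }
    )";

    // Concatenate before handing the source to glShaderSource().
    static std::string fragmentSource() {
        return std::string(kPreamble) + kFragmentBody;
    }

    int main() {
        std::printf("%s\n", fragmentSource().c_str());
        return 0;
    }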
My understanding is that webgl is a straight port of gles, and newer versions of gles incorporate most of the interesting features of newer versions of gl.
Hmm. This doesn't really have much to do with a GPU; it's more of a very, very 101 intro to OpenGL. It takes me back to grade 8 or 9 when I was originally learning this stuff! Very nostalgic. But it has very little to do with GPU instructions themselves.
While there are a lot of resources available for different graphics APIs, I have yet to find a good comprehensive tutorial or library for software rendering. Since I'm interested, I would love to know if anyone has a reference at hand.
> CPUs are highly specialized in computational speed and have a broad range of low-level instructions. GPUs are quite the opposite, because they are slower and simpler, but focuses on parallelization (high number of cores), and have a more limited set of instructions.
-------
The only thing I'd change is that I'd say that "CPUs are specialized in computational latency", because speed is an ambiguous word with regards to performance. Latency is more specific and a better description of how CPUs specialize.
GPUs are of course the opposite: GPUs specialize in bandwidth rather than latency.
I'm mildly surprised that people still write tutorials using OpenGL nowadays.
I thought the consensus was that 1) OpenGL is bad due to hidden state (and if I remember correctly, the hidden state can behave differently depending on the vendor, which exacerbates the problem), and 2) people should move to Vulkan (or Metal, or DirectX 12).
Don't believe the hype. OpenGL is still fine for getting started and for applications where Vulkan etc. is overkill. It makes perfect sense for game engines and heavy-duty applications to make the switch, but for the simple stuff, stick with OpenGL where it makes sense.
Vulkan/DX12 is quite intimidating, even for an experienced programmer, let alone a novice. I am a DX12 graphics driver developer and I still find the API intimidating at times, and a missing barrier can cause hard-to-debug graphical corruption, something you didn't have to worry about in OpenGL. Metal is simpler in comparison. WebGPU is a good stepping stone between OpenGL and Vulkan. It feels closer to Vulkan/DX12 in design, but takes care of the tougher things like synchronization, barriers, and memory management.
I think one of the main issues is that Vulkan is really bad from a usability perspective. OpenGL is too, but there is little incentive to switch from one bad API that you know how to use to another bad API that you don't know how to use. On the other hand, I've spent time learning WebGPU in the past year and found that it is an excellent API and an enormous quality-of-life improvement over OpenGL, so even if it has fewer features (due to the Web, not API design), it's still something I really want to switch to, unlike Vulkan.
I had a really good time writing a Vulkan renderer for a project I did. There's _A LOT_ of boilerplate, but there are helpful error messages every step of the way and the API makes sense because everything is explicit, unlike OGL.
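For anyone curious where those helpful error messages come from: they're mostly the validation layers, which you opt into at instance creation. A minimal sketch, assuming the Khronos validation layer is installed (e.g. via the Vulkan SDK); routing messages through a VK_EXT_debug_utils messenger is omitted for brevity.

    #include <vulkan/vulkan.h>
    #include <cstdio>

    int main() {
        // Opt into the Khronos validation layer; this is where most of the
        // per-call error messages come from during development.
        const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.pApplicationName = "validation-demo";
        app.apiVersion = VK_API_VERSION_1_1;

        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;
        info.enabledLayerCount = 1;
        info.ppEnabledLayerNames = layers;

        VkInstance instance = VK_NULL_HANDLE;
        VkResult res = vkCreateInstance(&info, nullptr, &instance);
        if (res != VK_SUCCESS) {
            std::fprintf(stderr, "vkCreateInstance failed: %d\n", (int)res);
            return 1;
        }

        // ... device, swapchain, pipelines, etc. go here; hook up a
        // VK_EXT_debug_utils messenger to route layer messages to your own
        // callback instead of the layer's default output.
        vkDestroyInstance(instance, nullptr);
        return 0;
    }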
There's nothing to be surprised about; Vulkan's creators themselves said that they do not want to replace OpenGL for these kinds of use cases.
Vulkan is very deliberately a very verbose, low-level API, to allow maximum optimization by game engine developers. It's not meant for your my-first-3d-program usage and it shouldn't be sold as such. It's the same way that using ASM to write CLI tools isn't a sensible approach.
<start a timer>
vi vulkan_one_pixel.cpp
<add the required code to open a window and draw *one* pixel>
<save, compile, debug, rinse, repeat until a window is opened and a pixel is drawn in it>
<stop timer>
<read timer>
wc -l vulkan_one_pixel.cpp
du -sh vulkan_one_pixel.cpp
<despair>
Vulkan has a very high initial cost when it comes to lines of code. You have to recreate a lot of stuff that you took for granted in OpenGL, such as a default framebuffer to draw to. Also, a lot of boilerplate code to create resources can be abstracted away into functions to reuse later. So while you might need a lot of lines to even draw a triangle, you won't need many more lines to draw a full 3D model.
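As an illustration of the kind of reusable helper meant here (a sketch under the usual simplifications: no allocator library, naive one-allocation-per-buffer, made-up names), buffer creation is the classic one you write once and then reuse for vertex, index, uniform and staging buffers:

    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <stdexcept>

    // Create a VkBuffer and bind freshly allocated memory to it.
    VkBuffer createBuffer(VkPhysicalDevice gpu, VkDevice device,
                          VkDeviceSize size, VkBufferUsageFlags usage,
                          VkMemoryPropertyFlags props, VkDeviceMemory* outMemory) {
        VkBufferCreateInfo bufInfo{};
        bufInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
        bufInfo.size = size;
        bufInfo.usage = usage;
        bufInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;

        VkBuffer buffer;
        if (vkCreateBuffer(device, &bufInfo, nullptr, &buffer) != VK_SUCCESS)
            throw std::runtime_error("vkCreateBuffer failed");

        // Ask what kind of memory this buffer needs...
        VkMemoryRequirements req;
        vkGetBufferMemoryRequirements(device, buffer, &req);

        // ...and find a memory type that satisfies both the buffer and the caller.
        VkPhysicalDeviceMemoryProperties memProps;
        vkGetPhysicalDeviceMemoryProperties(gpu, &memProps);
        uint32_t typeIndex = UINT32_MAX;
        for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i) {
            bool allowed = (req.memoryTypeBits & (1u << i)) != 0;
            bool matches = (memProps.memoryTypes[i].propertyFlags & props) == props;
            if (allowed && matches) { typeIndex = i; break; }
        }
        if (typeIndex == UINT32_MAX)
            throw std::runtime_error("no suitable memory type");

        VkMemoryAllocateInfo alloc{};
        alloc.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        alloc.allocationSize = req.size;
        alloc.memoryTypeIndex = typeIndex;
        if (vkAllocateMemory(device, &alloc, nullptr, outMemory) != VK_SUCCESS)
            throw std::runtime_error("vkAllocateMemory failed");

        vkBindBufferMemory(device, buffer, *outMemory, 0);
        return buffer;
    }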
What is nice to see is that, with Rust's linearity, resources can be cleaned up quite easily once they become unused by implementing a custom drop function. The challenge remains that knowing the order in which to do it requires consulting the Vulkan specification. Though even for this you can enable the Vulkan validation layers, which give a quite clear idea of what you are doing wrong.
Also for a beginner, WebGL is far more accessible. I'm just getting into it myself, and started with WebGPU. Bad idea lol. Went back to WebGL and even though it was still tough, I stuck with it and now have a basic grasp of what's going on. I get that there's a lot of hidden stuff under the hood, but now when I return to WebGPU at some point, I'll at least have context as to what it's trying to do.
Talking about OpenGL feels very regressive - the 'G' in GPU is hardly "graphics" any more, so presenting through the lens of rendering is backwards. These days, GPU is more like "General (data parallel) Processing Unit", and rendering is just a relatively uninteresting special case.
Think of it this way: wouldn't the world be a better place if Linux provided a way to create a GPU process without reference to displays?