Introduction to GPUs with OpenGL (monstar-lab.com)
155 points by Seb-C on March 7, 2022 | 47 comments



This should really be titled "Introduction to WebGL". The GL programming model is very different from what's actually going on under the hood. GL is at best a faint reminder of how GPUs worked during the '90s.

This is by now also over a decade old, but it still gives a much better idea of how current GPUs work:

https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...


FWIW, OpenGL only mapped to how the hardware worked for a particular vendor (Nvidia) and for a very short time (~GeForce 2, IIRC).

OpenGL was always a higher-level API; it just provides functionality instead of being a framework (so you had to make your own). Even on SGI hardware a lot of features were done only in software, and during that time most PC cards could barely do more than rasterization and transformations (and some not even that).


Even that wasn't really OpenGL anymore, even though some internal registers matched OpenGL directly.

The only HW that was truly OpenGL was the Silicon Graphics IRIS workstations. IrisGL was released as an open standard, with some minor changes, under the name OpenGL.


AFAIK, even though they are similar from a very high-level perspective, in practice IrisGL differs a lot from OpenGL - it isn't just a rename.

Also AFAIK IrisGL had functions for creating windows, etc, that didn't exist in OpenGL (GLUT was meant to bridge that gap).


No, IrisGL did not have any functions for creating windows; you used libX11 and the Xt toolkit to do that.


This is a nice very high level view.

But if I were to teach modern GPUs and rendering I would recommend starting with something like https://tayfunkayhan.wordpress.com/2018/11/24/rasterization-...


Scratch A Pixel is really comprehensive and a very "talk to me like I am 5 years old" kind of tutorial: https://www.scratchapixel.com/


Another nice resource is https://github.com/ssloy/tinyrenderer/wiki

The wiki is nice, and it has a functional renderer with CPU vertex/pixel shaders, texture mapping and an OBJ loader. A Python version is just 100 lines:

https://github.com/rougier/tiny-renderer


For me, I had to go deep into Nvidia docs & sample code before it really clicked. And there's still a dearth of online materials for WebGL 2.0 compute & WebGPU. But it's still early ;)


That's pretty nice. I focused on the practical parts in my article, but diving into the rasterization process itself is also very interesting. Thanks for sharing!


Some years ago, I came across a website that interactively demonstrated, step by step, how a (software) 3D engine rendered rotating gears similar to glxgears. You could set the speed of the rendering process anywhere from a running animation down to each and every triangle being rasterized, with depth testing and everything. I still look for the site from time to time, but it seems to have vanished from the internet.


If you are looking to use a more modern API (and language) I can recommend this wgpu/Rust tutorial https://sotrh.github.io/learn-wgpu/.


Saved for later. Also looks like I should get on the Rust train. This and Tauri are things I could definitely see myself using a lot in the future.


The WebGL tutorial has the advantage of running everywhere, while WebGPU will take years to be available - assuming it does indeed get released this year across all major browsers - and even then it requires a Vulkan/DX 12 card anyway.


wgpu has a wgpu-native component which is a native library that exposes a C API. I use it in D for my 3D projects and it's quite nice.


I don't see any reason for WebGPU outside of the browser, we already have middleware engines that can take advantage of latest Vulkan/DirectX 12/Metal/LibGNM/NVN features, without being castrated in capabilities.


It’s true that WebGPU native could be seen as just another middleware. However, what differentiates it is:

  1. A well-thought-out API with an actual standard
  2. Large conformance test suite
  3. Trivial deploy on the Web


Compiling to WASM is a popular thing nowadays. It's enticing to be able to write your app against native WebGPU and then, with almost no code changes, compile it for the web browser.


Fashion driven development more likely.


Which engines are you referring to?


Too slow to compile. OP's example doesn't require anything other than a browser: refresh the page and your changes appear instantly.


The tutorial is relatively nice; however, OpenGL, OpenGL ES and WebGL aren't the same, beyond sharing "GL" in the name.

Sure the basic examples look similar, until one wants to start porting existing applications.


WebGL is a JavaScript interface to OpenGL ES, with some limitations. If you only want the core feature set (no extensions), it is pretty much the same API, and with emscripten it is possible to write portable C, C++ or Rust code against the OpenGL ES API and have it work not only natively but also in the browser via WebGL.

OpenGL ES is almost a subset of OpenGL, though there are also some ES-exclusive features. With some care it's possible to write code that works on both OpenGL ES (and WebGL if you want) and OpenGL.

Annoyingly there isn't a name for the common subset that works in both APIs.
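
As a minimal sketch of that portability (context/window setup via EGL or emscripten is omitted, so this won't display anything as-is), the same GL ES 2.0 calls build natively and, through emscripten, against WebGL:

    #include <GLES2/gl2.h>              // same header for a native GLES2 build
    #ifdef __EMSCRIPTEN__
    #include <emscripten/emscripten.h>  // emscripten maps these GL calls to WebGL
    #endif

    // One frame of rendering; these calls are identical on GLES2 and WebGL.
    static void render() {
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... bind program/buffers and issue glDrawArrays(...) here ...
    }

    int main() {
    #ifdef __EMSCRIPTEN__
        emscripten_set_main_loop(render, 0, 1);  // the browser drives the loop
    #else
        for (;;) render();                       // native loop (vsync/swap omitted)
    #endif
    }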


My understanding is that WebGL is a straight port of GLES, and newer versions of GLES incorporate most of the interesting features of newer versions of GL.


WebGL 2.0 is a subset of OpenGL ES 3.0, whose latest version is OpenGL ES 3.2.

Additionally, even the overlapping subset isn't 100% equal due to the security constraints on the browser.


Sure, there are lots of differences between OpenGL and OpenGL ES, but the fundamental logic and architecture is pretty much the same.


This is a WebGL tutorial -- for OpenGL there's a fantastic tutorial at https://learnopengl.com/


Hmm. This doesn't really have much to do with the GPU itself; it's more of a very, very 101 intro to OpenGL. It takes me back to grade 8 or 9 when I was originally learning this stuff! Very nostalgic. But it has very little to do with GPU instructions themselves.


While there are a lot of resources available for the different graphics APIs, I have yet to find a good, comprehensive tutorial or library for software rendering. Being interested, I would love to know if anyone has a reference at hand.


There are a few around.

https://raytracing.github.io/books/RayTracingInOneWeekend.ht...

https://graphicscodex.courses.nvidia.com/app.html

https://www.gabrielgambetta.com/computer-graphics-from-scrat...

I can vouch for the first two as being good. I haven't actually looked at the third but I found I had a bookmark for it.


I thought the framing of CPUs as specialized, and GPUs as the opposite, was a bit odd and misleading.

If anything, it's GPUs that are specialized to address a narrow set of problems.


You might want to reread the paragraph.

> CPUs are highly specialized in computational speed and have a broad range of low-level instructions. GPUs are quite the opposite, because they are slower and simpler, but focuses on parallelization (high number of cores), and have a more limited set of instructions.

-------

The only thing I'd change is that I'd say that "CPUs are specialized in computational latency", because speed is an ambiguous word with regards to performance. Latency is more specific and a better description of how CPUs specialize.

GPUs are of course the opposite: GPUs specialize in bandwidth rather than latency.


I'm mildly surprised that people still write tutorials using OpenGL nowadays.

I thought the consensus is that 1) OpenGL is bad due to hidden states (and if I remember correctly, the hidden states can behave differently depending on the vendor, which exacerbates the problem), and that 2) people should move to Vulkan (or Metal, or DirectX 12).
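
To make the "hidden states" point concrete, a minimal OpenGL sketch: most calls operate on whatever object is currently bound to a global binding point rather than on an object you pass explicitly, so the effect of a call depends on state set somewhere else.

    GLuint texA, texB;
    glGenTextures(1, &texA);
    glGenTextures(1, &texB);

    glBindTexture(GL_TEXTURE_2D, texA);  // global state: texA is now "current"
    glBindTexture(GL_TEXTURE_2D, texB);  // some helper elsewhere rebinds...

    // Intended to configure texA, but silently configures texB,
    // because the call targets whatever is bound right now.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);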


Don't believe the hype. OpenGL is still fine for getting started and for applications where Vulkan etc. is overkill. It makes perfect sense for game engines and heavy-duty applications to make the switch, but for the simple stuff, stick with OpenGL where it makes sense.


Vulkan/DX12 is quite intimidating, even for an experienced programmer, let alone a novice. I am a DX12 graphics driver developer and I still find the API intimidating at times; a missing barrier can cause hard-to-debug graphical corruption, something you didn't have to worry about in OpenGL. Metal is simpler in comparison. WebGPU is a good stepping stone between OpenGL and Vulkan: it feels closer to Vulkan/DX12 in design, but takes care of the tougher things like synchronization, barriers and memory management.
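
For a sense of what a single "barrier" involves, here is roughly what one image layout transition looks like against the Vulkan C API (a sketch; cmd and image are assumed to be a command buffer and an image created earlier) - the kind of thing WebGPU derives for you from declared resource usage:

    // Transition an image from "rendered to" to "sampled in a shader".
    // Getting this wrong tends to show up as corruption, not as an error.
    VkImageMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = image;  // assumed: a VkImage created earlier
    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    barrier.subresourceRange.levelCount = 1;
    barrier.subresourceRange.layerCount = 1;

    vkCmdPipelineBarrier(cmd,  // assumed: a VkCommandBuffer being recorded
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &barrier);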


I think one of the main issues is that Vulkan is really bad from a usability perspective. OpenGL is too, but there is little incentive to switch from one bad API that you know how to use to another bad API that you don't know how to use. On the other hand, I've spent time learning WebGPU in the past year and found it to be an excellent API that is an enormous quality-of-life improvement over OpenGL, so even though it has fewer features (due to the Web, not API design), it's still something I really want to switch to, unlike Vulkan.


I had a really good time writing a Vulkan renderer for a project I did. There's _A LOT_ of boilerplate, but there are helpful error messages every step of the way and the API makes sense because everything is explicit, unlike OGL.


It's not just the boilerplate, but also seemingly nonsensical things like sTypes.
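
For context, sType is the field on every Vulkan create-info struct that restates the struct's own type (it exists so pNext extension chains can be walked generically, but it reads as pure boilerplate). A minimal sketch:

    #include <vulkan/vulkan.h>

    VkInstance makeInstance() {  // illustrative helper, error handling omitted
        VkApplicationInfo appInfo = {};
        appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;  // struct names itself
        appInfo.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo createInfo = {};
        createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;  // ...and again
        createInfo.pApplicationInfo = &appInfo;

        VkInstance instance = VK_NULL_HANDLE;
        vkCreateInstance(&createInfo, nullptr, &instance);
        return instance;
    }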


Getting Vulkan to work is one thing. Getting Vulkan to work well is another.


There's nothing to be surprised about; Vulkan's creators themselves said that they do not want to replace OpenGL for these kinds of use cases.

Vulkan is, very deliberately, a very verbose low-level API that allows maximum optimization by game engine developers. It's not meant for my-first-3d-program usage and it shouldn't be sold as such, the same way that using ASM to write CLI tools isn't a sensible approach.


Do the following:

    <start a timer>

    vi vulkan_one_pixel.cpp

    <add the required code to open a window and draw *one* pixel>

    <save, compile, debug, rinse, repeat until a window is opened and a pixel is drawn in it>

    <stop timer>

    <read timer>

    wc -l vulkan_one_pixel.cpp

    du -sh vulkan_one_pixel.cpp

    <despair>


Vulkan has a very high initial cost when it comes to lines of code. You have to recreate a lot of stuff that you took for granted in OpenGL, such as a default framebuffer to draw to. Also, a lot of boilerplate code to create resources can be abstracted away into functions to reuse later. So while you might need a lot of lines to even draw a triangle, you won't need many more lines to draw a full 3D model.
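
As a sketch of that kind of reusable helper (Vulkan C API, error handling omitted; the function names are illustrative, not from any particular tutorial):

    #include <vulkan/vulkan.h>

    // Pick a memory type that satisfies the buffer's requirements.
    uint32_t findMemoryType(VkPhysicalDevice phys, uint32_t typeBits,
                            VkMemoryPropertyFlags props) {
        VkPhysicalDeviceMemoryProperties memProps;
        vkGetPhysicalDeviceMemoryProperties(phys, &memProps);
        for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
            if ((typeBits & (1u << i)) &&
                (memProps.memoryTypes[i].propertyFlags & props) == props)
                return i;
        return 0;  // a real implementation should treat this as an error
    }

    // Wrap the boilerplate of creating a buffer and backing it with memory.
    void createBuffer(VkPhysicalDevice phys, VkDevice device, VkDeviceSize size,
                      VkBufferUsageFlags usage, VkMemoryPropertyFlags props,
                      VkBuffer* buffer, VkDeviceMemory* memory) {
        VkBufferCreateInfo bufferInfo = {};
        bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
        bufferInfo.size = size;
        bufferInfo.usage = usage;
        bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
        vkCreateBuffer(device, &bufferInfo, nullptr, buffer);

        VkMemoryRequirements memReqs;
        vkGetBufferMemoryRequirements(device, *buffer, &memReqs);

        VkMemoryAllocateInfo allocInfo = {};
        allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        allocInfo.allocationSize = memReqs.size;
        allocInfo.memoryTypeIndex = findMemoryType(phys, memReqs.memoryTypeBits, props);
        vkAllocateMemory(device, &allocInfo, nullptr, memory);
        vkBindBufferMemory(device, *buffer, *memory, 0);
    }

Vertex buffers, uniform buffers and staging buffers all go through the same few lines, which is why the per-model cost stays low after the initial setup.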


For anyone interested in the Vulkan end of things, I recommend this: https://hoj-senna.github.io/ashen-aetna/

What is nice to see is that, thanks to Rust's linearity, cleaning up resources once they become unused is quite easy: you implement a custom drop function. The challenge remains that knowing what order to do it in requires consulting the Vulkan specification. Even for that, though, you can enable the Vulkan validation layers, which give a quite clear idea of what you are doing wrong.


This seems to focus on WebGL, and the alternative, WebGPU, is still only available in nightly builds.


Also for a beginner, WebGL is far more accessible. I'm just getting into it myself, and started with WebGPU. Bad idea lol. Went back to WebGL and even though it was still tough, I stuck with it and now have a basic grasp of what's going on. I get that there's a lot of hidden stuff under the hood, but now when I return to WebGPU at some point, I'll at least have context as to what it's trying to do.


Talking about OpenGL feels very regressive - the 'G' in GPU is hardly "graphics" any more, so presenting through the lens of rendering is backwards. These days, GPU is more like "General (data parallel) Processing Unit", and rendering is just a relatively uninteresting special case.

Think of it this way: wouldn't the world be a better place if Linux provided a way to create a GPU process without reference to displays?


OpenMP and OpenCL support GPUs as target devices -- this is the fastest way for AMD GPUs. Nvidia has CUDA, which is faster but only for Nvidia cards.
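
As a minimal sketch of the OpenMP side (the target constructs are OpenMP 4.5+; offload flags vary by compiler and require a toolchain built with GPU support):

    // Offload a simple vector addition to the GPU via OpenMP target offload.
    void vecAdd(const float* a, const float* b, float* c, int n) {
        #pragma omp target teams distribute parallel for \
            map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }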



