Learning Modern 3D Graphics Programming (2012) (paroj.github.io)
180 points by jasim on Dec 28, 2022 | 119 comments



The most popular modern graphics (GPU) APIs are DirectX 12 (Microsoft), Metal (Apple), and Vulkan (Khronos Group).

Targeting all three can be tough, hence WebGPU, which lets you write to a single API. Support on non-web runtimes has been appearing as well, so you're not necessarily limited to the browser (e.g. https://github.com/samdauwe/webgpu-native-examples).

Good read: https://surma.dev/things/webgpu/

Tutorial for learning Vulkan: https://www.vulkan.org/learn#vulkan-tutorials


To be honest though, if you are just starting out and don't have experience with things like homogeneous coordinates, shaders, and the basics of real-time rendering, just learn damn OpenGL. learnopengl.com is the de facto "bible" site, and you will learn most of the basic concepts there.

I really do not recommend DX11 / DX12 / Vulkan / WebGPU, simply because they don't have nearly as many good learning resources as OpenGL. Even DX (though more widely used in the gaming industry) doesn't have that much material compared to OpenGL. People often point at RasterTek's tutorials (https://www.rastertek.com/tutindex.html) for DX stuff, but the explanations are much sparser than LearnOpenGL's, and they often just dump lines of code at you without any high-level explanation. And I especially do not recommend Vulkan (or DX12), because it's a hell of an unfriendly API that exposes too much of the underlying GPU hardware for the programmer to manage (especially synchronization). Writing Vulkan naively will probably give you less performance than naive OpenGL / DX11. And using Vulkan optimally isn't something that's explained in any Vulkan book or tutorial you can find online/offline: they generally teach how to do simple stuff (like drawing a triangle, which is essentially what learnvulkan.com does), but do not really teach you the more important architectural decisions involved in writing a performant renderer (something I am currently struggling with, searching through blogs/papers/videos as much as possible).


This, OpenGL (ES) 3 is the one; VAO is the last feature!
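To make the "modern pipeline" concrete, here is a minimal sketch of the VAO/VBO pattern that learnopengl.com teaches. It assumes a GL 3.3 core context already exists (e.g. created with GLFW and loaded with glad; both are my assumptions, not prescribed by the comments above):

```cpp
// Minimal modern-GL vertex setup: one VAO recording the layout of one VBO.
// Assumes a valid OpenGL 3.3 core context (e.g. GLFW window + glad loader).
#include <glad/glad.h>

GLuint makeTriangleVAO() {
    const float verts[] = {
        -0.5f, -0.5f,   // three 2D positions in clip space
         0.5f, -0.5f,
         0.0f,  0.5f,
    };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);   // the VAO is the "last feature" above
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    return vao;
}

// Per frame, with a linked shader program `prog` (creation omitted):
//   glUseProgram(prog); glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, 3);
```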


This is super helpful! I'm a self-taught web dev who wants to dive into 3D graphics programming, but it's super difficult to find material that doesn't assume a lot of prerequisites I have no idea about. It's difficult to figure out what to even learn; it's a not-knowing-what-I-don't-know situation here.


Reality check: WebGPU is not shipping yet. Browser support varies - dev preview (Chrome), far from preview with big missing pieces (Firefox), tumbleweeds (Safari [1]). If it's anything like WebGL, it might take many years after shipping to become a good dev target (tooling, major browser bugs and regressions, missing support depending on end-user hardware, Safari lagging other browsers by 5 years as with WebGL 2, etc.).

WebGL 2, on the other hand, works today and is about the only way to do portable 3D (unless you use higher-level engines).

[1] Apparently a preview appeared in Safari Technology Preview releases in 2019, when Apple was trying to argue for a more Metal-style API design in WebGPU, but it has since disappeared. https://developer.apple.com/forums/thread/692979


Forgot to mention: the WebGPU spec itself is also unfinished.


samdauwe's work is fantastic. We're using WebGPU in Mach engine, and we also have quite a large set of examples of how to use WebGPU from Zig[0]. They're pretty much standalone (they only use our engine for windowing/input) and may be a bit easier for folks to run (zero-fuss installation, seamless cross-compilation to any OS, etc.):

[0] https://machengine.org/gpu/


Links to the GitHub examples are all busted: https://github.com/hexops/mach-examples/tree/main/cubemap

Also, for a cross-platform tool I expected that clicking the images would take me to live demos that work in a browser, not mp4 and png files.


Thanks - we're in the process of redoing the whole site so I didn't notice those were busted. Fixed now :)

Our browser support is still a work in progress (and browsers don't support WebGPU yet), so we primarily run natively on Windows/Linux/macOS right now. Once that improves we will replace them with actual browser demos.


Are you into graphics? Vulkan and DX12 are not what you start with to learn modern graphics programming. One thing you should realize: by "modern" they mean programming the modern graphics pipeline instead of an immediate-mode API. So "Modern OpenGL" is modern graphics programming, and it is actually sufficient for many cases (though not for AAA real-time graphics); learning DX11 or Modern OpenGL is the right place to start.


Worth mentioning that Vulkan is the only API that runs on all three major platforms: natively on Windows and Linux, and via MoltenVK on macOS.


Vulkan has a lot going for it these days, but IMO the problem with its future is that (a) it's too verbose to use without an abstraction on top, and (b) vendors won't stop pushing their native APIs as a competitive advantage (Microsoft is still pushing DirectX, which often gets features a year or two ahead of Vulkan; Apple is pushing Metal; Sony has GNM[X]; etc.)

I think this means the future of graphics APIs is software abstractions around the truly native APIs (DirectX, Metal, etc.) - things like WebGPU, sokol_gfx, SDL's new GPU abstraction, and Sebastian Aaltonen's stuff.


Shame though that by the time you've gotten your first cross-platform triangle on screen in Vulkan you'd already be finished covering Windows and macOS via D3D11 and Metal, and with fewer total lines of code ;)

(D3D11 is not a typo, unfortunately D3D12 isn't that much better than Vulkan in that regard)


I know you mean major desktop platforms, but in terms of market share for real time 3D applications, mobile and console dwarf Mac and Linux.


Yeah, but mobile means Linux and Apple, and console means Windows, BSD, and whatever the Switch is; Vulkan works across pretty much all of that. Actually, considering Zink exists, OpenGL may be the most universal...


Zero Vulkan or OpenGL on PlayStation and Xbox.

On the Switch, if you really want the full 3D capabilities, NVN is the answer.


Middleware runs on more platforms, with more friendly tooling.


Middleware is a better option, given console support, and it is how studios have always dealt with supporting multiple APIs.


Is “Middleware” the name of a graphics library? Or are you talking in general, saying that using an intermediate library is better than using the platform’s API?


An intermediate library like Ogre3D, or a more full-blown engine like Unreal/Unity/Open3D and so forth.

All of them use the platform's API to take advantage of all its features, while providing higher-level 3D programming concepts.


I understood this to mean things like Unity


That's a bit of an overgeneralization. Many studios do not, and many more have not always used middleware to deal with multiple APIs.


Sure, many studios also only do console exclusives, while other studios specialize in porting services, for example.


> Targeting all three can be tough, hence WebGPU which lets you write to a single API.

And which doesn't exist yet (except behind a flag; it will probably ship in Chrome 113). And the spec already has over 1500 issues (over 300 still open).

So... it's not "targeting three graphics APIs is tough, target just WebGPU". It's "here's a third incompatible graphics API for the web (after Canvas and WebGL), and people may expect you to support it in addition to non-web APIs".


You can already use WebGPU today if you're targeting something other than the browser. There are already game engines and graphics libraries being built through one of the WebGPU implementations, and you can start using them now after some setup: wgpu (Firefox's) for Rust, or Dawn (Chrome's) for C/C++/Zig.


> You can already use WebGPU today, if you're targeting something other than the browser.

Why would I?

> There's already game engines & graphics libraries being made thru one of the WebGPU implementations, and you can always start using it now after some setup

Again, what's the point? Just to say "oh look I can"?


I suggest learning and doing it the hard way(tm), without acceleration, starting from very primitive rendering techniques and moving gradually from basic to advanced, and from old to new (a minimal sketch of steps -1 through 1 follows the list):

Step -1. Math - Learn about quaternions, frustums, the camera matrix, and transformation matrices.

Step 0. Pixels - Draw pixels / scan lines to a window or display

Step 1. "Triangles" - Draw scan-line-aligned trapezoids, using a W-/Z-buffer or a back-to-front painter's algorithm. One or more 2D trapezoids are the result of projecting a 3D triangle.

Step 2. Constructive Solid Geometry - Generate triangles from additive and/or subtractive shapes, e.g. a cube minus an intersecting sphere.

Step 3. Lighting and shading - The old school Phong and Gouraud materials and lighting

Step 4. Texture mapping - The math to map a raster image onto triangles

Step 5. Simple MIP mapping - The math to map various resolutions of raster images onto triangles

Step 6. Isotropic (bilinear and trilinear) filtering

Step 7. Anisotropic filtering

Step 8. Bump mapping - Perturb the normals

Step 9. Oversampling (FSAA) - Render on an N-times-bigger virtual canvas, apply a reduction kernel (possibly a mean average), and copy to the real display area.

Step 10. Clipping and culling - Octrees, quadtrees, AABB trees, BVHs, BSPs, and k-d trees.

Step 11. Get fancy with animation
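Since the list above is a curriculum rather than code, here is a tiny C++ sketch of steps -1 through 1 under simplifying assumptions of my own: camera at the origin looking down +z, and a barycentric fill with a Z-buffer instead of the trapezoid scan-line decomposition described in step 1:

```cpp
// Software-rendering core: perspective-project three vertices, then fill
// the triangle into color/depth buffers with a Z-buffer test.
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr int W = 320, H = 240;

struct Vec3 { float x, y, z; };

// Step -1: perspective projection to pixel coordinates (camera at origin,
// looking down +z, 90-degree horizontal field of view).
Vec3 project(Vec3 p) {
    float px = (p.x / p.z) * (W / 2.0f) + W / 2.0f;
    float py = (-p.y / p.z) * (W / 2.0f) + H / 2.0f;
    return {px, py, p.z};
}

// Signed area of the parallelogram spanned by (b - a) and (p - a).
float edge(Vec3 a, Vec3 b, float x, float y) {
    return (x - a.x) * (b.y - a.y) - (y - a.y) * (b.x - a.x);
}

// Steps 0-1: rasterize one projected triangle. `depth` starts filled with
// a large value (e.g. 1e9f); smaller z wins, i.e. closer to the camera.
void rasterize(Vec3 a, Vec3 b, Vec3 c, std::vector<uint32_t>& color,
               std::vector<float>& depth, uint32_t rgba) {
    float area = edge(a, b, c.x, c.y);
    if (area == 0.0f) return;                          // degenerate triangle
    int x0 = std::max(0, (int)std::min({a.x, b.x, c.x}));
    int x1 = std::min(W - 1, (int)std::max({a.x, b.x, c.x}));
    int y0 = std::max(0, (int)std::min({a.y, b.y, c.y}));
    int y1 = std::min(H - 1, (int)std::max({a.y, b.y, c.y}));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            float w0 = edge(b, c, x + 0.5f, y + 0.5f) / area;
            float w1 = edge(c, a, x + 0.5f, y + 0.5f) / area;
            float w2 = 1.0f - w0 - w1;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // outside triangle
            float z = w0 * a.z + w1 * b.z + w2 * c.z;  // interpolated depth
            if (z < depth[y * W + x]) {                // Z-buffer test
                depth[y * W + x] = z;
                color[y * W + x] = rgba;
            }
        }
}
```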


I agree. Any materials you recommend to guide people in each of the steps? Maybe a software 3D renderer that is well written and documented?



Cool. Here's a list of more resources:

https://web.archive.org/web/20170705193028/http://graphics.c...

v-- Vector, matrix, point, color, material, and lighting classes for C++9x, easy to port to C++11 onwards --v

https://web.archive.org/web/20150305000047/http://graphics.i...


If you're willing to spend some money, there's the classic:

https://www.amazon.com/Computer-Graphics-Principles-Practice...

Don't be scared by the price; it tends to be available in many university libraries, and I guess in some 2nd-hand deals as well. So that could be a way to get hold of it.

The first edition used Fortran and Pascal, the 2nd edition C, and the latest C# and C++ for the samples.


a. Linear algebra: vector and matrix math

b. OpenGL resources here https://www.cs.ucdavis.edu/~ma/ECS175/syllabus.html

c. There are 2 main ways to "make an app" that draws to the screen: a WebAssembly canvas [0], or DirectX/OpenGL native with a library like SDL [1].

0. https://github.com/aminya/webassembly-canvas

1. https://www.libsdl.org


Abrash's Black Book is worth a read, even if it's not very modern.


Thank you so much for this! This is invaluable.


Here's my curated list of CG resources for self-study:

https://legends2k.github.io/note/cg_resources/

After learning Vulkan, OpenGL, or the next shiny API: it doesn't matter. Computer graphics and 3D math concepts don't change across APIs. The concepts you learn to make the GPU do things are the same; just the language or dialect differs. I started with OpenGL, but I now program in D3D, WebGL, etc. Learn the basics, avoid the religious wars.

In fact, when learning, something not as complex as Vulkan will help, since Vulkan has lots of noise (boilerplate) hiding the signal (CG concepts).

Cheers!


Given that OpenGL is basically deprecated at this point, this arguably is no longer "modern 3D graphics programming".

If you still want to learn OpenGL there's https://learnopengl.com


OpenGL is not close to deprecation. It may be decades before mainstream GPUs stop supporting it. The amount of non-abandoned software built on top of OpenGL that is not being migrated to Vulkan is mind-boggling.


It is close to deprecation. In fact, it's already deprecated on a device ~50% of Americans use (the iPhone). It's also deprecated on macOS.

Many hardware manufacturers are removing it from their drivers and just using 3rd-party libraries that emulate it on newer APIs.


> In fact, it's already deprecated on a device ~50% of Americans use (the iPhone). It's also deprecated on macOS

This is an Apple problem, not an OpenGL problem. Linux (and by extension Android, maybe Switch), Windows (and by extension Xbox), and BSD (and by extension PlayStation) all support OpenGL 4.6 and Vulkan. Apple has had issues supporting OpenGL since forever, and its OpenGL support (even before Vulkan) stagnated at 4.1, which was released over ten years ago.

OpenGL is not deprecated at all. OpenGL 4.6 is a very, very modern API supporting things that many games still don't have, like mesh shaders, SPIR-V, etc.


> Linux (and by extension Android, maybe Switch), Windows (and by extension Xbox), and BSD (and by extension PlayStation) all support OpenGL 4.6 and Vulkan.

Yeah... no.

On Windows you're better off using DirectX, since it's the only officially supported API. Xbox only supports DirectX.

PlayStation only has its own graphics API.

The Switch nominally has Vulkan support, but they have their own graphics API.


> it's already deprecated on a device ~50% of Americans use

America is one market among many. iOS usage is ~28% globally.

Even if Google suddenly announced their plans to deprecate OpenGL ES on Android, it would happen in the mid 2020s at the very earliest for bleeding edge devices, and late 2020s for whatever is considered "legacy" at that point.

A realistic scenario is that billions of people will still rely on some form of OpenGL at the end of the 2020s, and full deprecation will happen somewhere in the 2030s.


Same for DX9, but we're not going to see any updates to either spec ever again.

And on Apple platforms it is actually deprecated.


On Apple platforms it's still being used via stuff like MoltenVK.


Kind of; GL 4.6 is still much easier to deal with than Vulkan.

Also, it took a decade to get good-enough WebGL support, and WebGPU is still a year away from its 1.0 release on Chrome, let alone other browsers.


"Still much easier"? That is a truly massive understatement.


This is also like 10 years old. OpenGL did not die easily; the new replacements are at war, and I have yet to see a universal API/library replace OpenGL.


OpenGL was never universal on game consoles, despite urban myths of support.


So wait, MonoGame 2D cross-platform… isn't that using OpenGL?


Alongside DirectX, and LibGNMX.


Is OpenGL really deprecated? I know Apple wants to kill it, but isn’t there a ton of software that is written on top of OpenGL?


A bunch of industry lobbyists that run committees decided to kill it; the alternatives are worse for many applications, though.


The reports of my death are greatly exaggerated

https://github.com/KhronosGroup/GLSL/blob/master/extensions/...


Modern OpenGL is not deprecated on non-macOS platforms, and even then there are shims over Vulkan.


I've begun learning Vulkan recently, and honestly, the API is not the best. But I feel like it's the best long-term option. It's really more about the mental model: buffers, pipelines, commands, and queues all exist in one way or another in Vulkan, Metal, and DX12. OpenGL is the odd one out, as it's a global state machine. That said, you can build a Vulkan-like abstraction on top of OpenGL (I've tried).
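To make that mental model concrete, here is roughly what Vulkan's record-and-submit loop looks like. This is a hedged fragment, not a complete program: every handle passed in (command buffer, render pass, framebuffer, pipeline, queue) is assumed to have been created already, and that creation is where most of Vulkan's verbosity actually lives. Synchronization is deliberately omitted:

```cpp
// Vulkan's core loop in miniature: record commands, then submit to a queue.
#include <vulkan/vulkan.h>

void recordAndSubmit(VkCommandBuffer cmd, VkRenderPass renderPass,
                     VkFramebuffer framebuffer, VkPipeline pipeline,
                     VkExtent2D extent, VkQueue queue) {
    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    vkBeginCommandBuffer(cmd, &begin);

    VkClearValue clear{{{0.0f, 0.0f, 0.0f, 1.0f}}};
    VkRenderPassBeginInfo rp{VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO};
    rp.renderPass = renderPass;
    rp.framebuffer = framebuffer;
    rp.renderArea = {{0, 0}, extent};
    rp.clearValueCount = 1;
    rp.pClearValues = &clear;

    vkCmdBeginRenderPass(cmd, &rp, VK_SUBPASS_CONTENTS_INLINE);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);        // one hard-coded triangle, no buffers
    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);

    VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);  // fences/semaphores omitted
}
```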

The problem, however, is compatibility. We're in a very weird transitional era for graphics. Things aren't unified anymore. Modern OpenGL features that aren't difficult to implement are held back by drivers, since most of that work is done by drivers in the first place. It's not that the GPU can or can't do these things; it's that the vendors don't expose the functionality. Vulkan is not available on my Ivy Bridge ThinkPad (unless I run Linux and an open-source driver). OpenGL beyond 4.1 is not available on macOS, because Apple writes the drivers and chose to stop writing them. Vulkan is not available natively on macOS, except that with MoltenVK you can enable it, since at the end of the chain a GPU is just a GPU and it doesn't care what API you use. Microsoft, in their commitment to legacy compatibility, allows DX12 to run on any Windows machine made in the last decade; the catch is that it's Windows-only. So all the options have their ups and downs and pitfalls, whereas ten years ago it felt like OpenGL was 'the' answer.

Honestly, the state of graphics programming is a mess. To be compatible, you have to stick to some narrow versions of OpenGL that are uncomfortable and outdated, because computers or drivers are either too old or too new or both, or you have to write individual abstractions for every platform you target.

I hate the state of it, but I get why indie devs are all just using Unity now. When I was growing up it was SDL2+OpenGL, and then it was XNA, but it seems coding at all is too much now. And I don't really blame them; the problem domain has expanded significantly, and the knowledge required has expanded along with it. Libraries can't just be libraries that solve one task; they have to be frameworks or engines or entire graphical editors, because all this is just too much for one person to figure out anymore. I just wanna write some code that draws some shapes, man.


We do seem to be in an odd transitional period, as you say.

I got basic 3D graphics for my Common Lisp system up and running rather quickly using early OpenGL calls which I was familiar with. I first used GL from SGI in the mid-80's.

We are now looking to move to Vulkan across all three desktop platforms, and it's a major undertaking. Installation and dependencies especially are a concern of mine. Our system is open source, and having painless onboarding is important to me.


I'd really like a high-level overview of the Vulkan data model: which components have many-to-many relations, one-to-many relations, etc. What's the data flow between components, what data is shared vs. local, and when do you need to sync?

I don't even need explanations, there are plenty of those.


SDL + OpenGL was never a thing on game consoles, nor in the Apple world pre-OS X.

They only became unified in a subset of the computing world.


Old graphics dev here who has mentored new hires in basic rendering tech.

Learn JavaScript+WebGL to the point where you can render simple models with textures, shaders, and a basic camera controller.

Then, if you enjoy it and want to go further, do the same but in your favorite language + Vulkan.


I agree. You can start with https://threejs.org/ to get things rolling.


If someone wants to get into graphics API programming that’s probably a bit too high level, as lovely as it is.


In 2022 starting with raw 3D APIs is like telling someone they need to learn Assembly from day 1.


(Or WebGPU, I keep forgetting that one exists...)


No wonder; they are years away from doing a 1.0 MVP.


If all goes well, looks like it'll be shipping enabled by default in Chrome M113 (scheduled to go out at the beginning of May).

https://groups.google.com/a/chromium.org/g/blink-dev/c/VomzP...


Is it serious this time?

It was going to be early summer 2022.


Once I've got some neat stuff, how do I find someone to pay me money to work with these technologies?


Inspiring, thanks. I noticed you said WebGL. Is three.js too high-level?


I'd recommend taking a look at either https://vkguide.dev or my preferred https://vulkan-tutorial.com/

I remember learning "modern" OpenGL back in 2005


Thank you. Are these the learnopengl.com of Vulkan?


Did you have a time machine? OpenGL 3.3 came out in 2010.


OpenGL 2.0 came out in 2004 with a pipeline that still looks quite modern, using buffers and shader programs with GLSL, as opposed to 1.0-style immediate mode rendering. Both 3.0 and 4.0 were comparatively quite minor updates.
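For flavor, here is the kind of shader pair that already worked in the 2.0 era: a minimal sketch of my own using the old attribute/gl_FragColor syntax (roughly GLSL 1.10). The compile/link calls named in the trailing comment are the same ones modern GL still uses:

```cpp
// GLSL of the OpenGL 2.0 era, embedded as C strings: a programmable
// vertex and fragment stage, conceptually the same as today's pipeline.
const char* vsSrc =
    "attribute vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
const char* fsSrc =
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";
// Compiled with glCreateShader / glShaderSource / glCompileShader and
// linked with glCreateProgram / glAttachShader / glLinkProgram.
```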


Not so modern after a decade.

For OpenGL, there is DSA and SPIR-V support.

And then there are Metal, Vulkan, and DX12, alongside NVN and LibGNM/GNMX.


Every big player is running their own 3D graphics API these days; I wish they had all agreed on something like Vulkan.


Yes. WGPU is often being used as a common layer to abstract Vulkan, Apple's Metal, and Microsoft's DX12, which do pretty much the same thing.

Modern 3D graphics programming mostly abstracts to the following (the transform-matrix step is sketched in code after this list):

- Load standard glTF shaders into the GPU.

- Create mesh objects. Send to GPU.

- Create texture maps. Send to GPU.

- Create light objects. Send to GPU.

- Create 4x4 transform matrices. Send to GPU.

- Create objects referencing meshes, textures, and transforms. Send to GPU.

- Create camera transform matrix. Send to GPU.

- Tell GPU "Go!".
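As one concrete instance of the "create 4x4 transform matrices, send to GPU" step, here is a hand-rolled perspective matrix: a minimal sketch assuming GL-style clip space (z in [-1, 1]) and column-major storage. The upload call in the trailing comment is OpenGL purely for familiarity; with WebGPU/wgpu you would write the same 16 floats into a uniform buffer instead:

```cpp
// Column-major 4x4 perspective projection, GL-style clip space.
#include <cmath>

void perspective(float out[16], float fovY, float aspect, float zn, float zf) {
    float f = 1.0f / std::tan(fovY / 2.0f);
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[10] = (zf + zn) / (zn - zf);
    out[11] = -1.0f;                        // moves -z_eye into clip-space w
    out[14] = 2.0f * zf * zn / (zn - zf);
}

// OpenGL upload, for familiarity:
//   glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, out);
```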

Beyond that, it's mostly special effects, such as atmosphere, lens flare, etc.

Most of the headaches today involve managing the memory in the GPU, for which there are libraries.


Small nitpick: 'WGPU' is somewhat ambiguous because it could refer to either WebGPU itself, or wgpu[1] which is a library implementing a derivative of the WebGPU API that can be used by native code.

[1]: https://docs.rs/wgpu


I'm thinking mostly of the latter, which is useful for cross-platform work.


Khronos still has no idea how to make APIs that aren't full of extensions and low-level SDK tooling.

Big players have always used their own 3D APIs since the dawn of 3D graphics.


I don't think it's entirely Khronos's fault, as basically each vendor wants to embrace, extend, and extinguish it constantly.

CUDA is kind of winning it for Nvidia.


Sure it is; not even LunarG's SDK is at the same level as the proprietary API SDKs in frameworks, debugging, and IDE integration, beyond the basics.

CUDA is winning on compute exactly because of that as well.


You think the big players don’t (didn’t) use OpenGL, Vulkan, DX, etc?


Sony definitely not; Nintendo to some extent after GX; Apple only after adopting NeXTSTEP (QuickDraw 3D was their thing); Microsoft, alongside NVidia and AMD, designs first for DX, and then the features get ported into GL/Vulkan as extensions...


I'm not sure what you mean, but "big players" typically means AAA studios, and they don't create their own graphics drivers; their engines are built on top of the aforementioned systems.


When you look at how much nicer the programming experience is for something like Metal it’s hard to blame them.

I think I'd prefer to write 3 backends against Metal-level-complexity APIs than a single one targeting Vulkan.


Yes, so much this. It’s like someone looked at OpenGL and said, “Hey, how can we take all the hard, ugly, terrible parts of OpenGL and throw out all the nice, useful parts?” And that became Vulkan. I’ve written code for OpenGL since the 1.0 days. These days I do Metal. I wanted to check out what it would take to port something to Vulkan. I couldn’t make it through the basic Vulkan tutorial of putting a single triangle on the screen. It was too long, and required too many really low-level things. It felt almost like you needed to write your own memory allocator to use it properly. It was nuts.


Worst part is that Khronos doesn't seem to get it; as with OpenGL, they expect the community to come up with ways to fix it.

There are some steps in that direction with the LunarG SDK efforts, and NVidia was the one that came up with C++ bindings, but that is mostly it.

Nothing as nice out of the box as the proprietary APIs.

A guy from a studio I know described it best: one needs to be both a graphics developer and a device-driver expert to properly code against Vulkan.


For anyone wishing to get a taste of very modern graphics programming with Vulkan, I enjoyed Arseny Kapoulkine’s YouTube series [0]. Although he focuses somewhat on mesh shading, he does also build a more traditional pipeline alongside it. I found it much nicer to follow along with this than grind through the first X hours via written tutorials.

[0] https://youtube.com/playlist?list=PL0JVLUVCkk-l7CWCn3-cdftR0...


https://vkguide.dev/ might be a better alternative for 2022.


Related:

Learning Modern 3D Graphics Programming - https://news.ycombinator.com/item?id=7746192 - May 2014 (11 comments)

Learning Modern 3D Graphics Programming - https://news.ycombinator.com/item?id=3294840 - Nov 2011 (51 comments)

Learning Modern 3D Graphics Programming with OpenGL 3 - https://news.ycombinator.com/item?id=2528740 - May 2011 (16 comments)


All dead links?

The one from 2012 seems great, but in the meantime it is 10 years old. Is there a newer version on the same topic somewhere else?


Those 'related' lists are mostly so people can look at the old comment threads, which are not dead, and hopefully won't be for a long time.

archive.org has copies of most URLs that have been submitted to HN over the years, and one of these days (years) we should try to take advantage of that more formally. In the meantime you have to look them up manually or write code to do that.


(Shameless Plug)

If you're interested in learning about 2D/3D graphics and then applying it to build a 3D graphics engine (similar to something like THREE), check out my book: https://www.amazon.com/Real-Time-Graphics-WebGL-interactive-...


If you were designing a new application from scratch, meant to be cross-platform on Windows, macOS, and Linux, what 2D graphics library would you use? I have been operating under the assumption that Skia is that library but would like to hear thoughts.

What about 3D?

This is all assuming one wants something packaged and does not want to write OpenGL or Vulkan code.



"meant to be cross-platform on Windows, macOS, and Linux, what 2D graphics library would you use? "

It really depends what you want to do, but in general I would say: the HTML Canvas and the web in general. WebGL for 3D, or rather a framework like threejs or babylonjs.

Doing it not on the web, but still cross-platform makes only sense, if you need the maximum performance, or if you already have a working toolchain you are familiar with. But since you don't want to write OpenGL or Vulkan code and you are not familiar with it - I think you are up for great adventures by diving into Skia and co. There is a reason the web became sort of standard for cross platform despite its numerous flaws - the alternative is likely more pain. But maybe not, depending on your skill set. But since you did not mention Qt, I would assume you are new in the area in general? Then I really recommend the web.


I should have stated that I am looking for desktop only. The reasons are performance, window management (I want to be able to manage multiple windows in a single application, easily), and OS integration.

I'm not necessarily new to the area, but I have already yak-shaved down to the level of writing my own windowing framework based on GLFW, and then a graphics/GUI framework using Skia. Summary: GLFW (windowing) + Skia (2D graphics). I was curious if there is some other choice. Qt, GTK, Wx, etc. are all too complicated, tightly coupled to OOP, and generally not fully portable between OSes. I dislike their application frameworks, and I don't want to use their widgets. I've already taken all this on because of that, but you are correct that it is indeed pain. Bugs in Skia or GLFW or drivers or what-have-you are frustrating and blocking.

Let's say I have a desktop windowing framework (which I essentially do with GLFW, although I've threatened to throw my bindings for it out and write my own bindings against Win32, X11, Wayland, and Cocoa), and I'd like to draw to the window. Is there a better option than Skia? Is there a WebAssembly (wasm) or WebGPU (wgpu) or some HTML Canvas abstraction available?

Flutter is basically the only solution like this, but desktop is not its primary support target, it currently lacks multiple window support, and it requires Dart.

Skia is actually pretty nice and not hard to use, but I was curious if there is something simpler and more lightweight but still featureful.


>I am looking for desktop only.

>I have already yak shaved down to the level of writing my own windowing framework.

>Qt, GTK, Wx, etc. are all too complicated (...) I dislike their application frameworks, and I don't want to use their widgets.

>Bugs in Skia or GLFW or drivers or what-have-you are frustrating and blocking.

I'm subject to similar simplicity-minded and dependency-averse tendencies, and made this: https://github.com/jeffhain/jolikit/blob/master/README-BWD.m...

PS: I've been building a toolkit on top of that, which I use for a file explorer/editor, but it's not stable yet; there are still many little design points to think through more thoroughly.


Honestly, if it's a desktop application and you're not using heavy 3D graphics, why not use wgpu with Cairo? With some smart region tracking it should be plenty performant.


For 3D, use a game engine. Unity, Unreal, maybe Godot depending on your perf requirements.


Thanks. That seems to be the case. My project is currently on top of .NET, so Godot and Stride have been the two I've looked at, although Stride is the only one supporting .NET Core (i.e., .NET 5+); Godot and Unity still require Mono.


LibGDX - https://libgdx.com/

I used that a few years ago -- not sure how it's held up.


I just started another libGDX project a few months ago (2D only), in Kotlin; it still works as well as it ever did. I have had no issues running on Win/Mac/Linux, and could not find any reason not to use OpenGL.

I have a feeling GLSL shaders will never die and I'll be writing them when I'm 80.


SFML isn't perfect, but for 2D in the C++ space, that's what I have reached for.


Maybe check out Qt?


This vector tutorial really helped me: https://chortle.ccsu.edu/VectorLessons/index.html


Outdated. Learn Vulkan.


Are all big players supporting it? Is there a good document?

While this is all about graphics, I wonder how GPGPU will end up in the future. OpenCL did not get much love; CUDA still rules the market (leaving Intel's oneAPI, AMD's ROCm, the legendary OpenMP, etc. in the dust). Apple's Metal does both graphics and GPGPU, and Vulkan is also said to do both. Not sure about Microsoft's; does it even have a GPGPU API, considering it is the only player without its own hardware?


> I wonder how GPGPU will end up in the future.

Didn't the acronym GPGPU refer to the era when GPUs and APIs didn't provide general compute capability? I thought the abbreviation was mostly out of use already, since we have CUDA and compute shaders now. It seems Metal, Vulkan, DirectX, and OpenGL all use the term "compute" (Microsoft does have DirectCompute and compute shaders; they've been around for at least a decade).

Not sure if that answers the question - all the graphics APIs provide a mechanism for compute shaders - or if you are asking more about a direct competitor for CUDA or something else?


I personally find the experience of writing GPU compute code pretty nice on graphics APIs. The interface is pretty much the same: "dispatch a 1-3D set of 1-3D work group indices".

The main pain points vs. dedicated compute stuff like CUDA are the libraries and the boilerplate to manage memory and launch kernels.
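To make "dispatch a 1-3D set of work group indices" concrete, here is a minimal sketch in OpenGL 4.3 compute terms (one instance among the APIs named in this thread). The kernel doubles N floats in place; the host-side calls are shown as comments and assume `prog` and `ssbo` were already created, which is exactly the boilerplate being complained about:

```cpp
// A compute kernel in its common shape: a 1-D grid of 1-D work groups.
// GLSL source for an OpenGL 4.3 compute shader, doubling N floats in place.
const char* kernelSrc = R"(
    #version 430
    layout(local_size_x = 64) in;                  // work group of 64 threads
    layout(std430, binding = 0) buffer Data { float v[]; };
    void main() {
        uint i = gl_GlobalInvocationID.x;
        v[i] *= 2.0;
    }
)";

// Host side, assuming `prog` (compiled kernel) and `ssbo` (N floats) exist:
//   glUseProgram(prog);
//   glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
//   glDispatchCompute(N / 64, 1, 1);                  // the dispatch grid
//   glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);   // before reading back
```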


And how to make the kernel and memory-allocation code work with TensorFlow/PyTorch. As far as ML is concerned, GPGPU is really now just a few libraries made for TensorFlow and PyTorch to invoke, same as CUDA.


DirectX 12 has OK compute capabilities, but is increasingly being left in the dust by Vulkan. It doesn't have pointers or a memory model, but it does have device-scoped barriers (which Metal lacks), so you can do things like single-pass prefix sum (monoid scan) operations, which Metal and WebGPU can't. Also, the Nvidia "cooperative matrix" operations (which access the "tensor cores") exist in CUDA and Vulkan but not DirectX 12.

Programming GPU compute outside CUDA is still quite painful; the developer experience of tools is terrible and the ecosystem has a lot of catching up to do. I'm most bullish on WebGPU going forward, but there are definite growing pains.


Vulkan Compute hardly matters, and DirectX's future is mesh shaders.

Also all major graphics vendors tend to create their hardware designs in collaboration with Microsoft as part of DirectX, and eventually add them as extensions to Khronos APIs.

So it hardly matters what Khronos is doing with Vulkan.


> all major graphics vendors tend to create their hardware designs in collaboration with Microsoft as part of DirectX

That's pretty sickening; I hope the EU gives them a rap across the knuckles for that. Very anti-competitive.


It is up to Khronos to make it more interesting to do otherwise.

Hardware transform and lighting, Cg/HLSL, ray tracing, mesh shaders, and more recently DirectStorage are all examples of features that appeared first in DirectX, with AMD or NVidia hardware implementations, before showing up as OpenGL or Vulkan extensions.


Could you elaborate on how device-scoped barriers are beneficial? I'm not sure I follow where that would help in your example, but I'm definitely curious (and Google searches have become increasingly useless of late).


Sure! I have blogged about this[1], but I'll summarize here. Basically, a barrier helps you do a "message passing" pattern, where one workgroup prepares some data and then sets a flag, and another workgroup can read the flag and then read the data from the first workgroup. However, in Metal (and hence WebGPU) the barriers aren't powerful enough to guarantee you won't see stale data. Thus, to do prefix sum on Metal, you need at least two dispatches: one to aggregate reductions over partitions, then another to do the sum within a partition. That's less efficient, both because of the dispatch overhead and because you need to read the data at least twice. You also need a bunch of different versions of your shaders to handle different problem sizes, which is really annoying (in fact, Vello can't handle more than 64k path segments until I write the larger version). Vulkan, CUDA, and DirectX 12 (even DirectX 11) can all do this; it's one of the ways in which Metal is an inferior basis for doing GPU compute.

Btw, this is one of the reasons I feel a bit burned by MoltenVK, as it happily and silently translates correct SPIR-V into MSL that's lacking the correct barriers. In my experience, GPU translation layers are some of the leakiest abstractions around.

[1]: https://raphlinus.github.io/gpu/2021/11/17/prefix-sum-portab...
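For the shape of the portable workaround described above, here is a host-side sketch in OpenGL compute terms, just to show the two-dispatch structure; `reduceProg` and `scanProg` are hypothetical names, and the kernels' internals (the hard part) are left out:

```cpp
// Portable prefix sum without device-scoped barriers: two dispatches with a
// memory barrier between them, instead of one single-pass dispatch.
#include <glad/glad.h>

void prefixSumPortable(GLuint reduceProg, GLuint scanProg, GLuint numPartitions) {
    glUseProgram(reduceProg);               // pass 1: per-partition totals
    glDispatchCompute(numPartitions, 1, 1);
    // Visibility between work groups is only guaranteed between dispatches,
    // which is why the data ends up being read twice:
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
    glUseProgram(scanProg);                 // pass 2: scan within partitions,
    glDispatchCompute(numPartitions, 1, 1); // offset by the pass-1 totals
}
```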


Thanks for the great write up!


What really killed them (as in, everything that's not CUDA) is that there's no easy way to use them.

OpenCL is pretty much broken on Mesa, so really AMD is no better than Nvidia on Linux. And Nvidia is killing it with CUDA, where it works.


Vulkan is on Windows, Linux, Android, and macOS via MoltenVK. Compute (GPGPU) is a requirement for all Vulkan drivers; technically drivers do not have to support rasterization, but I don't know of any hardware that doesn't.

Outside of Apple, every other current maker of GPU hardware has native Vulkan drivers for at least some of their target products.


MoltenVK acts as middleware and cannot support everything that isn't available on Metal.

Outside of Apple there is zero support on Xbox and PlayStation.




