Hacker News
SDL gets an Epic MegaGrant to work on next-gen Vulkan API (patreon.com)
186 points by sph on Nov 12, 2021 | 35 comments



When I was beginning to learn game programming, I tried really hard to like SDL, but I just couldn't.

The official documentation [1] is just an API reference. There's no official tutorial or getting-started guide that gives enough context to a beginner. There's just a bunch of links to (mostly outdated) third-party tutorials and books, and some articles talking about specific parts of the library, but nothing official that serves as a foundation for a beginner.

The SDL_* subprojects ... I'm not sure if they're official SDL subprojects, third party projects that share the name, or something in between. The homepages (e.g. [2]) of these subprojects are totally different from the main project's, and their documentation is also just a listing of every function, without a general context.

Maybe for someone who is already a professional game developer, SDL is just perfect, but for a beginner (like me at the time), it's just obtuse.

[1] https://wiki.libsdl.org/FrontPage

[2] https://www.libsdl.org/projects/SDL_mixer/


I used to lurk in #SDL and help beginners. The common misconception is that SDL's scope is much larger, encompassing that of a game engine, when it really just provides an abstraction of platform-specific details.

It's not SDL's fault beginners have poorly defined/uninformed expectations.

But you're right about the documentation being kind of a mess. The SDL1.2 vs SDL2 differences can be annoying, and for a beginner, just figuring out which version the example/tutorial code they're looking at targets can be challenging.


> It's not SDL's fault beginners have poorly defined/uninformed expectations.

It might not be their fault but I'd still say they're probably in the best position to correct it.


There really isn't much to learn aside from the API. SDL isn't a game engine, or even close to one.

Once you know how to open a window, poll for events (maybe even create custom events and use the event system for messaging or signals), load a texture from a file and make sprites, and maybe basic AABB collision detection with rects, you know most of what SDL is useful for. Everything else is "code the rest of the f'ing owl" territory, which is outside SDL's purview.


I'm not sure what SDL can do to prevent this.

From what I saw, the primary cause was beginner game programming tutorials on the web, outside the SDL project, created by volunteers who were, judging by the code quality, often beginners themselves. The lazyfoo ones were especially popular back when I was involved.

These tutorials would go immediately to creating a main.c with an SDL dependency, before even discussing game engine architecture or frankly anything at all. So you have a total noob who likely doesn't know anything about C let alone game development, being faced with their first C listing and it's immediately mentioning SDL junk for creating a window and this stuff called an event loop.

Of course the newbie is lost; most of the time they can't even get it to compile. The tutorial is some random abandoned long-forgotten thing on the web, with zero support available. So where do they go? SDL of course! It's the only thing on their radar at this point, and while barely on life support, there's still something resembling a community in #SDL. More recently I think they shifted to Slack or something, so maybe it's improving.

SDL isn't in the business of teaching people to build game engines. Just because awful game programming tutorials on the web tend to start with SDL (it's the platform abstraction, so you initialize it first) doesn't mean SDL should take on that role, which seems to be what you're implying they're best positioned to do. There are already insufficient resources to maintain the API docs and fix bugs as-is. If you're interested in volunteering for the job, Ryan Gordon is the man to reach out to.


Interesting, I had the exact opposite experience. SDL + Lazy Foo's tutorials [1] gave me my start into programming.

[1] https://lazyfoo.net/tutorials/SDL/


I was about to link it. My start was graphing calculators, but once I moved to PCs, Lazy Foo's SDL and OpenGL tutorials got me started.

That being said, I think what throws a lot of people off is that SDL isn't really the best tool for someone focused on just writing a game.

To me it's a toolkit to make your engine cross-platform, whereas a lot of people are expecting an engine.


I've also tried to learn SDL and had trouble in the past. Recently, however, I've been working with OpenBSD, which also doesn't really have a tutorial, but I've found myself productive by virtue of good docs that I know are there. I think SDL is the same kind of thing: it takes a bit more effort to understand than something a tutorial can feed you, but once you understand it and how to read its documentation, you can get very proficient with how it and its sub-libraries work.


SDL is great. It's such an easy way to abstract over the often ugly and always platform-specific details of how to get a window open, get it fullscreen, and get input in (including game controllers) and graphics out. Sometimes I want a bare framebuffer, sometimes I want an OpenGL context. I've used it for dozens of things, mostly inconsequential (but fun).

I'm still partial to SDL-1.2. I think SDL-2.0 suffers from a lot of scope creep, such as multiple windows, menus, and trying to constitute a more general graphics library, while complicating things for people doing basic framebuffer stuff. I'm slowly coming around to thinking that for a lot of people it makes sense to have such a thing, though - particularly hardware-accelerated basic 2D graphics without dropping down to increasingly obsolescent OpenGL code. It's entirely laziness stopping me from converting my remaining SDL-1.2 code to SDL-2 at this point.


SDL deserves it. It's used by a lot of open source software that we take for granted.


Do I understand correctly? This basically means that instead of

    "Need more power? Use OpenGL."
we will be able to

    "Need more power? Write a shader."

?


That has been the same thing since the OpenGL core profile in GL 3.0 onwards: there are no pixels on the screen without shaders.


TL;DR: SDL will develop a cross-platform GPU API that is easier to use and Just Works. Vulkan is neither, because it takes ~1000 lines to draw a triangle on screen and is so low-level that differences between AMD and nVidia GPUs are exposed.

But I'm not sure this SDL effort is a good thing. SDL has always been behind on fixing its bug backlog for the hard things it does well, like supporting so many game controllers and audio backends, supporting iOS and Android, etc., and other efforts at an easier to use 3D API that better abstracts different vendor GPUs have been underway for some time:

- WebGPU

- Godot's shading language

I have always loved that SDL separates its libraries so you don't have to use what you don't need: if you don't need an audio mixer don't include SDL_mixer, etc. Hopefully those of us that don't need SDL 3D will still be able to skip building and including it, avoiding bringing in all the bugs, bloat, and complexity therein.


> is so low level that differences between AMD and nVidia GPUs are exposed.

I interpreted it quite differently; the author writes:

> there's just less knobs in general. Almost all the magic you want to do is done with shaders. Since these GPU APIs have a little programming language of their own, the GPU API itself doesn't have to be super powerful. It can be small, designed to exactly how one would control the hardware efficiently, and the programmer has tons of freedom, because they can get buck-wild in a shader with almost no limits.

Which to me implies the Vulkan API is so simple, and close to metal, that it doesn't differ between implementations.


> Which to me implies the Vulkan API is so simple, and close to metal, that it doesn't differ between implementations

As someone with 20 years of 3D programming experience, including paid work in DirectX and OpenGL ES 2.0, I attempted vulkantutorial.com but gave up because it is too hard. It really is 1000 lines of C/C++ to even get a flat-shaded triangle on screen.

Vulkan has validation layers to help with writing conformant portable code, but the validation layers are written by your GPU manufacturer, so you can get things that pass on an Intel GPU but fail on nVidia. If you've ever tried porting your "portable" ISO C++ code that works with GCC over to Visual C++ and dealt with thousands of lines of compiler errors, it's kind of the same thing.

Vulkan has shaders that precompile to "portable" bytecode, but I also noticed that loading these "precompiled" bytecode shaders was instant on one Intel GPU but took about 5 seconds on an nVidia GPU. The nVidia driver then cached the results (I know not where) so that subsequent runs were faster, but the cache can be invalidated by either a driver update or a shader update, so your game level loader will need an "ifShaderLoadingSlowerThanMolasses() {...}" code path to deal with your users experiencing slow load times after a video card driver update, etc.

Summary: in ages past it was possible for mere mortals to write visualizations and simple games directly in OpenGL/DirectX, but with Vulkan most of us will need a middleware layer on top of it to get anything done within a deadline. If you want an open source portable solution I instead recommend Godot. If you want to make a 3D game that makes millions of dollars with less than 3 years of development and can deploy to Nintendos and Playstations with a button click I recommend Unreal Engine or Unity.


The funny thing is (mentioned briefly in the article) SDL just released a geometry API so pushing geometry to the renderer is easy now. Dear ImGui already updated their SDL renderer for it.


... apparently I was mistaken; this hasn't been released yet. It will be out with 2.0.18 [0]

[0] https://github.com/libsdl-org/SDL/milestone/2


> the validation layers are written by your GPU manufacturer

What? I mean they could write extra ones, anyone could, but The Main Official Pack Of Validation Layers™ is maintained by Khronos:

https://github.com/KhronosGroup/Vulkan-ValidationLayers


Exactly: in the age of middleware, sticking with the broken designs of Khronos is a waste of time, except for the platforms where we cannot avoid it.

And WGSL seems to be going the way of Vulkan in what concerns shading languages, taking the fun out of playing around with WebGPU.


I'm not a 3D person or whatever, but is it possible to (asynchronously?) pre-compile shaders before you use them? Like, to run shader compilation behind the scenes while the user is playing the tutorial or whatever?


So there are two separate "pre-compile" steps with Vulkan: #1. Compiling shader source code to SPIR-V bytecode; #2. compiling SPIR-V bytecode to native GPU shader instructions. #2 is always done at runtime (though the vendor driver may silently use a cached previous run result), and yes, it is possible to do it async. But like you speculated, you have to figure out something for the user to do while this async compilation happens. In an open world 3D game this adds considerable complexity. If your hero is approaching the Ice Cave of Wonders you need to smoothly preload your fancy ice shaders before the cave mouth is in sight and this preload shouldn't make the current framerate drop. In simpler times past with simpler shaders (like OpenGL 2.1 / ES 2.0 GLSL) you could compile shaders from source in perhaps a single 1/60th second frame.

It's funny how 1980s Nintendo cartridges loaded instantly but now that we have hardware at least 10 million times faster we have long load times for games.


Couldn’t the #2 shaders be compiled once and shared between users? A game could compile them on first run and submit them to a server with a unique hash of the OS, driver, and shader version (and whatever else matters). Then other users could download the already-compiled shaders if there’s a network connection. If not, you simply compile them anyway.

If security is an issue (can a shader be dangerous?) you could wait until you have X copies of the same hash and verify that they all have compiled to the same code before adding it to the online cache.

Developers could even pre-compute such shaders for popular GPUs that they have access to.


Steam already does that automatically.


The Vulkan API does not change between Intel, AMD, NVidia, and MoltenVK (iOS/macOS) GPUs.

What the grandparent is saying though is "that differences between AMD and nVidia GPUs are exposed." Vulkan is so low-level that your application needs to implement workarounds for each GPU's quirks.

OpenGL was high level enough that if you didn't care too much about raw speed, the only place you got into the weeds of GPU quirks was writing your shaders. With Vulkan, the weeds of GPU quirks pop up everywhere.


It surely does, because the Vulkan API is a bare-bones loader for extension spaghetti, where the first thing an application is supposed to do is load the extensions it requires.


Is OpenGL really not a good choice for developing simple games anymore? I mean, as an API it is not going anywhere, right? Under the hood there will be a compatibility layer to Vulkan or whatever.


I wouldn't recommend OpenGL for new games. If you are serious about learning how 3D graphics really work, I instead recommend a web search for "raytracer in one weekend." Or learn how to write a simple software-based triangle rasterizer if you don't want to make a raytracer. Too many younger devs get sucked into trying to write their own engines for their games, when it is best to just write a simple "learning engine" and then move on to something prebuilt like Godot, Unreal, or Unity. If you were building a hotrod car, it is so much easier to just order a crate motor than to try to machine your own engine block, pistons, piston rings, valves, valve springs, etc. Would you drill your own engine oil out of the ground too, or just buy a few gallons off the shelf? Same deal with food. Lots of people nowadays move to the country thinking they will raise their own farm animals and escape industrial civilization and its discontents. Then they learn animals have expensive and time-consuming medical needs....


It seems they are going to duplicate the efforts from bgfx (https://github.com/bkaradzic/bgfx)


So?


More like sokol-gfx header only library.


or webgpu-native/wgpu :/


Tried SDL2 a couple of times but never really got into it.

Recently though, I’ve created a native application with wgpu and Rust, that I can run on my MacBook Pro M1 where it uses Metal, and on my Linux desktop where it uses Vulkan.

wgpu translates into Vulkan, Metal, Direct3D 12, Direct3D 11, or OpenGL ES depending on your platform. I like it so far.

I used the guide at https://sotrh.github.io/learn-wgpu/ as a starting point and built the thing I wanted to build.

My application is an LED simulator that uses wgpu and Rust to visualize LED strips, and uses FFI to call functions that I’ve written for Arduino in C++. With a thin wrapping layer that I also wrote in C++, the same code that runs on a physical Teensy 3.2 controlling physical LED strips can also be compiled, run, and visualized with my LED visualizer application on my desktop or laptop computer.

This allows me to iterate more quickly and to debug my code more easily, because I can do all of the development without flashing the firmware to a physical microcontroller and without communicating with a physical microcontroller and LED strips.

When I have done a bit of work in my simulator, I then flash it to the microcontroller, where the physical LED strips are controlled, to see it in the real world. But being able to work in this LED visualization simulator of mine is a great boon for the development process, IMO.


I agree wgpu is great, but wgpu isn't a substitute for SDL. The Rust project that replaces SDL is called winit:

https://github.com/rust-windowing/winit

One can use wgpu with winit (which is more common) or, if they prefer, wgpu with sdl2. Here is an example:

https://github.com/Rust-SDL2/rust-sdl2/tree/master/examples/...

edit: but the "next-gen Vulkan API" the article refers to, which SDL will be developing, would be a competitor to wgpu.


> The Rust project that replaces SDL is called winit [...] One can use wgpu with winit

Right, sorry, I should've specified that I am using winit and wgpu together, and that this is in comparison to my previous experience: using SDL2 alone (software rendering with the SDL_Surface struct and the SDL_BlitSurface function) on one occasion during a game jam, and another time using SDL2 with Glad (an OpenGL loading library) for an isometric terrain editor application that I tried to make. I think I also tried using SDL2 with GLEW at some point.

> edit: but, the "next-gen Vulkan API" the article refers to, that SDL will be developing, it would be a competitor to wgpu.

Yup, that's what prompted me to comment about wgpu, but I see how my comment ended up as incorrect because I didn't mention winit. Thanks for pointing it out.


Hope that includes better documentation. I've tried SDL2 a few times, but I always have trouble after getting a window to appear. API docs are great but only go so far.



