Hacker News
Building Outer Wonders for Linux (utopixel.games)
115 points by lukastyrychtr on May 22, 2021 | 78 comments


I highly recommend flatpak for this. It provides an isolated build and runtime system, so you don't have issues with glibc and such. And lots of common libraries come included in the runtime, including SDL2 I'm pretty sure. It works on any Linux distro (that has flatpak installed, but there's a package available for pretty much any distro that doesn't already preinstall it). And as a bonus, you get sandboxing if you feel like setting that up.

I also have a Rust game, and I packaged it in a flatpak here (https://github.com/JMS55/sandbox/tree/master/flatpak). The main thing you want to look at is the .json manifest. The flatpak-cargo-generator.py script from the flatpak GitHub org (flatpak-builder-tools) takes a Cargo.lock and outputs a generated sources file of third-party dependency download links, so you don't need arbitrary internet access to build (a requirement for distributing on Flathub), but you can ignore this if you're using itch.io.
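
If it helps, the build loop is roughly this (the app id and file names here are placeholders, not the actual ones from that repo):

    # Generate pinned download links for every crate in Cargo.lock
    python3 flatpak-cargo-generator.py Cargo.lock -o cargo-sources.json

    # Build and install the flatpak locally from the manifest, then run it
    flatpak-builder --user --install --force-clean build-dir com.example.MyGame.json
    flatpak run com.example.MyGame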


>works on any linux distro (that has flatpak installed

Given that they were wrangling about not requiring an external libSDL dependency, it's unlikely they'll want to require an external flatpak dependency.


I don't think it's a big deal, while libSDL is. If you dynamically linked to libSDL, you'd have to tell users "find your libSDL package (probably not called libSDL), install it, and then hope it's new enough / has no bugs / etc." All the usual issues with dynamically linking to libraries that aren't under your control.

Flatpak is "We use the flatpak format, if you don't already have it, go to https://flatpak.org/setup/ and press your distro icon". Much more user friendly, and the hope is that flatpak becomes the standard way to distribute linux apps in the future anyways. Fedora already preinstalls flatpak, hopefully ubuntu and debian will follow (although that's unlikely since ubuntu has snap).


I'll go ahead and be naive here, but hasn't Linux solved the dependency issue with all their packaging formats? With a CI setup, you can have a pipeline that tests your game, builds it on multiple platforms in multiple package formats (AppImage, flatpak, snap, deb, rpm, etc.) and checks if the game will start (on CI) with headless X11 (Xvfb).

Locally, you could use vagrant to startup multiple linux distros then build and startup the game for a visual test. It would then be easier to output a list of "tested on platform/distro X,Y,Z" as well as the deps for each of them.
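
For the headless startup check, something as small as this can work (xvfb-run is a real wrapper; the binary name and --smoke-test flag are placeholders for whatever quick-exit mode your game has):

    # Start a virtual X server, run the game's quick-exit mode, fail CI on a non-zero exit
    xvfb-run -a ./my_game --smoke-test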


Deb, rpm et al don't help here at all.

You can't use their dependencies or you'd need to provide builds for different versions of the same distro (because many many many libraries aren't that stable, and distros typically only provide one version of each).

And you probably don't want to run your own repository (and I'm not even sure apt can do repositories with a login, so you'd be making your game available to everyone with the link), so you can't use their update functionality either.

So then all you have left is a bad archive format - essentially a distro-specific .zip that's more annoying to build.

The other tools ship dependencies with the program, so they don't have this particular issue. AppImage and Flatpak are also cross-distro, while snap is tied to Canonical and awkward on other distros.


This is not really a problem for open source games with an active Debian/Fedora maintainer. If there are interested packagers and you give them source code to work with, I've noticed they will be happy to build and test your package and try to keep it updated. The point of using deb/rpm is to integrate with the distro. If you want to go directly to customers and you have no intention of tying your package to the distro release cycle and working with their package maintainers, then yes, it seems it would always be a bad fit.


For open source games, sure.

But this isn't an open source game, as far as I can see, and most games aren't.

Also I often find the fixed releases of most distros to be a bad fit for games because the stability argument matters less and less - there's little in the way of backward compatibility to keep, and hence no real reason not to upgrade.

(not that I'm much of a believer in fixed releases to begin with)


I guess that's why most games are a bad fit for trying to ship as part of an open source Linux distribution, where the requirement to get packages upstream is that they have to be open source and compatible with the distro's usage and redistribution policies.


The problem is installing a random deb, and its dependencies, can be quite tricky. Also, you need your users to know whether they need AppImage, flatpak, snap, etc. (I'm an Ubuntu user. I know it uses deb; I have no idea which of AppImage, flatpak or snap it uses).

Also, you have to keep them up to date as new distro versions are released, whereas many games work on a principle of "release, and done".

Making a release where you link as much as you can statically increases the chances it will work in the future.


I know this isn't the point of your comment but: AppImage works everywhere, including Ubuntu. It's a file called TheProgram.AppImage and runs like any other binary would. Snap works primarily on Ubuntu and is pre-installed. `sudo snap install [package]` works just like `sudo apt install [package]` does. Flatpak technically works on Ubuntu though it's not pre-installed: it's a `sudo apt install flatpak` away.

AppImage is a great option: it's the equivalent of statically linking everything. Users only need to have the binary and they can run it. Developers bundle everything into that binary.
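
For the user, that's literally just (the file name being whatever the developer shipped):

    chmod +x TheProgram.AppImage   # mark it executable once
    ./TheProgram.AppImage          # runs like any other binary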


To chime in: I'm super impressed by flatpak. There is a small learning curve as with any tool, but once you get through it it works very well.

Snap uses a compressed image format that makes startup horrid. And snaps don't seem to work well with hidpi resolutions when scaling is enabled (snap doesn't know about the scaling, so it draws everything super small).

I haven't used AppImage but between apt, snap, and flatpak, I really see flatpak as the winner for how games should be installed.


I played with AppImages a while ago, trying to create one from scratch with a Ruby interactive shell inside. It worked well until I tried it on an older Ubuntu, where it gave me glibc errors. To make it work I had to find a docker image with an old enough glibc to use as my "compiler", then copy everything over to the AppImage. I also tried compiling an older glibc from scratch inside the AppImage; that solved nothing, on top of needing to install all the binutils that I wanted to embed in the AppImage and making sure the PATH does not contain any host OS paths.

It was a nice learning experience, but far from the "easy to make, runs everywhere" marketing AppImages have.

Does using musl libc and statically linking everything actually work out in practice? I read somewhere (HN comment, don't have the reference) that it still might not be enough and your program might still need the OS libc.


Changing out glibc for musl can be done if you control the linker invocations on the object files. This would be extremely difficult (but not impossible) for those like myself that use closed-source third-party shared objects linked against glibc.


> To make it work I had to find a docker image with an old enough glibc to use as my "compiler",

it's not like it is esoteric knowledge though: https://docs.appimage.org/reference/best-practices.html#bina...


I really like AppImage and its one file = one application philosophy, but the unfortunate reality is that the distro landscape is so messed up that AppImage doesn't reliably work everywhere unless you take similar pains in building your application (old glibc, etc). Above the kernel, the Linux ecosystem just isn't designed to support the concept of binary application distribution, unless you want to maintain a separate package for every distribution you want to target and keep it up to date.


Package managers solve distros' own problems. For an outsider, it's still a massive pain.

Distros are usually tied to specific versions of libraries. Your package can't require a library that is too old or too new, because the distro won't have that version, or won't install it, because it'd conflict with other packages. The way around it is to prefer static linking, but then you're reducing the whole dependency and packaging system to just being a zip file with extra steps.

There are many package formats, and many distros with different versions. It's a PITA to set up a build farm for everything and keep it from bitrotting.

And even if you build two dozen packages, you need to help users download the right version, and tell them how to install it (my pet peeve in many distros is that when you double-click a deb/rpm, they won't offer to install it, but browse it like a folder or treat it as an unknown file).

All of this is so much more fuss than "Here's your Windows.exe, or Mac Universal.app".


> In the case of Outer Wonders, we weren't able to set up old Linux distributions, because the installation of such distributions depends on online repositories where installable dependencies are stored, and repositories containing the packages for old distributions are either removed or archived. This is why we fell back on Ubuntu 20.04.

Docker - we do exactly this for a non-game app at $WORK - building the app in some ancient CentOS (or similar) Docker image.
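
It boils down to something like this (the image tag and build command are placeholders for whatever old userland and build system you standardize on):

    # Build inside an old userland so the result links against an old glibc
    docker run --rm -v "$PWD":/src -w /src centos:7 \
        bash -c "yum install -y gcc make && make release"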


Even without Docker, the normal Debian mirrors all seem to carry oldoldstable, AKA Jessie at the moment. That was released in 2015. If you wanna go crazy, there's always https://snapshot.debian.org


(I misread the post initially and thought baq was talking about using Docker to run the game, not build it. I've now edited this comment to talk about using Docker to build the game.)

Yes, we do that at $dayjob too. We support multiple versions of CentOS, Debian and Ubuntu, and the best way to build for all of them is inside their respective Docker containers. We depend on libssl and that lets us use the platform's default libssl instead of forcing 1.0 on everything.


You don't need docker to run the game. You run the build process of the game in a container with old dependencies.


Shit, I can't read.


Are you not worried about Docker Hub clearing down those older images (isn't their new(ish) policy to remove any images that haven't been active for 6 months)? Or are you hosting those images on a private Docker repository?


Not touching Docker Hub with a ten-foot pole except for the first pull; everything is mirrored internally.


I'm actually super curious about - and really wish they had done a deeper dive on - how they narrowed down the glibc dependency to just the 3 math functions, and then how they found the singular places where those were used, especially in a Rust codebase...?

Also, as to patchelf, does it require an absolute path? If yes, are they patching post-download? I doubt they would somehow try to force Linux or the user to always put the binary in the same abs path, right?


glibc uses versioned symbols; the symbols are visible in the blog post's `objdump -T` output if you scroll to the right.

Then you can use `objdump -d` to disassemble and see the callers. The fact that the binary is built from Rust isn't relevant at this point.

patchelf is passed a special string, '$ORIGIN', which references the same directory as the binary, and won't require modifications for the target system.
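
Concretely, the steps described above look roughly like this (the binary name and the symbol are placeholders):

    # List the versioned glibc symbols the binary imports
    objdump -T ./game | grep GLIBC_

    # Disassemble and look for call sites of a suspicious symbol, e.g. pow
    objdump -d ./game | grep -n '<pow@'

    # Point the runtime loader at the binary's own directory for bundled .so files
    patchelf --set-rpath '$ORIGIN' ./game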

One disadvantage not mentioned in the blog post is that this facilitates something similar to the infamous DLL preloading attacks on Windows.


I think game engines should have something that warns the developer when they are using the wrong file paths. What I mean is, I've seen a few HTML-based games that fail on Linux because the developer relied on a case-insensitive file system.

So if you make a game engine, maybe you can try detecting this issue and warn the developer (and maybe offer to automatically fix it).


Really interesting write-up. I've had to consider some of the issues in the past for non-game applications, and what I usually resorted to was the second option, and semi-automated updates.

I don't think the first option is that much of a hurdle for Linux users. I guess there might be package conflicts that aren't trivial to resolve sometimes, but I'm not sure how often those pop up in the game dev scene.

I wonder if it would be viable for games to come with an AppImage (or one of the other two), and something like `game_assets.dat` for the game files (so the AppImage doesn't grow too large).

Also, slightly unrelated, but I think it would make sense for itch.io to have a Steam-like client for automating the game's life cycle on player machines. It would make it convenient for both devs and players. Maybe it could come with a set of tools for developers too, to help automate some of the packaging/publishing woes.


> I think it would make sense for itch.io to have a Steam-like client

They do! https://itch.io/app. It's even open source: https://github.com/itchio/itch.

> Maybe it could come with a set of tools for developers too, to help automate some of the packaging/publishing woes.

They do have those as well! https://itch.io/docs/butler/

Technically it's shipped with the client so it does come with a set of tools.


I'm curious about why they didn't statically link SDL2 in. It wasn't an option with SDL1.2 due to licensing, but SDL2 explicitly allows it.


We did this when I worked on factorio (on all three desktop platforms, not just linux), it works really well. Side bonus is you can make the inevitably required small tweaks inside the library easily because it's all integrated in the same build system as the rest of your game.


> you can make the inevitably required small tweaks inside the library

How do you handle Steam or the user updating SDL2 to a newer version, then (see the sibling comments about dynapi)? Does your game depend on those tricks to run?


We statically linked it, and didn't use the system/steam provided version at all. I guess if a user tried to use SDL_DYNAMIC_API, it would break in strange and unexpected ways. TBH I didn't know that option existed until just now. We never had any problems with our approach, afaik.


SDL2 will even allow the user to override the static library and substitute their dynamic copy at runtime, so even though you distribute it statically you don't lose out on the advantages of updates.

...why don't all libraries work that way?


You can override a bundled dynamic library too, so aside from getting a single file when you link statically, there isn't much practical use for it from the user's perspective; and from the library developer's perspective it is a PITA (SDL2 does a lot of manual symbol juggling).


This is not a good idea, because SDL can get fixes down the line to be more compatible with changes in desktops. I have a bunch of very old games on Linux using SDL 1.2, and whenever they bundle their own version it pretty much never works, but I can easily fix that by deleting it and letting the game use the system-provided one.

And TBH I'd rather games just assume SDL is available and have it as part of the system requirements (they already have other software requirements anyway), perhaps with a tiny launcher that tries to load the library dynamically and, if that fails, reports a more user-friendly error. Most gamers on Linux will have it installed anyway, since they'd either be launching from Steam (which bundles it as part of its own runtime) or also have other games installed.

(Sadly, SDL2 breaking ABI backward compatibility with SDL1 throws a wrench into that idea, which is why personally, instead of using SDL2, I just write my own code - e.g. SDL2 fullscreen mode doesn't work in many window managers.)


Like the sibling comment points out, you can easily load your own by setting an env var:

    SDL_DYNAMIC_API=/my/actual/libSDL-2.0.so.0 ./MyGameThatIsStaticallyLinkedToSDL2
https://github.com/libsdl-org/SDL/blob/main/docs/README-dyna...


Yes, I know about this (mentioned it in another reply), but it assumes there weren't any changes (like another dev made here), and really there isn't much of a reason to do that when you can just bundle the library with your binary or, even better, just rely on it being there and have it as part of your system requirements.


I agree, for their use case they should have linked to SDL2 as a static library. It was the best option instead of patching the binary with patchelf.

I think many developers don't know about static libraries. For some reason they think they have to use shared libraries. This is probably due to many tutorials saying something like: download the SDL2 library from there, it is provided as a shared library, here are the instructions to use it with Visual Studio.

The way I do it is with static libraries: I ship a single executable, and all the third-party libraries are linked in as static libraries. The only dynamic libraries the executable will use are the standard libraries that are part of the OS.
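
For SDL2 specifically, a minimal sketch of that kind of build looks like this (file names are placeholders; sdl2-config ships with SDL2 and its --static-libs flag pulls in the right system libraries):

    # Link SDL2 into the executable as a static library
    cc -o mygame main.c $(sdl2-config --cflags) $(sdl2-config --static-libs)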

I do this very easily with the help of lhelper:

https://github.com/franko/lhelper (I am the author)

which has recipes to build many libraries, including SDL2. It builds the library on your system using your compiler and your settings. By default it will build static libraries, so you don't have to bother distributing additional dynamic libraries.


Replacing the functions that required the older version of glibc might seem trivial, but I've seen countless devs beat their head against the wall trying to get conflicting NPM packages to work for what is essentially the same problem.

Criticism of the NPM / Node ecosystem aside (leftpad, etc) - this is a great reminder of a lesson I should be including when mentoring other devs: never hesitate to dig into a dependency, figure out how it works, and even rewrite it yourself if/when it makes sense. Removing the "magic" of libraries and dependencies is a really enlightening moment for new developers!


While the detail provided is plentiful, the article reads less like a technical guide and more like a story of how windows devs learned the caveats and tradeoffs inherent to dynamic linking/loading. I'm glad it was written and I'm glad to read it, but I'm a little surprised they chose dynamic over static linking. Yes, by default on most major linux distributions, you're at the mercy of the runtime loader. You either live with the complications associated with shared objects, or you change your linker invocations to avoid them.


> I'm a little surprised they chose dynamic over static linking.

Yep, it's worth noting that the vast majority of closed source games I've encountered on Linux are statically linked. For what it's worth (as a big proponent of dynamic linking and the maintainer model) I think that's totally fine. I wish there were no closed source applications to begin with, but if there have to be these programs, the process necessarily bypasses the model of maintainer quality control and patching. Games are supposed to be very long lived anyway, once they're past the initial period of getting regular bug fixes.


Depending on how you build SDL2, your application could be entirely statically linked at build time, but you could still be using libraries dynamically loaded at runtime for other necessary system components: https://github.com/libsdl-org/SDL/blob/c59d4dcd38c382a1e9b69...

This is a hard requirement if your application uses OpenGL or Vulkan: you can't really statically link those libraries at all, and you have to use dlopen/dlsym and check each individual function in order to support extensions.


They're not bundling libX11, etc in any case, only SDL. So I don't see why it matters that SDL in turn depends on other system components. Presumably it's looking for said components in a way that works on all the distros they tested. So for the specific libraries that they're dealing with in the blog post (libc and SDL), static linking would probably be easier than what they're doing.


It doesn't particularly matter, I'm just addressing a common misconception. Dynamic linking doesn't have to happen only at build time and it doesn't have to depend on any particular symbols being present. Even if you static link your libc and SDL, you're still likely going to be doing some dynamic linking at runtime, and in most cases this is probably what you want. You just don't have to handle it yourself because it's implemented within SDL -- it handles all the version detection logic at runtime for you and can decide how to handle things based on the existence of some symbols, so with that it's possible to support multiple versions of the same dynamic library, within the same program, and without a recompile.


Exactly, alternatives to dynamic linking can go either way: static linking if you make decisions earlier or dynamic loading (via libdl) if you make decisions later.


How on earth could they only install Ubuntu 20.04 because everything else was unavailable?

Last I checked, you could install Debian from a set of CDs which got regular point releases. With something of decent power like a Core 2 Quad you could go back to around 2007, I guess (though I fear at this point they might have problems with their API). How do they solve the problem of running on Windows XP?


> How do they solve the problem of running on Windows XP?

I suspect it's similar to how building a C++ project works. In Visual Studio you select the minimum version you want to support and you're done. This in general is a pain point of Linux - the toolchain is set up for building the system itself rather than producing binaries for general distribution and there is no easy way to set up a generic toolchain for a given basic version of libraries. This is great for letting you modify your system, but it's not ideal for software development.

Installing an old distro via docker or in a chroot works, but I wish I could have just a compiler plus libc rather than an entire Ubuntu. I think Redhat allows something like that, but it should be more widespread.


Well, there's no reason why containers can't just be a compiler + libc. No one stops you from removing the GNU coreutils from your image or building your own. However, I don't know if stripping these (a whopping 30MB for most Ubuntu versions) is worth the hassle of dealing with a shell-less development environment.


Well, you don't even need a container, just the libs and a little setup in some directory, and you have to learn a little about cross-compilation.


Will this game still be playable in 20 years?

In this case, wouldn’t bundling everything be more stable?

It’s notoriously difficult to install some very old software on modern Linux distributions, because installing old dependencies might just be extremely hard or no longer possible.

Also, what happens when the glibc has a new version with breaking changes? Then the game won’t run, even though it depends on an old version of that lib.


> Will this game still be playable in 20 years?

> In this case, wouldn’t bundling everything be more stable?

It might, or it might not. Relying on shared libraries is generally a good idea, as those get updated: for instance, you can get pulseaudio, wayland and more modern joystick API compatibility by updating SDL.

However, some libraries cease development. This will not happen for SDL or glibc, as those have too many users; there will always be a compat layer. Plus, their source is available to actually create such a layer. It's a good idea to ship those libraries, but I sure hope they rely on shared libraries for system calls or hardware access.

As a bonus, relying on shared objects makes it easier to run something on other CPU architectures using a native build of the libraries, at least in theory.


Yeah, unfortunately Linux Desktop doesn't really consider this a valid use case. It is still very much designed with the idea that everything will be open source and compiled and packaged by a third party middleman, or by the user themselves assuming they can get the appropriate development environment set up. It's one of the many reasons I think it is terrible as a Desktop.

In my opinion, the best way to ensure your game will run on the majority of Linux distributions, now and in the future, is to distribute a Windows binary and rely on WINE (or use winelib). Just make sure to actually test it.


glibc has so far managed to guarantee backwards compatibility going back to every version that has offered libc.so.6 (which goes back two decades!). They explicitly guarantee that backward compatibility: https://developers.redhat.com/blog/2019/08/01/how-the-gnu-c-...


Ok, is there a way to say "please compile my binary such that it will work with the oldest version of glibc"? I mean, other than to locate a copy of a distribution with a glibc that old to build on.


I remember you could decide which glibc version, for some specific symbol, you wanted to use.

For example, if you wanted to make your program use the "realpath" version from glibc 2.0, instead of the current one, you could do something like this:

    __asm__(".symver realpath,realpath@GLIBC_2.0");

So, even if the glibc version on the build system is more modern (let's say 2.3), your code will also work with older ones, such as 2.0.
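
A quick way to check which versions your binary actually ended up requiring (binary name is a placeholder):

    # Show every glibc symbol version the binary depends on
    objdump -T ./mygame | grep -o 'GLIBC_[0-9.]*' | sort -uV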


Is there a way to do that for every symbol? I'm asking because from what I've read from developers the only reliable way to do this is to compile on an old distribution with an old libc. So either there are problems with using this method or it is very poorly documented.


I haven't found a way, although I never had the need. I remember you could compile your software targeting a specific LSB release, and there were some tools to do that, but I don't know if it would be the same.


Not using new features will usually do that, but I wonder what's up with the math stuff pulling in symbol versions from 2018. There should be an explanation for that one...


Sure, but is there any convenient mechanism to ensure that you don't use new features? I guess what I'm asking is, why does glibc not make this easier to do?


I think the generally accepted way to do that would be a container image running a relatively old distribution. This is exactly what python packages do when they need to distribute binary packages on linux [0]. You are supposed to compile the package in a container (or VM) that runs CentOS 7 (or older if you want broader support), although now the baseline is moving gradually to Debian 9.

[0]: https://github.com/pypa/manylinux
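
As a sketch, assuming a plain `make` build (the mount path and build command are placeholders):

    # manylinux2014 is CentOS 7 based, so the result targets CentOS 7's old glibc
    docker run --rm -v "$PWD":/io -w /io quay.io/pypa/manylinux2014_x86_64 \
        bash -c "make release"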


So, in a roundabout way, the answer is a firm "no".


I think you can pass a linker flag to make it use the oldest version of each symbol, but I'm not sure what that flag is.


probably, because even today you can drop the program into a container and it will work - I guess a working X11 server is going to stay around for some time too! The only problem is 3D acceleration, but good luck running DX8 and older apps on Windows 10.


glibc does not (usually) do breaking changes. That's why the symbols are versioned as they describe.


Couldn’t they also have solved the glibc requirement by bundling it as well, just like the SDL library?


I made an attempt to compile glibc myself when I ran into the same issue playing Nebukadnezzar on Steam. But that is no easy task; I think it's directly tied into the rest of Linux. Targeting an older version of glibc is much easier.

(Nebukadnezzar recommended using Proton, which worked fine.)


In general this is not a good idea, as glibc is tightly coupled to the kernel it's compiled for.

https://stackoverflow.com/questions/57476533/why-is-statical...


glibc makes this extremely difficult, and has a tendency to break in unexpected ways when you try.

But... They could have used a different libc, like musl which is designed for static linking.
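
For a Rust game like this one, the fully static musl route is, at least in principle, just a target switch - though any C dependencies such as SDL2 would also need musl-compatible builds (a sketch, not tested against their codebase):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl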


Personally, if you're not going to release the source for your software I'd rather just run it in wine.

Linux OSes are not designed to run binaries downloaded from the web like this and that's intentional.


Usually, getting older Linux games to run is way harder than getting their windows equivalent to run through wine, simply because the libraries it's using have moved on since.


Setting rpath=$ORIGIN is such a common practice I'm surprised it's still treated as a special case. It should be a common, standard switch at this point.
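
You can also bake it in at link time instead of patching afterwards; with GNU ld it's a standard flag, and in a Rust build you can pass it through RUSTFLAGS (names below are examples):

    # C/C++: embed $ORIGIN as the rpath when linking
    cc -o mygame main.o -Wl,-rpath,'$ORIGIN' -lSDL2

    # Rust: pass the same flag to the linker via cargo/rustc
    RUSTFLAGS='-C link-arg=-Wl,-rpath,$ORIGIN' cargo build --release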


This is all very admirable, but if it were me, and I wanted to get a game in people's hands, and I was intent on supporting Linux, and the major engines' Linux support had issues (or I was writing a custom engine), I'd honestly target the web and maybe provide an Electron wrapper if it felt appropriate. Even if I was using a native language.

I'm sure this is an unpopular opinion in these circles, but unless you're doing game development for the programming side of things, I just can't see how it would be worth all the hoops you'd have to jump through when there's an open, pretty performant, 100% cross-platform runtime-and-graphics-toolkit, just sitting there waiting to be used.


I'd much rather do this (which isn't all that difficult in the end?) than chase performance issues in the web stack, which are a lot harder to track down and fix, because the browser is a massive layer of the rendering process you can't easily tweak. (I've seen that in customer projects where I'm sure that if they had done it natively, the issues would have been easy tweaks to their engine, but instead they spent weeks trying to understand how to get the browser to do the right thing.)


Not even that: there is no control over what the browser blacklists, no debuggers for WebGL (just SpectorJS, kind of), and no matter what, it is just OpenGL ES 3.0.


That just goes to show how little experience you have with WebGL.

Native is guaranteed to work minus driver bugs, but it works nonetheless.

WebGL is dependent on the whims of the browser version and the set of blacklisted platforms, GPGPUs and drivers.

Then no matter how good the graphics card is, it will never go beyond OpenGL ES 3.0 capabilities minus a set of "dangerous" features.

It is no wonder that indie development moved away from Web-based games to mobile platforms, after the Web community managed to kill Flash.


The Web is pretty powerful these days with WebGL and similar technologies, but it's still orders of magnitude slower than a similar app written in a native language with native bindings to OpenGL or Vulkan. Sure, this ain't DOOM, but tight low latency loops are common in any type of game, and JS+WebGL is not the best technology for it except for tech demos or very basic games.


As long as one feels like using OpenGL ES 3.0 subset.

For reference, OpenGL ES 3.0 was released in 2012.

In what concerns WebGPU, the first draft was just released, and it will be basically a kind of MVP when it gets finalized.

Then expect it to take as long as WebAssembly is taking to move beyond MVP 1.0.


If you're going to do that you may as well just compile a Windows binary and rely on Wine. Shared overhead and you can use any language you want.



