Developing Games on Linux: An Interview with Little Red Dog Games (system76.com)
368 points by nixcraft 3 months ago | 238 comments



I published my first Steam game on Linux as well as Windows and MacOS, but I don't think I'll do it again because for a single-person developer (as indie as it gets) the time spent supporting Linux doesn't pay off. Within days of publishing my game I had support request emails that said "so I'm on this specific Red Hat version, with this oddball graphics driver and three monitors and full-screen doesn't work with your game properly". As I only officially supported Ubuntu I couldn't really help each exotic (to me) setup that came into my email inbox, and there were more than a few. Which I still think is a shame.

But that was five years ago. I'm pretty sure Photon supports my Windows builds on Linux better than I was ever able to do with the native executables, and at least there's that.


I've always wondered how feasible it would be for an indie developer (or maybe even a big studio) to just put out a Linux build of their game and say "we're not going to support this outside of these very specific constraints - or even at all - we'll fix issues with our Linux build if we can reproduce those issues in the environment we developed the game in... but that's it."

Then the Linux community gets another game. Does it work on K.I.S.S Linux running sowm as a window manager and an entire custom userspace? Probably not. Does it work using the latest Ubuntu version? Probably.

But the notion that developers have to support every possible Linux configuration out there just seems toxic to the Linux game development effort as a whole.


Personally I'd just respond to stuff like that with "Sorry, but that's a very obscure combination, and as a single indie dev I unfortunately don't have the bandwidth to support that, I can send you a refund if you want".

As long as you're nice about it and don't turn it into a bland cookie-cutter "corporate" response most people will understand.

As someone who just runs Linux, and occasionally runs some games on it, I'm always a bit annoyed when people report these kinds of very specific bugs with old/weird drivers/distros that aren't in the supported platforms list. It turns developers off for completely understandable reasons. It's a shame, because for most people it does usually work.

I run Void Linux, it almost always works, but I'd never report these kinds of bugs without testing Ubuntu (or whatever is supported) first.


I suppose I can only speak for myself but I don't see that as an attractive option, and what you're describing is mostly what I did do for the game release I mentioned.

If I sell on Windows (I do) and Mac (I do) then I have to support a certain range of OS versions and ongoing OS releases - even if that means (for example) I have to figure out how to 'notarise' a Mac executable so that a user doesn't have a big scary Security Warning pop-up. Not ideal, but fine. The challenge with Linux is that I would have to manage expectations - to make it clear that when I 'support Linux' it looks different from the support for Windows or Mac. I do genuinely think that 99% of Linux people get this, it's just the 1% that's maybe less forgiving of different standards.

For me, just personally and selfishly, passing the buck to Photon or Wine is an easier sell for my business.


You're looking for the Steam Runtime, not Ubuntu, for future projects - it bundles the dependencies into a container, much like Docker, so that it'll work no matter what distribution a user is running.


I wonder - what's wrong with supporting just the Steam runtime and that's it? Why defer to Photon/Wine at all?


Do you mean just support SteamOS? The biggest problem is, as a few other commenters have mentioned, there's no further break-down into categories other than "SteamOS + Linux" (and the url is .../linux). So you can't tell the Steam Store "I only support distros x and y, not z". I can't hide it from those I am not actually supporting; Steam will advertise it to them regardless.


No, Steam runtime: https://github.com/ValveSoftware/steam-runtime

When your game runs under the Steam runtime, the real distribution is (almost) irrelevant - everything in your address space is supplied by the runtime; the things you get from the host system are the kernel/kernel modules and the services you talk to via IPC (i.e. X11/Wayland, Pulseaudio).

It solves the problem of which version of which library is installed (if at all - maybe the user removed it as "bloat") on the host system. You get a known set of binaries that you can test against / a coherent SDK target, like with Windows or Mac.


Doesn't "X11/Wayland, Pulseaudio" include almost all the surface where my game's bugs will arise? It certainly includes the full-screen bug described by the GGGGP.


That bug would be there with Wine/Proton too. They would run the same display server.


I believe that GOG.com doesn't accept games that rely on the steam runtime; I know some games which have Linux ports aren't available on GOG.com because of this.

Whether this matters is up to the developer. But it's a potential downside.


Yes, this is a downside.

GOG is quite content with just Ubuntu being supported; they are not that different.

Not sure whether that is the only reason why some games are not on Gog though; often Mac ports are missing too. It seems more like missing rights for the ports than technical reasons.


> Not sure whether that is the only reason why some games are not on Gog though; often Mac ports are missing too. It seems more like missing rights for the ports than technical reasons.

I don't know; probably not. But this was the response I got back from the devs of "Expeditions: Conquistador" when I asked if they could release the Linux version on GOG (when I bought it originally I still had a Windows machine).


But Steam only really supports the latest stable Ubuntu. It works on most distros because package maintainers put a bit of work into making it work, but it isn't officially supported on them.


That doesn't stop people from making support requests, though, which is the entire problem OP is talking about.


TBF isn't this going to happen no matter what? As a linux user I have absolutely no problem with the OP saying "we don't support that distro, sorry" and closing the ticket.


I don't think so. If Linux isn't natively supported, even if it runs fine via Proton, you're much less likely to receive bug reports directly, because there's less of an expectation that things work without issues. And closing tickets doesn't help with negative reviews—in fact, some users may use that as a reason to give you a "thumbs down" on Steam.


Maybe, but he's talking about having to deal with the requests. Since Steam only supports Ubuntu LTS you can push the blame on them and say something like, "Sorry, we develop for Linux via Steam and they only support Ubuntu x.04, feel free to open a ticket with them if you have further trouble."


Which is the sane thing to do.

Valve should just partner with Canonical and release Ubuntu support.

Let the other distros figure out how to get it working there.


Please not Canonical. OG Steam OS was Debian, which was a wiser choice.


Ubuntu is the most common, most widely used Linux based OS. It's the "Windows" of Linux, for good or ill.

Most hardware and software vendors primarily target it.

It might not be hip, but it's what everyone knows.


From the company that leaked data to Amazon through ads in your search? Yeah, no, pass.


Valve could probably buy Canonical outright if it wanted.


Could you say "supports Ubuntu" instead?


Not only did I say that but I also gave the specific Ubuntu version as well!


That's too bad :( As a Linux user, I love it when game studios support Linux and make a point to buy their games more often.


Steam for example has a "SteamOS + Linux" filter so exposing your game as Linux compatible means exposing it to all Linux users.


Almost everything that works on Steam works everywhere. Letting the dev specify a supported distro would be a huge misfeature, not unlike websites that used to check user agents and refuse to work outside of Internet Explorer.

People would start pretending to be Ubuntu to install games.


Yes, it seems better to whitelist specific, explicit Linux configurations (version, window manager, sound setup, etc.). Like: we officially support this; if you are using something else, use the forums to discuss with other users, but we will not investigate your issue.


The emails are likely bug reports?


This is unfairly downvoted IMO, because I think you're actually hitting on a very important point. Linux users tend to be enthusiasts, people who love software, and also primarily interact with a community where the "developer/user" is the most important figure, the one whom everything is designed around.

In the Linux community, at least unless a project has a toxic developer (there are a few), bug reports are Always Good. They're how we make the software WE use better on the systems that WE use it on. Even if a report isn't fully actionable (e.g. it's a problem with graphics drivers), the report is often helpful because the bug tracker is probably public and we can try to find workarounds, or at least flag the issue for others.

For closed source commercial software, especially cases where a tiny number of developers are working on the code, bug reports are Always Bad. They represent more work, work that you don't want to have to do, because at the end of the day these are people who already bought the game. You've gotten as much out of them as you're going to get out of them. If they're more trouble than they're worth (someone else in this thread claimed 90% of bug reports out of 1% of purchases), then it's obvious you should just ignore them or not port your game to their platform at all. You'd think this attitude would be different for issues that affect a lot of people: a good bug report can help you fix widespread problems that are hurting your players, but actually even this is rare. See the story of the guy who used reverse engineering to fix a bug causing 6-minute startup times that affected at least thousands of people, while the developer ignored the problem for years: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...

So I think you're right, these are mostly people enthusiastic about a piece of computer software instinctively trying to collaboratively improve it for everyone. But because development is so limited (there's only one person reading bug reports and working on the code), those reports are experienced as frustrating rather than helpful. Worse still, because the software is commercial there may be an unspoken feeling that support is owed for the software because the user paid for it.


> They represent more work, work that you don't want to have to do, because at the end of the day these are people who already bought the game. You've gotten as much out of them as you're going to get out of them.

This is such a bad attitude. The game is supposed to work correctly without any bugs. People who paid money for the game deserve continued support. It doesn't matter how much time and money the developers have to spend, that's their problem.

If the software is defective, consumers should be entitled to a refund. That ought to motivate companies not to release shoddy work.


> The game is supposed to work correctly without any bugs.

This is not real-world software engineering :)

Pretty much any software has bugs; maybe surprisingly to non-programmers, games are especially complex (first and foremost, architecturally).

In the real world, one can realistically talk about, let's say, an acceptable threshold of bugs.

> People who paid money for the game deserve continued support

And this is not real-world (game) business. Whether one likes it or not, there is a per-unit profit, and the corresponding value in terms of support is very limited.

An ideal solution to this is open sourcing games after a certain time (Id Software used to do it), but this is not realistic. I wish it, though!

> That ought to motivate companies not to release shoddy work

One can't really force a company not to do shoddy work. The gaming market is a radically free one, unlike constrained markets such as internet providers. Customers are actually entitled to have their money refunded, at least on Steam. Gaming journalism has been including bugginess in game evaluations for a while, so buyers can decide in an informed fashion.


Which is why developers don't release on Linux: they would have to test for so many strange driver setups to ensure things work correctly. The bug reports you get from Linux are not "this gameplay is bad", but stuff like "my mouse cursor isn't displayed correctly here" - something that works fine on Windows but their setup somehow screws up. Trying to get graphics to work reliably across all Linux variants is a lot of work.


While I agree with the spirit of what you're saying (as a linux user myself) the problem is that we have to recognize that developers have limited bandwidth. Windows is also generating more revenue for them, so it is going to take priority.

Though for an indie game it probably isn't crazy to open-source the code and then put those users to work for you. That can really help reduce the burden. But of course it opens you up to people stealing your software (which, let's also be real, happens anyway).


No, I agree totally with that. I think that the situation that exists, while unfortunate, is pretty understandable from all sides.


It's Proton, not Photon.


I recently did exactly this for my game (Industry Idle) on Steam. I added a Linux build but pinned a post that says "Linux and Mac support are considered experimental and are supported on a best effort basis" (https://steamcommunity.com/app/1574000/discussions/0/3122659...)

Most people are very helpful and quite understanding that as a sole indie developer, it would be hard to support all the configurations. But occasionally I get angry emails and negative reviews about the game not running on Linux.

Given the sales (Linux is 1% of the total sales, Mac is 3%), I would say for an indie developer, it makes more sense to put Linux support on a low priority. It is unfortunate for Linux gaming community but it is what it is.

Also, even though Proton has come a long way and has become relatively stable, occasionally there are some strange issues here and there (like Steam Cloud sync failing, etc). But overall the effort is much lower compared to maintaining a separate Linux build.


This is partly the fault of Steam. They simply have a single "supported" boolean. It would be nice if you could provide a warning or "partial support" label so that people had these expectations when buying the game.

Right now the flow for the user is 1. See store page 2. Buy 3. Play 4. Hit bug.

This is the moment when they find out that they bought a game that was not in fact supported. That is super frustrating (and possibly legally requires a fix or a refund). If there was a 1.5 step of "This game offers no support for Linux" or "This game offers no support for any distribution except Ubuntu 21.04" then it is much more acceptable, because I accept that detail before purchasing.


Recently stumbled over a game that did that a bit differently. On the Steam page only Windows support is listed, but it actually downloads a Linux version. That version worked just fine for me, but they say they can't support it, which is why it's not listed. So there is more than a single "supported" boolean (and games can list the required Linux version in the requirements section, I think).

https://steamcommunity.com/app/378720/discussions/0/49012573... is their explanation, https://store.steampowered.com/app/378720/Thea_The_Awakening... the store page. Great game btw.


I don't disagree that this would be beneficial, but I also don't think it would do anything to prevent the additional support burden and negative reviews in aggregate. If you need proof of this, see the amount of negative reviews on some Early Access games that are very up front about the lack of polish in their current state.


The most obvious solution imo is to offer a free trial mode. If it fails during the free trial then don't buy it.


Steam is quite generous about refunding games, either because you purchased them accidentally or they didn't run correctly or any other reason, as long as you don't have more than a few hours in the game. I think that's a much better approach than having to implement what's effectively a DRM system in software, or a completely separate trial binary containing only part of the game.


> as long as you don't have more than a few hours in the game.

Last I checked the limit was the minimum of "2 hours playtime" and "2 weeks after purchase"

> Steam is quite generous about refunding games, either because you purchased them accidentally or they didn't run correctly or any other reason,

Again, last I read, Steam is quite generous but will probably flag your account if certain patterns emerge (probably through some ML-alchemy).


> This is partly the fault of Steam. They simply have a single "supported" boolean. It would be nice if you could provide a warning or "partial support" label so that people had these expectations when buying the game.

You can have Linux builds available via Steam without listing Linux support on the store.


That is interesting, but too far the other way. I wouldn't even think to look around to see if they have an unsupported Linux version available. But maybe I'm just too picky.


From my point of view, one person telling all their friends that your game sucks can completely offset the nearly zero sales of native linux games. On the other hand most linux gamers know how to run games under wine, so I really don't see any advantage for a developer to make a native linux version.


> On the other hand most linux gamers know how to run games under wine, so I really don't see any advantage for a developer to make a native linux version.

Maybe I'm in the minority, but I've never bought a game because it could theoretically run under Wine, while I've bought quite a few games that run on Linux. Very rarely do I buy a game that does not natively run on Linux.

Wine seems like kind of an ugly hack even in the best case scenario. You don't know what performance is going to be like with your hardware, and you usually don't have anyone who's tested the program on your hardware to make sure it runs correctly and doesn't crash, and obviously games are extremely sensitive to these kinds of hardware dependent things. Installing Wine can be gross too, you have to install a bunch of 32 bit libraries on an otherwise clean 64 bit system.

I'd say the appeal of making a native Linux game is that it gets you access to the market of people who will buy a native Linux game. It's true that you already had the market of people who are technically running Linux and playing Windows games via Wine, but presumably there are many people who aren't going to go to that much trouble. (Obviously, there are many cases where supporting Linux isn't financially viable anyway, Wine or native.)


I have more than once bought a native linux game, gave up getting it to work, and run the windows version under wine.

At this point I just buy windows games, and if they don't work under wine, I run them in a VM.


> Does it work using the latest Ubuntu version? Probably.

KDE doesn't even work on the latest Ubuntu version - on my hardware, at least.

I have an Ubuntu system with an Intel graphics card, and my system won't boot into KDE unless I remove the old Intel driver that it defaults to, so that it then tries the new Intel driver which actually works. Gnome, Enlightenment, and whatever Ubuntu defaults to, they all work fine regardless, but KDE doesn't.

Note that this is after removing the 'Intel' driver for xorg, which had tons of screen tearing issues, in favor of the apparently "correct" modesetting driver, which is the preferred option for newish Intel cards. Except that at some point I was installing something else which explicitly depended on the Intel driver package...

And then we all ended up working from home and now I don't have to deal with any of that crap. Now I just use xorgxrdp from my Windows machine and it just works.

I can't imagine the gigantic hassle that would be trying to game on Linux in that kind of environment, where the "supported" and "default" options are the wrong choice and you have to manually uninstall things if you want stuff to work properly. No thanks. I love Linux (way, WAY more than Windows), but I'm not going to waste time and energy trying to play games on it.


It is absurdly easy to game on Linux now that Steam has proton.

I do it literally every single day.


It is possible to support a wide range of Linux distributions if one takes proper care, such as:

- Use CMake and never GNU autotools or GNU Make, as CMake has the best IDE support among C++ build systems.

- Statically link as much as possible, avoiding reliance on the package manager for getting dependencies.

- Set RPATH as a path relative to the executable in the CMake build system for dependencies that cannot be statically linked.

- Locate resource files using paths relative to the executable instead of relying on fixed paths such as /usr/local, /usr/share, /etc/, ...

- Distribute the executable and the shared libraries in a zip or tar.gz archive with a .desktop file containing an icon, so that the user can launch the application just by clicking on it.

Most MacOSX applications are already distributed this way, in the app-bundle format: a directory ending with the suffix .app, containing plist XML metadata, shared libraries and data. App bundles can be installed just by dragging and dropping, without worrying about dependency hell, needing root permission and so on. Linux fragmentation could decrease if Linux distros adopted a MacOSX-like standardized app-bundle format.
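For the "locate resource files relative to the executable" point, here is a minimal C sketch using /proc/self/exe (my own illustration, not from the comment above; the "assets" subdirectory is a hypothetical layout):

    /* Resolve a resource directory relative to the running executable. */
    #include <stdio.h>
    #include <unistd.h>
    #include <libgen.h>
    #include <limits.h>

    static int resource_dir(char *out, size_t outlen) {
        char exe[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
        if (n < 0) return -1;
        exe[n] = '\0';                  /* readlink does not NUL-terminate */
        snprintf(out, outlen, "%s/assets", dirname(exe));
        return 0;
    }

    int main(void) {
        char dir[PATH_MAX];
        if (resource_dir(dir, sizeof(dir)) == 0)
            printf("loading resources from %s\n", dir);
        return 0;
    }

This pairs naturally with setting RPATH to $ORIGIN (or $ORIGIN/lib) so bundled shared libraries are found relative to the executable in the same way.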

The last point of failure is glibc (the GNU C library): a Linux application, even if fully statically linked against all dependencies or packaged with all its shared libraries, may fail to run on another Linux distribution if that distribution has an older version of glibc than the one the executable was linked against. Therefore, to ensure that the application works on the broadest range of distributions, it is necessary to link the application against the oldest possible version of glibc, or to build the application on the LTS (Long Term Support) version of the supported distribution.
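As an aside, glibc does expose its version at runtime via the real gnu_get_libc_version() function, so a tiny launcher built against an old glibc could print a friendlier message than the loader's "GLIBC_2.xx not found" error. A hedged sketch (the launcher idea is my illustration, not something the comment prescribes):

    #include <stdio.h>
    #include <gnu/libc-version.h>   /* glibc-specific header */

    int main(void) {
        /* Print the glibc version the process is actually running against. */
        printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }

Note the main game binary itself can't do this check: if its glibc requirement isn't met, the dynamic loader refuses to start it before main() ever runs, which is why such a check only makes sense in a separately built launcher.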

If one does not have enough resources to support Linux, an alternative approach, assuming OpenGL is used, may be building the executable for Windows and Wine using the MinGW compiler, either on Windows or by cross-compiling on Linux using dockcross's MinGW images (https://github.com/dockcross/dockcross). At least with Wine, one will not have to deal with glibc compatibility issues.


> The Linux fragmentation could decrease if Linux distros adopted a MacOSX-like standardized app-bundle format.

Everyone decided this so we got 3 different "standard" formats... (Snap, Flatpak, and AppImage). See: that XKCD about standards.


And macOS has apps that install with .pkg, .app bundles, .dmg, homegrown installers based on shell scripts, zipped archives...

And windows has .msi, whatever is used to install windows store packages, NSIS, Inno Setup...

It's a way overblown complaint. I use AppImages built on CentOS 7 for my own stuff and have never heard of anyone having issues with them.


Yep, but the installation is far simpler than the procedure for installing a Linux application not available in the repositories, or an application that uses unstable libraries likely to change their ABI. All one needs to install a MacOSX app bundle is to drag and drop the .app folder or .dmg disk image to /Applications. MacOSX reads the plist metadata and finds the icon and the executable path, then renders the icon, making it easy and intuitive for non-technical users to just click and run the application.

On Windows it is easier to build self-contained applications as there is no RPATH issue. All one needs to build a self-contained application that is easy to ship and works everywhere is to put the shared libraries in the same directory as the executable and create a zip archive or an MSI installer. When applications are installed, they are installed in a single folder; files are not scattered across the filesystem - binaries in /usr/bin, /usr/local/bin, /lib, /usr/share, ... - as on Linux and other Unices.


Here's an issue with AppImages: they don't work with musl. Makes me sad.


I mean, targeting musl means pretty much using a different operating system than GNU/Linux. The ELF linking semantics pretty much enforce that you're using the same libc implementation for your whole program.

That's like complaining that, say, a program built against cygwin on windows, can't work on a system without the cygwin dll (and all its dependencies also compiled against the cygwin dll).


Well, the problem with Snaps is the complex sandbox, lack of user control, forced updates and so on, which are disliked on many non-Ubuntu distributions.

Flatpak attempts to solve the dependency hell problem by providing a standard runtime - a set of libraries (glibc, GTK or Qt) with fixed versions - and requiring the developer to build the application linking against that runtime, which avoids binary compatibility problems and dependency hell. The trouble with Flatpak is the integration with the desktop.

AppImage attempts to solve dependency hell by bundling everything into a single launcher executable with a SquashFS file system image payload. The disadvantage of this approach is that it is only possible to use a single executable. AppImages are also not free from glibc compatibility issues.

A MacOSX-like app bundle, and a change in Linux development culture towards building and designing applications as self-contained from inception, would strengthen the Linux desktop and reduce the application packaging work that is still duplicated across Linux distributions.


Aren't Linux syscalls backward-compatible? So can't you just statically link your own libc clone (like MUSL or whatever)? (Can't statically link normal libc due to licensing, IIRC)


The system calls are documented and backward compatible. However, glibc is only backward compatible, not forward compatible. An application built linking against an older version of glibc than the one found on the deployment machine is pretty likely to work on that machine, whereas the other way around will result in a linking error.

By using MUSL it is possible to build fully statically linked applications that work everywhere. However, the dlopen()/dlclose() calls, which are used for loading shared libraries and plugins at runtime on Unix-like systems, do not work in statically linked MUSL binaries.
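To make the dlopen() point concrete, here's the standard runtime plugin-loading pattern (a generic sketch; "plugin.so" and "plugin_init" are hypothetical names). With glibc this works when linked with -ldl; in a fully statically linked musl binary the dlopen() call simply fails:

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        /* Load a plugin at runtime and call a symbol from it. */
        void *handle = dlopen("./plugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        void (*init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
        if (init) init();
        dlclose(handle);
        return 0;
    }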


Glibc is LGPL, which makes some exceptions for system libraries, which glibc would fall under. It's been a while since I've read it and IANAL, so I'm not sure what exactly would be allowed.

The bigger problem is that glibc cannot be statically linked properly. It dlopen's some libraries, like for NSS. Maybe games could avoid triggering those cases but musl is definitely better.

While syscalls themselves are backward-compatible (even for faulty behavior), I think I've read somewhere that some parts of DRI are not, but I've lost the source on that.


The bigger problem is that for games you will need to interface with OpenGL/Vulkan and probably PulseAudio/ALSA, and those are all shared libraries which you cannot use from a completely static executable.


And this still leaves you open to issues with the absolute myriad of driver issues that exist.


yes and then the vocal minority of linux users roast you on steam and tank your rating. chef's kiss


I think you can support only Debian-based distros like Ubuntu. Then you would reach the common Linux users who install things with the package manager. More advanced Linux users would know how to get things running under Gentoo or Arch, I guess.


I would disagree. As another comment noted, it would be better to compile statically where possible and query for directories. I had a game fail to launch because of a /bin/bash shebang not being found, even though the common shebang practice is /usr/bin/env bash. No one is expecting perfection, but there are some common best practices out there, like following XDG. The more you couple things to hard-coded paths and make them work on just one system, the more you limit accessibility and things "just work"ing on other distros.


As others have said, if you want to distribute on Steam, you have to support "SteamOS and Linux", and can't just say "Supports SteamOS and Ubuntu" or "Partial Linux support" or something like that. If you support any Linux at all, Steam tells every Linux user that your game will work for them.


Steam itself only officially supports Ubuntu LTS. In practice most distributions work, same as for most games.


Is it actually common to have distro-specific issues like that? I'm skeptical. There are people playing CS:GO with Arch + i3wm just fine. I would think just having the right versions of things packaged is enough, and maybe that distros with fresher packages will be better off.

Alternatively, if more games were free (as in freedom) software, a large community could better sort out some of these issues.


Steam is pretty limited in official support, isn't it? Even if a game targeted a less popular distro, say one optimized for gaming, I'd have no issue dual booting into a different Linux distro just to play it, since it's free anyway and would likely be familiar enough to maintain.

Edit: Though the game would have to have pretty good replay value, I'm not installing another OS for a 10 hour game.


As a Linux gamer, I’d say: just make sure the Proton version works, focus on fixing glitches there, don’t bother spending time on porting any code otherwise.

Pretty much all games I play are through Steam’s compatibility layer anyway, and nowadays it’s a very smooth experience.


I agree. The few times I can't run it via Proton ( Fallout 4, FF11 come to mind ) I jump on Windows VM anyway. It is absolutely bananas how powerful hardware is these days.


FF11 works fine with proton for me


> Make sure the Proton version works

On which distro and graphics hardware? ;)


Good one. I’d say, take either Steam or Lutris or something like that, so that the distro is abstracted away as much as possible.

I’m not a game dev so I don’t know how much you still actually notice of the distro / hardware once you’re running in Proton or Wine.


The Flatpak version of Steam would be a good candidate, as anyone can install it on any distro (even non-glibc based ones).


You do. You totally do.


You spend so much time worrying whether you will sell enough copies to break even. Linux support time could be spent fixing a bug, marketing, or polishing a feature. Plus, negative reviews can hurt you a lot, so you can't really support Linux half-heartedly - it needs to work as well as the other versions. The average serious indie game makes $14k (edit: 16k). Steam changed their recommendation algorithms to promote aaa and top sellers


> Steam changed their recommendation algorithms to promote aaa and top sellers

This was very noticeable, even as just a user. Where the "more like this" section on the pages of games you liked used to routinely surface smaller titles that looked really cool, it now shows the exact same set of games recommended for most of them, and it's really, really annoying.


> The average serious indie game makes $14k.

Source?


Might actually be 16k. The research is from Mike Rose, an indie publisher

https://drive.google.com/file/d/1W6lZir97bUU0KdvIGNIVWG0O-_A...


If the parent's main point is that indie game developers, on average, don't make tons of money, he's right: https://twitter.com/GreyAlien/status/1227557601786912769


I was hoping for something more descriptive I guess. Steam used to be much pickier about which games were allowed on the platform. Now it's anything goes, and there are a lot more lower-quality games on it that sell for under $5, have no marketing budget, and just kind of look bad.

The indie games I've played all seem to have little news posts about 1 million copies sold. So... what's the return on an indie game of 2012-era indie quality?


Steam has the "Steam Linux Runtime" these days[1]. It runs games inside a container with a fixed set of libraries. As long as it runs in there you don't need to worry about the host distribution.

[1] https://github.com/ValveSoftware/steam-runtime


You're still using the system specific X/Wayland, Pulseaudio, and the system video drivers. I would bet that a majority of game bugs come out of one of those.


Both PulseAudio and X.org are in general stable; I don't know about Wayland, but I don't think many games target it yet.

The major issue is OpenGL drivers; they can be a pain on Linux (especially proprietary ones like NVIDIA's).


As far as I know, X has many issues with things like multiple monitors, full-screen programs, low-latency input etc. Not sure if Pulseaudio similarly has issues with things like surround-sound systems that a game uses more in-depth.


> multiple monitors

If both monitors have a similar configuration (DPI, refresh rate) it is fine (Xrandr fixed many of the issues that X previously had). If the monitors have different configurations it isn't, but this problem will show up on the desktop too, and I am sure that nobody will report a problem that they have on the desktop as a "game bug".

> full-screen programs

X doesn't have a concept of full-screen, but anyway, it mostly works fine if you grab the root window (of course you shouldn't write code manually to do this; your game engine will probably do it for you automatically).

> low-latency input

X latency is fine for most games, and in many cases you will not use X to handle input anyway. Also, if your game really needs low latency (only a minority of genres do), you can bypass X.

Anyway, I was not referring to specific X or PulseAudio problems, which you will have whether you put things in a container or not. I am just saying that even though steam-runtime doesn't bundle those libraries, it is not much of a problem since those APIs are stable.
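For what it's worth, "bypass X" for input usually means reading evdev directly. A minimal sketch (my illustration; the device path is a placeholder, and the process normally needs to be in the 'input' group or run as root):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <linux/input.h>

    int main(void) {
        /* Open a raw input device and dump key events as they arrive. */
        int fd = open("/dev/input/event0", O_RDONLY);  /* pick the real device */
        if (fd < 0) { perror("open"); return 1; }
        struct input_event ev;
        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            if (ev.type == EV_KEY)
                printf("key code=%u value=%d\n", (unsigned)ev.code, ev.value);
        }
        close(fd);
        return 0;
    }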


You said that 'as long as it runs in the Steam Runtime, you don't need to worry about the host distribution', and I was pointing out that you'll still communicate a lot with the host distribution. For example, someone could be running a distribution that ships with some ancient copy of X, or Pulseaudio, and that would cause problems for your game even if it is running in a container.


Even on Windows you wouldn't release a game that works on Windows older than 10 nowadays; you can assume a reasonably up-to-date distro. I am quite sure the Steam Runtime works at least as far back as Ubuntu 16.04, the oldest distro that I can think someone would want to run Steam on.


X is only a problem if you use Xlib directly. Instead use SDL [0], which will handle pretty much all window manager peculiarities for you.

[0] http://libsdl.org/
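For illustration, the usual SDL2 way to get well-behaved full-screen is SDL_WINDOW_FULLSCREEN_DESKTOP (borderless full-screen at the desktop resolution, no mode switch), which sidesteps a lot of the X11 quirks discussed upthread. A minimal sketch:

    #include <SDL.h>

    int main(int argc, char **argv) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        /* Borderless full-screen at the current desktop resolution. */
        SDL_Window *win = SDL_CreateWindow("Game",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_FULLSCREEN_DESKTOP);
        if (!win) {
            SDL_Log("SDL_CreateWindow failed: %s", SDL_GetError());
            SDL_Quit();
            return 1;
        }
        SDL_Delay(2000);   /* stand-in for the game loop */
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }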


How does libsdl help if your players are complaining their monitors are flickering when your game is in full-screen mode?


I do remember reading an analysis by a bigger studio here recently who basically said the same thing - the Linux userbase was less than 5% of all users, but over 90% of their support requests were for Linux. It was just unfeasible to support long term, the support was costing them more money than those users could ever bring.


It was >0.1% of users and 20% of automated bug reports. There isn't any data on how many Linux users report bugs.


And they were beaten to death on HN. So I was kind of surprised to see OP being the top voted comment.


As a linux user, thank you for at least trying. Also I don't know if this is applicable but don't let a few rude users color your perspective too much. There are lots of people like me who are thrilled if you just support "Ubuntu". That at least makes it possible for me to get it working.


It depends on the game. SNKRX is popular right now. It's written entirely in cross-platform Lua, uses the cross-platform Love2D engine which is packaged by Linux distros already, and is open-source on GitHub. So it was very easy to make it run on Linux.

I just had to fix a crash because my distro's Love2D uses LuaJIT which only supports Lua 5.1, but the game's source contains a bit that requires Lua 5.4. But it was an easy patch (which unfortunately cannot be upstreamed because upstream doesn't want PRs).

For other games, as long as the game provides an Ubuntu version, it'd work for me. I run an Ubuntu Docker container for Steam and other "first-party software" (binary packages directly from the software manufacturer as opposed to distro repos), because when such software says "it supports Linux" it almost always means "it supports Ubuntu".


That is exactly why I have been saying for quite some time now that Proton has killed native Linux games. Developers would be crazy to publish a native Linux game right now. Sure middleware like Godot helps but even then it's still probably not worth it.

With Proton you can publish a game and deny all responsibility for Linux support. "Sorry we don't support Linux but we hear it runs great on Proton!"


It wouldn't be unreasonable to officially target Proton support. Hell, I'd like that almost as much as native support. And we can give back to a community that is bringing a massive catalog of Windows software to Linux by sending our patches to the compatibility layer instead of just our proprietary source tree.


I think that runs into issues with anti-cheat though. Neither the very popular Easy Anti-Cheat nor even Valve's own Valve Anti-Cheat works through Proton, while Linux-native versions of both work fine.


I'm unsure how I feel about classifying an entire game engine as middleware. For most engines (Godot included!) building your code for a different platform is as easy as changing your target in a dropdown menu.

I'm also deeply curious as to exactly how many indie game developers are writing code that interfaces directly with these low level systems and graphics APIs. In my experience, building cross platform games (I've shipped from Unity, Unreal, Godot, and XNA/MonoGame) is trivial and the framework handles 100% of the complexity of porting. From the sounds of this comment thread, everybody is writing their game in raw shader language and then having to port that to Vulkan or OpenGL.


I mean, you could deny Linux support without Proton too. I think that's a separate issue.

Now, supporting Linux via Proton, that's where Proton can kill native Linux builds. But I'm not sure how common that is, or will be.


That has nothing to do with Linux at all, you can just as easily get some weird requests from people trying to run your game at 640x480 on Windows 98 or some other strange thing like that.

The usual response here (from any vendor, not just an independent game developer) is to say you only support the latest LTS version of Ubuntu/SteamOS with the officially supported drivers there and that's it. You're absolutely right to do that. If you want obscure distros to be able to run your program, you can open source it and let them deal with the packaging/testing/maintenance. The fact that all the OS packages are open source is the only reason all the random distros are even able to exist, so you're already making it difficult for them when you don't do this. No reason to dance around that.


"just as easily"?


What is the confusion? People on Windows 98 are able to send emails too.


People on Windows 98 don't expect modern games to work for them. Many people on strange Linux setups do expect games working on some Linux distro to work for them as well, and complain when they don't. Those people are the reason Linux versions get so many more support requests.


That's just about managing expectations, and it falls on you to be clear in your marketing. If you said "we support all versions of Windows" then you could expect someone to misinterpret that, the same way they would misinterpret "we support all versions of Linux." So really to make it clear, you might consider only saying "we support Ubuntu Focal" or something like that. Most products I see now make it pretty obvious that they only test on some combination of Windows 7/8/10 and above, so just do the same for whatever Linux distribution.


>you can just

You could, but clearly this developer didn't. You're responding to someone's real life experience with a hypothetical.


It's not hypothetical, I've seen the odd comment like that on various forums and github, etc. You can say you aren't going to support an obscure setup without judging people for having it.


My advice would be that if you cannot give full support to linux, at least be open about your software. The linux community is very good at getting things working. They can figure most problems out for themselves, but only if they have some information to work with. Toss them some documentation, anything, and 99% of the time they will come up with a solution on their own.

Except DRM. Attach DRM or anti-cheat to your project, software that actively doesn't want to run on anything but a specific OS, and the linux community will turn on you.


Multiplayer games without any anti-cheat are really not much fun. What is the solution there for Linux?


You can make a pretty good experience with server-side anti-cheat and authoritative networking. Trust-factor systems like CS:GO's also work well (as long as you prevent abuse, which is basically what went wrong with CS:GO's implementation). You can also design your game in a way where cheating just isn't effective (e.g. Rocket League). A conceptual sketch of server-side validation follows below.
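Here is what "authoritative networking" can mean in practice (my illustration, with made-up numbers): the server validates every client movement update instead of trusting it, so a hacked client can't teleport or speed-hack.

    #include <math.h>
    #include <stdbool.h>

    #define MAX_SPEED 10.0   /* world units per second, hypothetical */

    typedef struct { double x, y; } vec2;

    /* True if moving from 'from' to 'to' within 'dt' seconds is legal. */
    bool validate_move(vec2 from, vec2 to, double dt) {
        double dx = to.x - from.x, dy = to.y - from.y;
        double dist = sqrt(dx * dx + dy * dy);
        return dt > 0.0 && dist <= MAX_SPEED * dt * 1.05; /* 5% jitter slack */
    }

If validation fails, the server snaps the player back to its own authoritative position rather than accepting the client's claim.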

Ultimately the best way to have fair games is to promote finding players through avenues other than official matchmaking: friends or even just random people on something like discord.


> Ultimately the best way to have fair games is to promote finding players through avenues other than official matchmaking: friends or even just random people on something like discord.

Completely agree with you. The current online gaming model where people play with untrustworthy strangers is stupid and broken. We should be playing with friends. Instead we get invasive rootkits installed in our computers and they don't even fully prevent cheating.


How is rocket league designed to make cheating not effective?


How would you cheat in rocket league? As long as the server ensures you can't do illegal moves there is no way to cheat, as writing an AI to play rocket league for you is infeasible, unlike for example an aimbot.


I imagine you could implement a majority of what traditional anti-cheat software gives you as a server-side machine learning algorithm. In my experience as a gamer, the vast majority of cheaters are completely obvious, so that would likely be as true for machines as it is for human players. This comes with the added benefit that your anti-cheat mechanisms would be much more difficult for cheaters to reverse engineer, since they won't have access to the binary. Of course, I'm not aware of any such off-the-shelf solutions, so this point is moot for indie developers who can't afford to build their own anti-cheat software.

Disclaimer: I'm not a game dev and don't really know what I'm talking about.


1) You'd better be comfortable with false positives, 2) cheaters are smart too and will learn, 3) how could a small game dev company afford to create an accurate model?


All those apply to client-side anti-cheat.


Definitely not to the same extent. EAC is part of an SDK now


Don't play multiplayer games on Linux.


I don't know. Maybe server-side analysis? Just don't ship borderline malware kernel modules and there will be no problems.


Dedicated servers and active admins.


Last one also introduces other problems.


I hate to say it (as I type this comment on a Linux box), but you're almost better off pushing a 2D game in a browser if you want to support Linux =( The variations in Linux distros just get a little too varied


Linux just isn't worth supporting financially. At best you do slightly better than break even. But the opportunity cost is so high that the time is almost always better spent making the game better for the 99% of non-Linux players.

If you say this publicly then angry anime avatars will yell at you on Twitter.


Yeah, I think it says a lot about desktop Linux that running the Windows version under Wine is often a better experience than the native version.


It's only a better experience because developers spend all their efforts on the Windows build and the Linux build is just an afterthought.


It is also because on Linux, developers above the kernel have very little concern for keeping their APIs and ABIs stable while improving them (it isn't just keeping some 29839283-year-old library around that never receives any updates; you need to keep stuff up to date - imagine, e.g., SDL 1.2 without support for HiDPI - though sadly most of the time not even that keeping-around happens, with distros dropping older libs left and right). The notable exception being Xorg, so of course the CADT model ensured that it had to be abandoned in favor of Wayland, which while being barely usable has already managed to break compatibility with itself.


I wish people would not use the phrase "CADT" like this, it's not relevant, and it's insulting and ableist. Nobody wants to maintain legacy software for free in their spare time. That's all it is. It has nothing to do with the age of the person or the status being "attention deficit," which is a real condition that people actually have and suffer from. If you disagree, you're welcome to spend your spare time working on it. I support you doing that. I really doubt anybody will though, because outside of servers and Android, there is very little interest in doing anything on Linux.

If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned. But I also doubt the answer will please you, because the reality is that the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.


> I wish people would not use the phrase "CADT" like this, it's not relevant, and it's insulting

Yes, that's the point of an insult.

> Nobody wants to maintain legacy software for free in their spare time. That's all it is.

This is 100% false, a lot of people do. In fact, a language (Free Pascal) and framework (LCL) I am using have a very good track record of preserving backwards compatibility while at the same time continuously improving. I have code I wrote two decades ago that works fine with them and will automatically get the newly introduced features just with a recompile.

The same can't be said for, e.g., Gtk: Gtk2 apps not only won't get any new features from Gtk3, they won't even compile. Same with Gtk4, because making the mistake twice wasn't enough.

> If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned.

There are no technical reasons, Xorg is code, code can be modified. It is all political reasons at best and people wanting to rewrite stuff they'd rather not bother learning about. As JWZ writes in his CADT page:

<<Fixing bugs isn't fun; going through the bug list isn't fun; but rewriting everything from scratch is fun (because "this time it will be done right", ha ha) and so that's what happens, over and over again.>>

> the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.

Xorg improved all the time over the years going back to the XFree86 days, adding new features consistently without breaking existing code and applications. If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.


If the point is to look like someone who is unnecessarily rude, I would kindly request that you not do that. Please avoid throwing insults around and starting flame wars. It doesn't help anyone. We can have a good conversation without doing that.

That's great that people are doing that with FPC, but if they're continually adding new features and removing deprecated things then that's not legacy software. To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here. If it were easy or profitable to do that in GTK, somebody would have done it already. Half the reason things changed is because the entire underlying stack changed along with the hardware -- this is not even remotely comparable to something like a self contained compiler for a programming language.

If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome, and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years. From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions. This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.

You're wrong that there are no technical reasons, I assure you the technical reasons are real. Again, I can tell you if you're willing to listen, but if you're going to blanket deny they exist, we can't really have a conversation, so I won't bother typing it out. Let me know if you change your mind. In context your JWZ quote doesn't make any sense either, because Xorg got bug fixes for a very long time. It's being moved away from because it's no longer effective to keep doing that, which is the opposite of what that quote suggests. Please don't let rude and dismissive quotes like that be the guiding line of your discourse, let's actually discuss the real issues.

>If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.

This is the root of the misunderstanding -- there is no significant difference between these two. It's at the point where it needs a major refactor or rewrite to make continued work on it worth it, which is going to break things, and at that point writing a new display server makes a whole lot more sense.


> That's great that people are doing that with FPC, but if they're continually adding new features and removing deprecated things then that's not legacy software.

They are not removing deprecated things, whenever possible the old things are still around and call the new things. At most they move some stuff to another unit (like a C include or Java import), which is a search+replace across the codebase that takes literally seconds. This happens extremely rarely though; I have non-trivial code that compiles with both a 14-year-old release and an SVN checkout (which they also try to keep working).

> To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here.

Lazarus' currently main backend for Linux is GTK2 exactly because the Gtk developers broke backwards compatibility with GTK3. The GTK3 backend is close to completion though - just in time for GTK4 to break things again!

There is also a GTK1 backend - it was broken a couple of years ago until someone noticed and fixed it. These are not high priority backends, but they keep them in working condition.

Personally I have contributed to the GTK2 backend by fixing alpha channel support, since so far GTK2 has the best user experience of all toolkits available on Linux (IMO, of course). Since Lazarus has a policy of trying not to introduce unnecessary dependencies, I went the extra mile to ensure that it works even with very old versions of the library.

> If it were easy or profitable to do that in GTK, somebody would have done it already.

That is the point, it isn't easy or profitable. But something being not easy nor profitable doesn't make it wrong. After all the entire CADT thing is about focusing on the easy stuff because that is fun.

JWZ's page is very short and amusing to read, i recommend reading it.

> If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome

By letting it become "overly complex and burdensome". To spare 2-3 developers that hard work, this approach pushes the hard work onto 20000-3000000 developers instead.

> and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years.

Which they shouldn't have done.

> From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions.

But they should, or at least they should wrap these APIs so that they call the new stuff, and said new stuff should try - now with the benefit of hindsight - to avoid being designed in a way that is so easily broken (which will also help with the maintenance of the wrappers).

> This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.

Free Pascal has a ton of burden from design decisions they made in the 80s, 90s, etc including keeping source code compatibility with Delphi and all the boneheaded decisions Borland/Inprise/Embarcadero/CodeGear/whatever did. But also they keep compatibility with standard Pascal, Mac Pascal, Turbo Pascal (which is different from Delphi) and a bunch of other dialects and even specialized dialects like Objective Pascal (for Objective-C interop). They do that by allowing the source code files to switch dialect with dedicated compiler switches and even enable/disable parts.

Yes, this adds a TON of overhead and burden on the compiler writers' side but everyone involved agrees it is a good thing to avoid breaking others' code.

I have a feeling you are greatly underestimating the combined effort that went on FPC and LCL.

> In context your JWZ quote doesn't make any sense either, because Xorg got bug fixes for a very long time.

I was explicit in my original message that Xorg is among the projects that are actually stable so, yes, CADT does not apply to Xorg.

> there is no significant difference between these two.

Of course there is.

> It's at the point where it needs a major refactor or rewrite to make continued work on it worth it

Key words: "worth it". Worth it to whom? People who want to play on shiny toys?

> which is going to break things

They may introduce bugs with the refactor, but as long as these are acknowledged as bugs and get fixed, there wouldn't be a problem.

The problem is if they break things intentionally. THOSE are unavoidable. When I can run an X server in Win32, which has a completely different API and display model and where the X server barely has any control over the underlying window system, it is absolutely inexcusable to have incompatibility issues in an environment where the X server has complete control over the display, input, etc.

> and at that point writing a new display server makes a whole lot more sense.

Only if you see broken glasses from a bull in a glass shop as unavoidable without questioning why the bull was there in the first place.

EDIT: also, in the other message you mentioned that people do not want to maintain legacy software in their spare time. While that isn't entirely true - a lot of people do - it is true that many don't, because it can be a lot of work. THIS MAKES IT EVEN MORE IMPORTANT FOR THE SOFTWARE DEPENDENCIES TO NOT BREAK, so the little time people can afford to put into their software isn't wasted keeping up with all the breakage their dependencies have introduced just so they can do the same thing in a different way.

To use GTK as an example again: Gtk1 and Gtk4 fundamentally provide the same functionality - sure, Gtk4 has a few more widgets and some fancy CSS support, but fundamentally it is all about placing buttons, labels, input boxes, etc on windows and reacting to events. Yet people who wrote Gtk1 code had to waste time updating it to Gtk2, then waste time again updating it to Gtk3, and will now have to waste time yet again updating it to Gtk4 - all so that they can keep having buttons and labels and input boxes on windows that people can click to do stuff.
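For a concrete picture of that churn, here is a from-memory sketch (on_click is a hypothetical callback; this isn't code from any particular app):

    /* GTK2-era "window with a button": */
    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *btn = gtk_button_new_with_label("Click me");
    g_signal_connect(btn, "clicked", G_CALLBACK(on_click), NULL);
    gtk_container_add(GTK_CONTAINER(win), btn);
    gtk_widget_show_all(win);
    gtk_main();

    /* The same thing after a GTK4 port: */
    GtkWidget *win = gtk_window_new();            /* signature changed   */
    GtkWidget *btn = gtk_button_new_with_label("Click me");
    g_signal_connect(btn, "clicked", G_CALLBACK(on_click), NULL);
    gtk_window_set_child(GTK_WINDOW(win), btn);   /* container_add: gone */
    gtk_window_present(GTK_WINDOW(win));          /* show_all: gone      */
    /* gtk_main() is gone too - it now has to be wired into a GtkApplication. */

Same button, same label, same click; only the incantations changed.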

That is a MUCH worse waste of time because all that time these developers spent to keep up with the breakage could have been spent instead on working on the actual functionality their programs provide.

Instead they not only have to waste time in keeping up with Gtk just so they can do the same stuff, but chances are that due to these changes they are introducing new bugs in their programs.

See XFCE as an example. Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).


>They are not removing deprecated things, whenever possible the old things are still around and call the new things.

That's great that they have the bandwidth to do that, I commend them for it, but other projects don't have the time to maintain and work around deprecated APIs forever.

>Lazarus' currently main backend for Linux is GTK2 exactly because the Gtk developers broke backwards compatibility with GTK3.

If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper. It would be mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it since they would rather focus on helping people port their apps to the new way. But this would only work for some things; other things simply can't be provided with any amount of backwards compatibility.

>That is the point, it isn't easy or profitable. But something being not easy nor profitable doesn't make it wrong. After all the entire CADT thing is about focusing on the easy stuff because that is fun.

If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them, those projects don't really care about the toolkit, they just want to have some kind of GUI quickly so they can then focus on the rest of their program. At least that's been my experience with them anyway, I sympathize with that but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.

>JWZ's page is very short and amusing to read, i recommend reading it.

I read that page more than a decade ago; as I've said, I think it's condescending flame bait that serves to distract from the real technical issues. And it's ableist towards people who actually suffer from attention deficit disorders. If you want to help fix these issues, please don't refer to it.

>By causing it to become "overly complex and burdensome". To spare 2-3 developers that hard work, this approach pushes it onto 20000-3000000 developers instead.

I'm sorry, I really don't understand what you're saying here. The 20000-3000000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?

>Which they shouldn't have done.

I would urge you to try to maintain all those system libraries for a few years, and then revisit this statement and see how you feel about it after that.

>But they should, or at least they should wrap these APIs so that they call new stuff

Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.

>I have a feeling you are greatly underestimating the combined effort that went into FPC and the LCL.

Not quite: my point is to illustrate that the same amount of work needs to be done in other projects if you want that level of backwards compatibility.

>Key words: "worth it". Worth it to whom? People who want to play on shiny toys?

If you want to describe new features, improved performance, security fixes, etc, as "shiny new toys" then yes, I guess you could say that. I'm not sure what the distinction here is because before you said you wanted these shiny new features?

>The problem is if they break things intentionally. THOSE are inexcusable.

That's also the point I'm getting at: Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it. Any rootless X server (such as the one you used on Windows) will also cause some apps to not work in subtle ways, compatibility is not perfect there either, and Xwayland is basically built with the same design constraints.

>Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).

Depending on your project, porting to GTK4 won't be a waste of time. The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays. But this is not something that can be provided by a wrapper, to get the major benefits out of it, the apps have to rewrite their widgets to use the scene graph instead of using old-style immediate mode drawing. There would be little benefit if you didn't do that and continued to use GTK2-style drawing. For me at least, that's why I think it's mostly a bad idea to try to make a complete compatibility layer. Maybe it would work for some widgets but apps really need to do a real port if they want the major benefits.


> That's great that they have the bandwidth to do that, I commend them for it, but other projects don't have the time to maintain and work around deprecated APIs forever.

That is the thing: Lazarus and the LCL are almost entirely made by volunteer developers working in their free time, and yet they manage to not break things, unlike other projects that have corporate backing and full-time developers.

It isn't a matter of bandwidth, it is a matter of caring about the work and time other people have spent on their platform.

> If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper.

That would be pointless: the LCL itself is already a compatibility layer for GUI applications (the LCL is primarily a GUI toolkit), and Gtk2 is just one of several backends. If they had to write Gtk3 support, they might as well do a Gtk3 backend anyway (which is what they did - Gtk3 work is already in progress, it just isn't as stable as the Gtk2 backend).
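In C terms the design looks roughly like this (a loose sketch of the idea only - not actual LCL code, and the names are made up):

    /* Applications program against one stable interface; each backend
       (gtk2, gtk3, qt5, win32, ...) fills in the table behind it. */
    typedef struct {
        void *(*create_button)(const char *label);
        void  (*on_click)(void *btn, void (*cb)(void *user), void *user);
        void  (*show)(void *widget);
    } Widgetset;

    extern const Widgetset gtk2_widgetset;   /* hypothetical backend tables */
    extern const Widgetset gtk3_widgetset;

Swapping backends never requires touching application code - which is exactly why breakage in one backend's toolkit costs the LCL developers time instead of their users.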

My point was that they wouldn't have had to waste time on the Gtk3 backend at all if Gtk3 hadn't broken backwards compatibility - they would instead be adding support for the new stuff Gtk3 introduced and using their limited time on more important things.

> It would be mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it since they would rather focus on helping people port their apps to the new way.

If upstream hadn't broken backwards compatibility with Gtk3 they wouldn't have to focus on that either, and everyone would be spending their development time on what their applications are actually about instead of keeping up with their dependencies' breakage.

> But this would only work for some things, other things simply can't be provided with any amount of backwards compatibility.

Which only happens because the upstream developers broke backwards compatibility.

> If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them, those projects don't really care about the toolkit, *they just want to have some kind of GUI quickly so they can then focus on the rest of their program*.

But that is exactly the issue here: applications aren't using Gtk (or whatever) because they love Gtk itself as an entity, they use it because Gtk provides something - a GUI library - that they want so they won't have to make their own and can instead focus on the stuff that actually matters: their application's functionality. It makes perfect sense that they won't want to waste time (especially if they are not working on their application full time) keeping up with Gtk's breakage.

Libraries in general are a means to an end, not the end in themselves.

Having a library stop being compatible with its previous versions means that a developer has to stop working on their application (the stuff that matters) to waste time on something they initially picked up in order to save time - so it makes sense to try and avoid that.

> At least that's been my experience with them anyway, I sympathize with that but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.

Changes can be made in the toolkit without breaking existing applications. They may not look as pretty as if you break things and rebuild them, but at the same time you keep existing code working, existing applications running, existing knowledge valid and help make a more reliable platform for both developers (who can rely on your platform to help them instead of wasting their time) and users (who can rely on your platform to have their applications working even if the developers abandon the applications).

It is even good for keeping open source applications alive - it makes it easier for new developers to pick up some abandoned code and keep it working. As an example, some years ago I got MidasWWW working:

https://i.imgur.com/W37vhBW.png

...the codebase of which was almost 25 years old at the time. Yet because Motif didn't change its API, I barely had to touch the UI. It took me an hour or two (I do not remember exactly, it wasn't much) to get it working, and the vast majority of the changes were for old C-isms and 32-bit assumptions that modern GCC on a 64-bit machine complained about. In fact, the only UI-related changes I had to make were because the Motif version the browser was written for had some incompatible changes from the "base" Motif of the time (ie. it wasn't exactly Motif's fault but that of whoever distributed their modified version).

> I'm sorry, I really don't understand what you're saying here. The 20000-3000000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?

No, because they all work on different projects.

What I mean is simple: if Gtk makes a breaking change because one or two of their developers wanted to make their lives a bit easier, that breaking change will ripple out to every single project that relies on Gtk and all the developers who work on those projects. (While Gtk is a popular example that causes a ton of applications to waste time keeping up with it, it is certainly not the only case - SDL 1.2 to SDL2 was another, though at least AFAIK there is now a drop-in SDL 1.2 replacement wrapper that calls SDL2.) One popular library making one breaking change, even with good intentions, can force thousands of other developers on thousands of other projects to deal with it - and people do not work synchronously, so this can take a long time.
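That wrapper also shows what the approach costs: you export the old entry points and implement them with the new API. A rough sketch in that spirit (not actual sdl12-compat code; error handling omitted):

    static SDL_Window   *g_win;
    static SDL_Renderer *g_ren;

    /* SDL 1.2's entry point, implemented on top of SDL2: */
    SDL_Surface *SDL_SetVideoMode(int w, int h, int bpp, Uint32 flags12)
    {
        g_win = SDL_CreateWindow("", SDL_WINDOWPOS_CENTERED,
                                 SDL_WINDOWPOS_CENTERED, w, h,
                                 (flags12 & 0x80000000u /* SDL 1.2 FULLSCREEN */)
                                     ? SDL_WINDOW_FULLSCREEN : 0);
        g_ren = SDL_CreateRenderer(g_win, -1, 0);
        /* hand back a surface the 1.2-era code can keep blitting into */
        return SDL_CreateRGBSurfaceWithFormat(0, w, h, bpp,
                                              SDL_PIXELFORMAT_ARGB8888);
    }

Old binaries keep working, and the cost lands on a handful of wrapper maintainers instead of on everyone downstream.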

> Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.

There are two reasons: 1. to dissuade the main developers from breaking things because, as you imply, it'd be additional work and they'd "feel" the effect of their breakage, and 2. to make it much easier to keep up with any changes, if necessary.

What you describe is really having others run behind the upstream developers to pick up their breakage so that the upstream developers won't have to care about breaking stuff, whereas what I describe is upstream developers not breaking stuff in the first place.

> If you want to describe new features, improved performance, security fixes, etc, as "shiny new toys" then yes, I guess you could say that. I'm not sure what the distinction here is because before you said you wanted these shiny new features?

You do not need to break backwards compatibility to provide those though.

> Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it.

Xorg's security issues have been greatly overstated. The server already has functionality to deny access to untrusted clients (ie. pretend they are the only application running), so you could apply that to applications you do not want to trust (or that request not to be trusted, e.g. browsers) - this is the same trusted/untrusted split that untrusted X11 forwarding over SSH uses. Beyond that there are other ways to improve its security, down to the sledgehammer approach of running separate nested instances. All the effort that went into reimplementing a display server from scratch with Wayland (making all sorts of new mistakes along the way) could have gone towards improving Xorg instead of breaking an already tiny desktop ecosystem.

> The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays.

HiDPI shouldn't matter much for performance unless the previous implementation was done on the CPU, but that has nothing to do with the rendering model.

Immediate-mode graphics APIs can still be batched - in fact this is what most OpenGL implementations did for years: when you request to draw a triangle, the implementation won't draw that triangle immediately, but keeps the request around in case more triangles and other commands come later. As long as what you "perceive" is the same, it doesn't matter whether an immediate-mode call is performed "immediately" or kept around for later. This can be an issue with single-buffered output, so you'd still need a path for truly immediate output, but most applications do double-buffered output anyway. And at the end of the day you still perform draw calls even with a scene graph, you just have better ways of batching those calls.
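The trick in miniature (a toy sketch - draw_triangle and upload_and_draw are made-up names, not any real API):

    typedef struct { float x0, y0, x1, y1, x2, y2; } Tri;

    void upload_and_draw(const Tri *t, int n);  /* hypothetical GPU submission */

    static Tri batch[4096];
    static int queued;

    static void flush(void)            /* one real GPU call for the whole batch */
    {
        upload_and_draw(batch, queued);
        queued = 0;
    }

    void draw_triangle(Tri t)          /* looks immediate to the caller... */
    {
        batch[queued++] = t;           /* ...but is merely recorded */
        if (queued == 4096)
            flush();                   /* and submitted in bulk */
    }

The caller's mental model stays "draw now"; the implementation quietly amortizes the work.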

Note that I'm not against the scene graph approach (though it does need some escape hatch), I'm just saying that you can optimize immediate-mode APIs a lot while preserving them.

But there is also another way: have both. Widgets can opt in to the new approach if they need the extra performance (after all, not everything will), which allows applications to keep working while converting piecemeal. Old widgets simply get a "canvas" scene-graph node created for them, where they can draw using the old API, while new/converted widgets use the pure scene graph.
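In GTK4 terms such a shim could look something like this (a rough sketch - GTK4's real gtk_snapshot_append_cairo() works along these lines, and legacy_draw is a hypothetical GTK2-era draw callback):

    static void legacy_shim_snapshot(GtkWidget *widget, GtkSnapshot *snapshot)
    {
        graphene_rect_t bounds;
        graphene_rect_init(&bounds, 0, 0,
                           gtk_widget_get_width(widget),
                           gtk_widget_get_height(widget));

        /* a cairo "canvas" node embedded in the scene graph: */
        cairo_t *cr = gtk_snapshot_append_cairo(snapshot, &bounds);
        legacy_draw(widget, cr);       /* unmodified immediate-mode code */
        cairo_destroy(cr);
    }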

Yes, this can introduce issues, but again, bugs can be fixed. And for a library as popular as Gtk (note that I do not refer specifically to Gtk here - it'd be the same for Qt or any other UI library that wanted to switch from immediate mode to a scene graph without breaking compatibility), it won't be hard to find programs to test this against.

There are ways to solve this, and other things, if the developers care about not breaking backwards compatibility.


Everything that matters to games has a stable ABI: glibc, OpenGL, Vulkan, libasound, libpulse*, Xlib.

If you need other support libraries that don't interface with the system just ship them yourself. Or let Valve do that for you with the Steam Linux Runtime.


Funny you mention that, because you do not include SDL (and rightfully so, since the actual configuration for SDL can differ from system to system) - and I actually had to remove SDL from some older games to get them to work under Linux, precisely because they decided to ship the libs they relied on (and SDL wasn't the only problematic one, just the best known).

Shipping libraries only helps if whatever those libraries rely on is also stable and the functionality they provide is still available. After all, a library might keep a stable ABI and yet all it can do is tell you that OSS is not supported - which, while technically correct and not a crash (assuming the program can handle not having sound), isn't very useful in practice.


Back in the Loki days I used to believe; I even lost a good opportunity at SCEE for being stupidly focused on FOSS game development.

Nowadays I don't care, Windows, macOS and mobile OSes FTW.


I haven't seriously tried to game on Linux in a number of years so this may have changed, but every time I've tried I get some kind of horrid tearing or stutter or low frame cap across a number of games that seems to be caused by inferior graphics drivers.

If by "developers" you mean the ones working on the unity engine or Nvidia proprietary graphics drivers then you're right, but in my experience there are a number of problems and pitfalls further down the stack which game developers can't reasonably be expected to mitigate.


I've been running Linux exclusively for years and never had any problems. For the most part, everything Just Works™. My previous laptop had integrated Intel graphics, which worked well enough. My current one has Ryzen integrated graphics, which also works well enough.

The only problem I ever had was in Wasteland 2, where in the second part of the game there was some bug with the fog on the world map with Intel drivers. Setting some obscure environment variable fixed that.


I recently installed Fedora on a laptop with an Intel GPU. I'm playing Factorio and it just works - smooth and nice.


Well, no surprise. It's not resource hungry and there is a hard 60 FPS lock for everyone, so no tearing as well.

There is a 60-80 FPS difference for me in CS:GO between Linux and Windows with AMD graphics.


> and there is a hard 60 FPS lock for everyone, so no tearing as well

That's not how tearing works.


The Linux build isn't an afterthought, it's deliberately ignored because the costs of supporting Linux vastly exceeds the returns from Linux gamers.

If you want to sell your game, the smart money is in putting all your resources into the Windows version.


Also because people (justifiably) don't want to distribute their games as source, so they can't be packaged sanely.


Windows is essentially the primary gaming API which has a pretty damned good secondary cross-platform implementation in Wine/Proton.


There are as many variations of Linux as there are people running it. There are only a few authoritative versions of OSX and Windows. I don't think it's too surprising.


This is a ridiculous statement. Every Windows install is unique as well in subtle ways, as is the hardware it runs on. The differences between Linux distributions don't really matter that much if you bundle your dependencies or use the Steam Linux Runtime to do that for you.


It is a problem not only for indie developers. I witnessed the same problem with Borland Kylix (Borland Delphi for Linux). In general it was a great idea: the IDE was good, and creating Linux apps easily using a RAD tool was really tempting. Having the ability to use all the Delphi plugins people had created over the years was also a big advantage.

Yet...

The problem was that Kylix required very specific kernel version (I think it also required some kernel module), so it mostly did not work out of the box and people got discouraged.

The fact that Borland failed to advertise it correctly and never put more effort into making the tool better is another story. Those were the times of Borland's identity change from RAD tool vendor to super-enterprise corpo Inprise, with some crazily expensive ALM tools competing with the likes of Telelogic DOORS (now an IBM brand), etc.


A tip for the open source / Linux community: let your customers help each other (set up a forum/message board or whatnot), as the ratio of knowledgeable power users is very high compared to, for example, Windows and Mac.


Proton does an excellent job! Something to consider is that many people (myself included) will offer consulting services to help get your game running natively on Linux with as little effort from you as possible. I personally offer that free to indie developers because I love seeing more native Linux support, but this kind of thing is often not very difficult for someone familiar with the target OS.

I'm curious as to what kind of game engine you're using where targeting Linux isn't as simple as choosing it in a dropdown menu as well, most modern engines support that very well.


For reference: Eric Barone, the Stardew Valley dev, made it exclusively for Windows with C#. When he was approached by Chucklefish, they said they'd handle porting it to other OSes and consoles for a cut.


I don't see why getting support tickets is a problem; surely you triage and prioritize them anyway, and if a fix only helps a small percentage of customers then it's not going to be a high priority.

I think it's unreasonable for any software developer to release a product and expect no bug reports to come back at them, but it still doesn't mean they have to tackle everything.


> As I only officially supported Ubuntu

Why not just tell them that? Is it really better to give up those dollars because someone is using a setup you don't support?


I did? Both on the store page and in my email replies. Most people were understanding though some weren't.

However, those dollars you seem anxious I not give up still didn't really cover the time investment of dealing with Linux - not just the support requests, but getting the build environment set up, performing the testing, and all of that.

For reference, the game in question did ~75% of units sold on Windows, ~25% on Mac, and some fraction of a percent on Linux. If I hadn't released on Ubuntu I would probably have lost less than $1000, gross.


> "so I'm on this specific Red Hat version, with this oddball graphics driver and three monitors and full-screen doesn't work with your game properly"

It simply isn't possible that there exists a technophile out there patient enough to set up such a non-Ubuntu rig, yet cave-dwelling enough not to thank their deity for the simple fact that any graphics-hungry software turns out to run at all without crashing.

Barring a copy of the original email and video testimony from the sender, it's more reasonable to believe this was someone trolling you (or perhaps even a team of someones if you received more than one such email).

Edit: clarification


I do find it rude that people would go to the devs for issues like that. It's one thing if it's not working on a system that's supposed to have support, but otherwise you should really be figuring it out yourself, or going to the community.

Maybe actually clarifying where the community lives, like WineHQ and ProtonDB do for running Windows games on Linux, would be a good start to help reduce devs having to deal with this sort of thing.


Could you have open sourced the game to allow Linux-motivated developers to fix their own bugs? Or is that too risky for you?


>I'm pretty sure Photon supports my Windows builds on Linux better than I was ever able to do with the native executable

The native Linux version of War Thunder crashes on launch for me, but the Windows version through Proton runs perfectly.


Targeting Proton/Wine is fine. Windows could already be considered an application layer for games. Most native Linux ports are actually worse than running through Proton/Wine.


I guess most studios are thinking like this. And (unfortunately) they are absolutely correct.

It is a pity though, because I suspect that the vast majority of the requests you got shouldn't have been directed at you but at others - ie. Linux distributions or specific projects (mesa/etc) - but there is no one to triage and direct support requests.


If you develop against WINE you'll get cross platform support on macOS, Linux and Windows.


Link to game


What's Photon?


I believe GP is talking about Proton.

It's a Linux/Windows compatibility layer from Steam. It's pretty great!

A lot of the incompatibility between Linux/Windows in my experience has actually been from the anti-cheat systems. Apex Legends and Intruder are examples that come to mind.

https://www.protondb.com/


Proton has been a game changer. Once they get the anti-cheat systems working (and they are making progress), Linux will be much more of a valid choice for gaming than it has ever been. I used to do what gaming I couldn't do without on a Windows laptop, but now I do it all on my main Linux desktop and haven't booted the laptop for months. Even some of the mod makers are making tools for Linux now. It's a completely different world than it was even a few years ago.

There are still games that have problems, but Valve and the Wine devs and others are knocking those down one by one. So those 15 people who wanted to switch to Linux but couldn't because it wouldn't run their games can now do so :)


Similar experience on my end. The only time I find myself booting into my Windows partition is when I play a game with Easy Anti-Cheat.


I have just given up on games with EAC. Much better than constantly switching back and forth, and they’re the exception rather than the rule now.


EAC seems to cause lots of problems even on Windows unless you have a clean install. The thing is very sensitive to any third-party programs installed, requires drivers to be up to date, breaks some programs by patching system APIs, etc.


I don't think AC vendors are at all inclined to support Linux. ACs are almost always written in a platform-dependent way, which overall hinders their chances of going to non-Windows systems.


Easy Anti-Cheat (probably the biggest name in anti-cheat right now, with games like Apex and Fortnite) actually does support Linux. The issue is that even with Linux-native anti-cheat, developers are not releasing Linux-native games. And getting Windows anti-cheat to work on Linux through Wine/Proton is harder because of the extra layers in between.


I think your mistake was failing to account for the cultural differences between Windows and Linux users. For most Linux users and developers, a bug report is a positive thing rather than just a cost. Linux users are used to being more involved in the bug reporting process, and that includes reporting issues early and with more detail - often even things they already figured out how to fix for themselves.

If you are being overwhelmed by bug reports from Linux users, the solution is to let those users triage, categorize and maybe even fix issues amongst themselves by having a publicly accessible tracker. Just like with forums for your game, you might even find people who will moderate those bug trackers for you. Valve realized this early in the Steam for Linux beta and has been using GitHub issues [0] for all of their Linux ports.

As for the differences between Linux distributions, I think the concerns are greatly overblown. The biggest difference between desktop Linux distributions boils down to the versions of the various libraries they ship. For most of those you don't need to care at all and should ship your own version (or use the Steam Linux Runtime [1]). Base system libraries (glibc, OpenGL, Vulkan, audio) that you can't ship (because they contain hardware-specific code that needs to get updates even after your game is EOL, or for other reasons) tend to provide strong backwards compatibility, so you only need to target an old enough version to cover all the Linux distributions you want to support. A complicated one is the C++ standard library, since some graphics drivers will depend on it - I recommend statically linking your own version (e.g. gcc's -static-libstdc++) and not exporting/importing any C++ symbols in your program.

I agree with others here that it is fine to only guarantee support for a limited set of Linux distributions (e.g. the current Ubuntu LTS). However, you should not consider reports from other distributions a nuisance but rather an early warning system or "linter" that lets you know about potential problems that users on your supported distribution (or even your users on other operating systems) may encounter in the future.

Next, you can have various windowing systems, window managers and audio systems (even on one distribution). Just ignore those: don't interface directly with Xlib or PulseAudio, but instead use a proven abstraction that takes care of the different quirks for you: SDL [2]. That is, assuming you are not already using an engine with mature Linux support. Even if there is a quirk not handled in SDL, your users are now empowered to debug SDL themselves and fix the issue there, benefitting everyone. SDL will also make it easier to support future systems: if you never talk to Xlib and GLX directly, SDL can give you Wayland support for free.
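To make that concrete, here is a minimal sketch of the SDL2 route in C - the game never needs to know whether it is on X11 or Wayland:

    #include <SDL.h>

    int main(void)
    {
        /* SDL picks the right video/audio backend at runtime */
        if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) != 0) {
            SDL_Log("init failed: %s", SDL_GetError());
            return 1;
        }
        SDL_Window *win = SDL_CreateWindow("Game",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720,
            SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        /* ... event loop and rendering go here ... */
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }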

Finally, you have drivers. This isn't really much different than under Windows. Like with Linux distributions, issues with one driver often point towards things that just happen to work correctly in another vendor's driver but could break in the future, so testing on more drivers is a good thing. Compared to Windows, however, there is one big advantage: with the exception of Nvidia (and the proprietary AMD driver, but no need to care about that one), you (and savvy users) have full source access to the drivers, which makes debugging some issues a lot more feasible. Further, they are developed in the open with a public bug tracker [3], which gives you direct access to the developers. You can even chat with them on IRC if you like - just make sure you are not wasting their time any more than you consider your users to be wasting yours by reporting bugs.

I have also seen many concerns in this thread that bad reviews from Linux users will tarnish a game's score. First, realize that Steam reviews are always relative to expectations. If you manage those expectations, you can limit negative reviews - that goes for Linux users just as for anyone else. But Linux users can also help your game by recommending it to others. While the same is true for Windows users, those initial Linux users are easier to reach because there is less market saturation (especially in some genres). The raw Linux sales percentage alone does not give you a full view of what sales you have gained by releasing a Linux port. To be fair, there will also be Linux customers who would have bought the Windows version; either way, percentage of sales and revenue impact is not a 1:1 relation.

In conclusion, I think the main problems Windows developers face when targeting Linux are not the technical issues (which of course exist) but the cultural differences. Once you overcome those and learn how the Linux ecosystem works, you can use it to your advantage.

[0] https://github.com/ValveSoftware/steam-for-linux/issues

[1] https://github.com/ValveSoftware/steam-runtime

[2] http://libsdl.org/

[3] https://gitlab.freedesktop.org/groups/mesa/-/issues


You should give the Linux version away for free, without support, and encourage Linux users to support you through donations.


Honestly, developers will probably have a better time maintaining Proton versions than writing native Linux ones. As long as your game has silver or higher on ProtonDB, you can have my money.


At a Steam Dev Days event I hit up a dev from a popular flight sim that ran on Win/Mac/Linux and asked him to rank the support costs for different platforms. I was surprised by his answer. He said:

Mac users were effectively the most expensive because his team was (then) spending a lot of time porting their graphics code to Metal.

Linux users were the least expensive because they tended to be sophisticated users who were accustomed to solving their own problems. He cited a particular customer who he said had a solid track record of finding graphical glitches in the game, then opening bugs against Intel GPU drivers and getting them fixed.

Windows users were somewhere in between.

Of course we didn't discuss the opportunity cost of supporting Linux (financially probably not worth it), and I'm not sure how much his view was a function of (maybe) not having to personally answer support requests, or whether his experience generalizes beyond his particular customer demographic, but I learned quite a bit from his response.

==========

If I ever ship my own game I hope to support Linux not because I think it's the right financial move, but because I think offering cross-platform compatibility is just part of being a good digital citizen. A lot of us lived through a time where Windows was about the only game in town, and I don't want to ever go back there. (Plus there's a selfish element: I develop on Linux, so I want to play on Linux!)


I shipped a tiny game on Steam developed entirely on GNU/Linux, with the Windows/MacOS/Linux builds all produced from within GNU/Linux without virtualizing the other OSes.

MinGW covered the Windows build, clang/osxcross the MacOS build, and plain old gcc the Linux build. It's all oldschool autotools+pkg-config dances for the cross-compilation. Plain C and SDL2+OpenGL under the hood, no engine.

It's nice being able to do it all from my preferred GNU/Linux environment, and I was able to at least smoke test the Windows builds successfully via WINE. The main shortcoming currently is that there's no MacOS WINE-equivalent mature enough to run a graphical GPU-accelerated video game, AFAIK.


That sounds pretty cool... as a budding game dev wanting to work along the same lines (no engine, minimal cruft), is there any chance you could post a link to your game? I'd love to know about your processes, dev approaches, etc.


Sure: https://store.steampowered.com/app/1056060/Eon/

It's nothing to write home about, as it was mostly just an experiment: to learn some OpenGL, to evaluate my ability to ship something GPU-accelerated for the big three desktop OSes built entirely from GNU/Linux, and to better understand the shortcomings of myself and my lone collaborator when it comes to creating games - all while gaining visibility into the Steam platform and how much exposure one could expect from simply shipping a title on the store without any advertising.

So it's not exactly a fun or good game... as that didn't even make it into the list of priorities. I just kept the scope very small to ensure it could be shipped as a side project with some semblance of polish.

I don't think anything I have to contribute on the subject of processes or approaches should be considered particularly valuable since it's not really a successful game by any relevant measure.

It also feels like desktop operating systems are becoming so hostile towards running arbitrary native programs that, unless you're shipping some AAA title pushing the hardware limits of performance, it might not make sense to bother shipping native executables anymore. For individual indie devs producing small titles, the web might make much more sense: webgl/wasm/webgpu avoids all this untrusted-executable friction, and modern computers are fast enough to make it work. It's unclear to me how this dovetails with distribution/discovery and earning money via established platforms like Steam, though - there's some dust that needs to settle here from what I can see.


Is Godot a good game engine to learn?

I am looking to make an RTS 2D game on a global map, similar to an old game called Red Storm Rising - https://www.myabandonware.com/media/screenshots/r/red-storm-...

Any suggestions?

I am a Java/C++ dev


I spent about 3 months full time with Godot and was enjoying it, but ran into some bugs that undermined my trust in the Godot developers to the point where I had to move away.

For example: if you hold a reference to an object in your script and the object is removed from the scene, the engine can reuse the address for a new object, leaving your script holding a reference to the wrong object.

I believe it's the object pooling system not talking to the scripts. This bug is a few years old and I believe it won't be fixed till v4.

Most people don't seem to encounter it, and I worked around it fine by just making sure I manually null out any references when they leave the scene.
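For anyone unfamiliar with this class of bug, here is its shape in C terms (a toy sketch, nothing to do with Godot's actual internals):

    #include <stdio.h>

    typedef struct { int id; } Obj;
    static Obj pool[1];                       /* toy one-slot object pool */

    Obj *spawn(int id) { pool[0].id = id; return &pool[0]; }
    void despawn(Obj *o) { (void)o; }         /* slot is merely marked free */

    int main(void)
    {
        Obj *held = spawn(1);     /* script keeps a reference        */
        despawn(held);            /* object leaves the scene...      */
        spawn(2);                 /* ...and the pool reuses the slot */
        printf("%d\n", held->id); /* prints 2 - a different object!  */
        return 0;
    }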

The real WTF, which made me finally decide this engine is not for me, is that they changed the behavior so that it won't happen in Debug but will still happen in Release. Different behavior between Debug and Release is an even bigger bug!

It's such a rare and sneaky bug, and when it starts happening in your release you can't even debug it!

Small update: I reported the issue on Github in September 2019. From reading the comments, it sounds like the inconsistent behavior was only present from version 3.2.2 to 3.2.3 (~6 months) and then fixed in 3.3. However, a user reports the issue still happening in 3.3 as recently as May of this year.


Link to the github issue?



I'd say yes!

I'm also a Java dev, and have dabbled in making simple "hello world" types examples for different game engines, and Godot was the first one that just clicked right away. Beyond that, I was able to stick to it, and was able to fully publish a game for the first time! (Puck Fantasy: https://www.lowkeyart.com/puckfantasy)

Setting expectations though: if you are expecting GDScript (the scripting language it uses) to be as full-featured as Java (or C++), you'll be left wanting. It took some getting used to - understanding the limitations of the language and adapting accordingly. After moving past that mental block, things have been even smoother. And if you really want it, there is C#/mono support, though I recommend doing your first project with GDScript, since it integrates very well with the editor and makes for a smooth learning experience.


I absolutely love Godot, and I've used them all. The only reason not to use Godot is if you're already entrenched in one of the other engines' ecosystems, like if you're working in a large team that already has processes around Unity's workflow. Some Unity features are more evolved, but if you're a novice game developer, these things won't matter. And IMO Godot is much much more intuitive to someone who is already a software developer.


If you want any pointers on RTS networking, the gold standard is this paper:

https://www.gamasutra.com/view/feature/131503/1500_archers_o...

I did a toy implementation here: http://github.com/eamonnmr/openlockstep
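The core idea from the paper, sketched in C (the helpers are hypothetical names; this is not code from either link):

    struct Cmd { int player, unit, order; };

    /* hypothetical helpers: */
    int  collect_commands_for_turn(int turn, struct Cmd *out); /* local + remote */
    void apply_command(const struct Cmd *c);
    void step_simulation(void);

    /* Only the tiny command stream crosses the wire, scheduled a couple of
       turns ahead; every peer runs the identical, deterministic simulation. */
    void run_turn(int turn)
    {
        struct Cmd cmds[64];
        int n = collect_commands_for_turn(turn, cmds);
        for (int i = 0; i < n; i++)
            apply_command(&cmds[i]);   /* same commands, same order, everywhere */
        step_simulation();             /* deterministic (e.g. fixed-point) math */
    }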

I haven't rebuilt that in Godot yet, but I will eventually. Godot's workflow is well worth using its bespoke language.


I found Godot + Mono/C# to be a great user experience compared to Unity.


Have you any experience using it with F#?


Godot? None. I think some people are using it with Unity, but tbh I hated Unity so much I just avoided it as much as I could.


Excellent read!


It's surprisingly solid - I've made a couple of weekend projects in it - but I see it being to Unity/Unreal what Blender is to Maya/ZBrush. I still wouldn't use Godot for anything I hope to make money from, or if you're looking to gain skills that will land you a job at a studio. If you're new to game dev as a whole, I'd recommend starting with Unity.


I mostly agree with your distinction, but there is a large gap between Unity and Unreal: Godot is GIMP, Unity is Paint.NET, Unreal is Photoshop.

I understand mid/big studios don't want to give away a percentage to Epic (above $1M revenue) - which translates to "learn Unity to land a job" - but for indies unlikely to reach a million in revenue, Unreal is basically free technology from the future.


My impression of Unreal is that the tech is awesome if you want to make a game with high graphical fidelity - Nanite looks crazy good, etc. But if you aren't trying to do that, and are making a 2D game or something lofi-3D like Minecraft graphics, I don't see what Unreal buys you over a different engine. If anything, the massive Unity asset store seems like a huge competitive advantage, and from my understanding Unreal doesn't really compare on that front.


If you're a competent C++ dev it might be easier to go with Unreal Engine, but Godot is a much nicer user experience than Unity (IMO) if you're looking to use C#. The price you'll pay is far far less instructional content as Godot's newer/not "enterprise" supported.


I would like to add the caveat that this assumes a Windows development platform; I would not recommend UE otherwise. It should also be mentioned that Godot is generally much simpler and easier to get something up and running with quickly. If graphical fidelity is important, then Godot is quite a ways behind both Unity and UE.

A final note: you can also fairly easily develop games for Godot with C++ using GDNative, though you might be better off using GDScript even though it means learning a new syntax.


I'm actually dev-ing a game for Windows, Mac, and Linux entirely in C++ and libsdl and it has been a GREAT experience so far!


Yes it is, but I suggest putting up with the pains of the 4.0 branch. It has fixes for so many old bugs in 3, and as of the last few months - thanks to the devs giving Linux lots more love lately - it is much more stable.


Yes it is pretty good for that type of game. People enjoy it, there are no royalties to worry about. You can use C# which will be familiar to you as a Java dev.


They don't have support for exporting to consoles AFAIK, but for a PC game it should be enough. I've tried some tutorials in the past with success and enjoyment, but I'm still in the process of learning gamedev myself.


I never thought I'd say this but... there's a good course for Godot on Udemy. I have no idea how it ended up there.


Which one?


Probably "Discovering Godot". The instructor, Yann Burrett, is fantastic. He runs his own training courseware company now, called Canopy Games.


There have been companies that ported the engine over and are available for contracting/consulting, for what it's worth: https://lonewolftechnology.com

Also, iirc Xbox support is available via UWP in the main branch.


I've found it to be really intuitive with a great community fwiw.


It's good but not quite as complete as Unity, so unless you care about using an Open Source engine you might check Unity instead.


Given the state of multiplayer in Unity and the needs of an RTS game I would advise otherwise.


There are solid third party networking solutions for Unity.


I installed Manjaro Linux as my primary OS (Windows on a secondary M.2) on the gaming PC I built earlier this year. I found the compatibility excellent, especially for the grand strategy titles I tend to play (Paradox, Total War, etc.). The only thing that pushed me to change the boot order back to Windows is the abysmal tab-out performance Linux has with fullscreen applications. Even playing Valheim, a relatively light game for a 3080, the system UI would glitch, move very slowly, and freeze frequently while tabbed out. If I understand correctly from my research, it's caused by the OS UI not being bundled into the kernel like it is in Windows.

Which is a shame. KDE's Plasma is an absolute shoo-in for me, coming from a Windows daily-driver place. I can't imagine any distribution being able to inject its UI into the kernel downstream from Linus. It sounds stupid to be so turned off for such a simple reason, but I often tab out of games to jump to Discord, Spotify, or a terminal, and it would easily be 10+ seconds of waiting for things to render.

Does anybody know what the future looks like for this problem? Will there be some reimagining of desktop rendering in the near future on Linux? Or maybe I misunderstand the problem?


Yes - on Linux each application has its own graphics buffer, including all of its window borders and controls. These are rendered to by GTK or Qt (likely using OpenGL). The OS just splats them all into the framebuffer and displays them. So in essence, a fullscreen window (such as the desktop background) can be nearly as GPU-intensive as a 2D game. Swapping which buffers are composited into the framebuffer can be time-consuming. However, I don't think that alone explains the poor alt-tab performance. I think it's due to modern desktop environments using lots of shaders and bitmapped graphics, which render fine when they're the only thing on-screen, but when a game starts eating up (and fragmenting) VRAM, this overhead becomes noticeable. Switching to a simpler desktop environment like i3 or dwm may help.


You could try Ctrl+Alt+F* to get another session/terminal. Or if you use Steam, use the Steam overlay browser.

Could also try another Window and/or display manager.

At home I have set up multiseat - multiple graphics (GFX)/sound cards and separate screens, keyboards and mice, connected to the same PC. The only problem is that one of the GFX cards goes to sleep when inactive for a few hours, so I have to restart the display manager to wake it up. Otherwise the performance is excellent: we play games simultaneously, watch Youtube or whatnot, without any performance issues, on a several-years-old PC.


I'll give some of these ideas a shot. I felt bad giving up on it so quickly. I guess sometimes habits need to be adjusted as well to make a move work. It would be a small price to pay for being able to love the OS I'm using. Thanks, z3t4.


I haven't used KDE in a couple of years, but what you describe sounds like what I encountered with its compositor. When launching fullscreen applications it turns off the compositor for better performance, but tabbing out doesn't always (or at least didn't for me) turn the compositor back on. Add to that weird, seemingly Nvidia-specific bugs (again, just my experience) and the whole desktop experience falls apart.


Thanks for the tip, cluoma, I looked into what you were describing and I think it may be that! I'll have to check out the quick-fix I found online (Alt+Shift+F12 to enable/disable the compositor) when I'm at my tower the next time. Much appreciated!


Well, you seem to assume that everyone has the same problems with Linux that you do. If that were the case, I don't think it would be very popular. Linux runs games better than Windows for many of us. Try PopOS, it may work better for you - but otherwise it's just your computer that runs poorly on Linux.


> Try PopOS, it may work better for you, but otherwise it's just your computer that runs poorly on Linux.

Wow, this feels like a blast from the past, the bad old days of people pushing Linux Desktop and blaming all issues on the user or hardware.

You are recommending an obscure distribution that is unlikely to be supported by anyone. You are claiming to know that their computer is somehow unable to run 'Linux' in such a way that... alt-tabbing takes a long time to re-render the system UI. You are entirely ignoring their own research into the topic with a 'works for me!'. You don't even have the courtesy to claim you've tried their exact use case; you just claim a generic 'it works better than Windows for many of us' (while ignoring that the Linux gaming community is two orders of magnitude smaller than the Windows one, typically because Linux and games are not good friends, for many reasons).


The pandemic made me appreciate Valve's original Linux games push from a few years ago, because when I switched I found myself enjoying the only-a-few-years-old games in my backlog that had native Linux versions.


Most indie game developers quickly find out that supporting Linux is not worth it. The market is tiny (typically 1%) and the support cost is higher than for the other 99% combined. Plus, some Linux customers will make it their mission to bad-mouth the game simply because it doesn't work correctly on their specific custom Linux flavour that literally nobody else has.


Title is slightly misleading; "Developing Games for Linux" would be less vague.

Cynically: is this simply a PR stunt by the Godot team?


> Denis: For my case, I’ve always been using Linux. It’s just my favorite operating system. I’m doing all my work in it.

Seems like "on Linux" is appropriate (Denis is the lead programmer.)


Back-button hijacking. How charming…

I can't wait for WebGPU and wasm to be more mature. You'll be able to truly write a game once and release it literally everywhere - like what Java was supposed to be. I think it's going to be a nice little golden age for games and other software. But the last time I checked, WebGPU wasn't ready yet.


I'm not familiar with the implementation of webgpu/wasm, but I suspect there's a lot of overhead to these abstractions. Maybe less than the overhead of other emulators and cross-platform solutions, but I don't expect it to replace native apps any time soon for resource-hungry games.


I'm actually much more optimistic about it. I've been getting into wgpu in the past few months and although you can target the web, you can also directly target each platform's native graphics API - Metal, DirectX, and Vulkan.

So the browser might have some unavoidable overhead, but for most games it probably won't even matter, and others using the wgpu API can target native desktop platforms.

A voxel game, Veloren, recently migrated to wgpu and wrote up some of their experiences here:

https://veloren.net/devblog-125/


It matters a lot, because in a native game you can easily check whether 3D acceleration is present - not so with Web 3D APIs.

So a game that runs without problems on your development machine can have all sorts of issues in the client's browser that you cannot work around like you can on native, because they are caused by the browser blacklisting the user's hardware or drivers.


Can you elaborate on this? Is it different from "detecting if WebGL/WebGPU is supported"? Because I'm pretty sure you can detect that, according to all the answers on this thread:

https://stackoverflow.com/questions/11871077/proper-way-to-d...


"It is supported" doesn't mean you get what you are expecting: if anything is blacklisted, you will get a "supported" version with software rendering instead.

Naturally, random Joe User has no idea what is going on and will dismiss your game as crap because it is running at e.g. 3 FPS.

And even if everything is supported in hardware, WebGL 2 is an OpenGL ES 3.0 subset (defined in 2011), so no matter how good the GPU is, there is only so much you can do.


I see what you're saying, is this to prevent browser fingerprinting?

Maybe the browser versions won't be too great, but I'm more optimistic about the WebGPU libraries which allow you to target the desktop.


I really doubt that the mature implementations of wgpu will not allow an app to check for hardware acceleration.


Doubt as much as you want; it is a security issue for browser vendors to expose that information - that is why they blacklist in the first place.


Browser vendors want people to be able to experience rich 3D-accelerated content in their browsers, and they will be aware of issues like the one you mention. The idea that it won't be addressed is incoherent... how could it be a security risk for the client to do its own assessment of whether the available hardware is suitable for 3D content and then set a flag accordingly?


10 years of WebGL experience in the field prove otherwise.

There are browser flags to disable blacklisting, which regular Joe/Jane has no idea even exist.

Besides, you can head off to webgpu.io and follow up on meeting minutes.

Even better, attend the upcoming WebGL/WebGPU meeting from Khronos (registration is currently open) and pose that question there, if you prefer to hear the same from the browser vendors themselves.


I don't know why you keep saying "it's not possible" when I literally wrote code that does it: https://github.com/magcius/noclip.website/blob/cc98112d7b105...


That code is not foolproof for blacklisted drivers across ANY browser.


Ha ha you’re stupid!


If you feel that way about me, great.

Have a nice joyful day.


It looks like this code detects whether or not the client is making WebGL2 available, not the presence or amount of hardware acceleration that the client is giving you access to via WebGL2.


But in that case WebGPU is basically a waste of time... why would they bother creating it if it won't end up being of any use? Couldn't you assess performance with a dummy load anyway?


Apps that go beyond the compute budget of the wasm/webgpu layer will be native, and everything else will not, because nobody is going to be able to turn their nose up at the incredible advantage of launching everywhere at once with a single binary - especially after the brand of "new web apps" gains momentum and becomes part of the lexicon.

The overhead is low and modern computers are very fast. The budget will be very healthy indeed.


Doesn't WebGPU impose a new, unique shading language? That's gonna be even harder to support than OpenGL on Linux.


I think that most developers use game engines, and game engines probably will implement it.


It doesn't need to be supported by the operating system... as long as the operating system supports Vulkan, the browser will use Vulkan primitives to implement whatever shading solution is used in WebGPU. Unless I am mistaken?



