But that was five years ago. I'm pretty sure Proton supports my Windows builds on Linux better than I was ever able to do with the native executables, and at least there's that.
Then the Linux community gets another game. Does it not work on KISS Linux running sowm as the window manager with an entire custom userspace? Probably not. Does it work on the latest Ubuntu version? Probably.
But the notion that developers have to support every possible Linux configuration out there just seems toxic to the Linux game development effort as a whole.
As long as you're nice about it and don't turn it into a bland cookie-cutter "corporate" response, most people will understand.
As someone who just runs Linux, and occasionally runs some games on it, I'm always a bit annoyed when people report these kinds of very specific bugs with old/weird drivers/distros that aren't in the supported platforms list. It turns developers off for completely understandable reasons. It's a shame, because for most people it does usually work.
I run Void Linux, and it almost always works, but I'd never report these kinds of bugs without testing Ubuntu (or whatever is supported) first.
If I sell on Windows (I do) and Mac (I do) then I have to support a certain range of OS versions and ongoing OS releases - even if that means (for example) I have to figure out how to 'notarise' a Mac executable so that a user doesn't have a big scary Security Warning pop-up. Not ideal, but fine. The challenge with Linux is that I would have to communicate against expectations - that I would have to make it clear that when I 'support Linux' it looks different to the support for Windows or Mac. I do genuinely think that 99% of Linux people get this, it's just the 1% that's maybe less forgiving of different standards.
For me, just personally and selfishly, passing the buck to Proton or Wine is an easier sell for my business.
When your game runs under the Steam runtime, the real distribution is (almost) irrelevant - everything in your address space is supplied by the runtime. The things you get from the host system are the kernel/kernel modules and the services you talk to via IPC (i.e. X11/Wayland, PulseAudio).
It solves the problem of what version of what library is installed on the host system (if it's installed at all - maybe the user removed it as "bloat"). You get a known set of binaries that you can test against - a coherent SDK target, like on Windows or Mac.
Whether this matters is up to the developer. But it's a potential downside.
GOG is quite content with just Ubuntu being supported; they are not that different.
Not sure whether that is the only reason some games are not on GOG though; often Mac ports are missing too. It seems more like missing rights to the ports than technical reasons.
I don't know; probably not. But this was the response I got back from the devs of "Expeditions: Conquistador" when I asked if they could release the Linux version on GOG (when I bought it originally I still had a Windows machine).
Valve should just partner with Canonical and release Ubuntu support.
Let the other distros figure out how to get it working there.
Most hardware and software vendors primarily target it.
It might not be hip, but it's what everyone knows.
People would start pretending to be Ubuntu to install games.
In the Linux community, at least unless a project has a toxic developer (there are a few), bug reports are Always Good. They're how we make the software WE use better on the systems that WE use it on. Even if a report isn't fully actionable (e.g. it's a problem with graphics drivers), the report is often helpful because the bug tracker is probably public and we can try to find workarounds, or at least flag the issue for others.
For closed source commercial software, especially cases where a tiny number of developers are working on the code, bug reports are Always Bad. They represent more work, work that you don't want to have to do, because at the end of the day these are people who already bought the game. You've gotten as much out of them as you're going to get out of them. If they're more trouble than they're worth (someone else in this thread claimed 90% of bug reports out of 1% of purchases), then it's obvious you should just ignore them or not port your game to their platform at all. You'd think this attitude would be different for issues that affect a lot of people: a good bug report can help you fix widespread problems that are hurting your players, but actually even this is rare. See the story of this guy fixing a bug causing 6 minute startup times that affected at least thousands of people using reverse engineering, when the developer ignored the problem for years: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
So I think you're right, these are mostly people enthusiastic about a piece of computer software instinctively trying to collaboratively improve it for everyone. But because development is so limited (there's only one person reading bug reports and working on the code), those reports are experienced as frustrating rather than helpful. Worse still, because the software is commercial there may be an unspoken feeling that support is owed for the software because the user paid for it.
This is such a bad attitude. The game is supposed to work correctly without any bugs. People who paid money for the game deserve continued support. It doesn't matter how much time and money the developers have to spend, that's their problem.
If the software is defective, consumers should be entitled to a refund. That ought to motivate companies not to release shoddy work.
This is not real-world software engineering :)
Pretty much any software has bugs; perhaps surprisingly to non-programmers, games are especially complex (architecturally, above all).
In the real world, one can realistically talk about, let's say, an acceptable threshold of bugs.
> People who paid money for the game deserve continued support
And this is not real-world (game) business. Whether one likes it or not, there is a per-unit profit, and the corresponding value in terms of support is very limited.
An ideal solution to this is open sourcing games after a certain time (Id Software used to do it), but this is not realistic. I wish it, though!
> That ought to motivate companies not to release shoddy work
One can't really force a company not to do shoddy work. The gaming market is a radically free one, unlike other constrained markets, like internet providers. Customers are actually entitled to have the money refunded, at least on Steam. Gaming journalism actually has been including bugginess in games evaluation for a while, so buyers can decide in an informed fashion.
Though for an indie game it probably isn't crazy to open source the code and then put those users to work for you. That can really help reduce the burden. But of course it opens you up to people stealing your software (which, let's be real, happens anyway).
Most people are very helpful and quite understanding that, as a sole indie developer, it would be hard to support all the configurations. But occasionally I get angry emails and negative reviews about the game not running on Linux.
Given the sales (Linux is 1% of the total sales, Mac is 3%), I would say for an indie developer, it makes more sense to put Linux support on a low priority. It is unfortunate for Linux gaming community but it is what it is.
Also, even though Proton has come a long way and has become relatively stable, occasionally there are strange issues (like Steam Cloud sync failing, etc.) here and there. But overall the effort is much lower compared to maintaining a separate Linux build.
Right now the flow for the user is 1. See store page 2. Buy 3. Play 4. Hit bug.
This is the moment when they find out that they bought a game that was not in fact supported. That is super frustrating (and possibly legally requires a fix or a refund). If there was a 1.5 step of "This game offers no support for Linux" or "This game offers no support for any distribution except Ubuntu 21.04" then it is much more acceptable, because I accept that detail before purchasing.
https://steamcommunity.com/app/378720/discussions/0/49012573... is their explanation, https://store.steampowered.com/app/378720/Thea_The_Awakening... the store page. Great game btw.
Last I checked, the limits were "2 hours of playtime" and "2 weeks after purchase" - you have to be under both.
> Steam is quite generous about refunding games, either because you purchased them accidentally or they didn't run correctly or any other reason,
Again, last I read, Steam is quite generous but will probably flag your account if certain patterns emerge (probably through some ML-alchemy).
You can have Linux builds available via Steam without listing Linux support on the store.
Maybe I'm in the minority, but I've never bought a game because it could theoretically run under Wine, while I've bought quite a few games that run on Linux. Very rarely do I buy a game that does not natively run on Linux.
Wine seems like kind of an ugly hack even in the best-case scenario. You don't know what performance is going to be like with your hardware, you usually don't have anyone who's tested the program on your hardware to make sure it runs correctly and doesn't crash, and obviously games are extremely sensitive to these kinds of hardware-dependent things. Installing Wine can be gross too: you have to install a bunch of 32-bit libraries on an otherwise clean 64-bit system.
I'd say the appeal of making a native Linux game is that it gets you access to the market of people who will buy a native Linux game. It's true that you already had the market of people who are technically running Linux and playing Windows games via Wine, but presumably there are many people who aren't going to go to that much trouble. (Obviously, there are many cases where supporting Linux isn't financially viable anyway, Wine or native.)
At this point I just buy windows games, and if they don't work under wine, I run them in a VM.
KDE doesn't even work on the latest Ubuntu version - on my hardware, at least.
I have an Ubuntu system with an Intel graphics card, and my system won't boot into KDE unless I remove the old Intel driver that it defaults to, so that it then tries the new Intel driver which actually works. Gnome, Enlightenment, and whatever Ubuntu defaults to, they all work fine regardless, but KDE doesn't.
Note that this is after removing the 'Intel' driver for xorg, which had tons of screen tearing issues, in favor of the apparently "correct" modesetting driver, which is the preferred option for newish Intel cards. Except that at some point I was installing something else which explicitly depended on the Intel driver package...
And then we all ended up working from home and now I don't have to deal with any of that crap. Now I just use xorgxrdp from my Windows machine and it just works.
I can't imagine the gigantic hassle that would be trying to game on Linux in that kind of environment, where the "supported" and "default" options are the wrong choice and you have to manually uninstall things if you want stuff to work properly. No thanks. I love Linux (way, WAY more than Windows), but I'm not going to waste time and energy trying to play games on it.
I do it literally every single day.
The last point of failure is glibc (the GNU C library). A Linux application, even if statically linked against all its other dependencies or packaged with all its shared libraries, may fail to run on another distribution if that distribution ships an older glibc than the one the executable was linked against. Therefore, to ensure the application works on the broadest range of distributions, it is necessary to link against the oldest feasible glibc - in practice, by building the application on the oldest LTS (Long Term Support) release of the supported distribution.
If one does not have enough resources to support Linux natively, an alternative approach - assuming OpenGL is used - is to build the executable for Windows and Wine using the MinGW compiler, either on Windows or by cross-compiling on Linux with dockcross (https://github.com/dockcross/dockcross). At least with Wine, one will not have to deal with glibc compatibility issues.
Everyone decided this so we got 3 different "standard" formats... (Snap, Flatpak, and AppImage). See: that XKCD about standards.
And Windows has .msi, whatever is used to install Windows Store packages, NSIS, Inno Setup...
It's a way overblown complaint. I use AppImages built on CentOS 7 for my own stuff and have never heard of anyone having issues with them.
On Windows it is easier to build self-contained applications, as there is no RPATH issue. All one needs to do to build a self-contained application that ships easily and works everywhere is put the shared libraries in the same directory as the executable and create a zip archive or an MSI installer. When applications are installed, they go into a single folder; files are not scattered across the file system - binaries in /usr/bin, /usr/local/bin, /lib, /usr/share, ... - as on Linux and other Unices.
That's like complaining that, say, a program built against Cygwin on Windows can't work on a system without the Cygwin DLL (and all its dependencies also compiled against the Cygwin DLL).
Flatpak attempts to solve the dependency-hell problem by providing a standard runtime - a set of libraries (glibc, GTK or Qt) at fixed versions - and requiring the developer to link the application against that runtime, which avoids binary compatibility problems and dependency hell. The trouble with Flatpak is desktop integration.
AppImage attempts to solve dependency hell by bundling everything into a single launcher executable with a SquashFS file-system image as payload. The disadvantage of this approach is that it is limited to a single executable. AppImages are also not free from glibc compatibility issues.
A macOS-like app bundle, and a change in Linux development culture toward building and designing applications as self-contained from inception, would strengthen the Linux desktop and reduce the application-packaging work duplication that still affects Linux distributions.
By using musl it is possible to build fully statically linked applications that work everywhere; however, the dlopen()/dlclose() calls used on Unix-like systems to load shared libraries and plugins at runtime do not work in a static musl binary.
The bigger problem is that glibc cannot be statically linked properly. It dlopen's some libraries, like for NSS. Maybe games could avoid triggering those cases but musl is definitely better.
While syscalls themselves are backward-compatible (even for faulty behavior), I think I've read somewhere that some parts of DRI are not, but I've lost the source on that.
Alternatively, if more games were free (as in freedom) software, a large community could better sort out some of these issues.
Edit: Though the game would have to have pretty good replay value, I'm not installing another OS for a 10 hour game.
Pretty much all games I play are through Steam’s compatibility layer anyway, and nowadays it’s a very smooth experience.
On which distro and graphics hardware? ;)
I’m not a game dev so I don’t know how much you still actually notice of the distro / hardware once you’re running in Proton or Wine.
This was very noticeable, even as just a user. Where on the pages of games you liked you'd routinely discover smaller titles that looked really cool under the "more like this" section, it's now the exact same set of games recommended for most of them, and it's really, really annoying.
The indie games I've played all seem to have little news posts about 1 million copies sold. So... what's the return on an indie game of 2012 indie game quality?
The major issue is OpenGL drivers; they can be a pain on Linux (especially proprietary ones like NVIDIA's).
If both monitors have similar configurations (DPI, refresh rate) it is fine (Xrandr fixed many of the issues X previously had). If the monitors have different configurations it isn't, but this problem will show up on the desktop too, and I am sure nobody will report a problem they have on the desktop as a "game bug".
> full-screen programs
X doesn't have a concept of full-screen, but anyway, it mostly works fine if you grab the root window (of course you shouldn't write that code manually; your game engine will probably do it for you automatically).
> low-latency input
X latency is fine for most games, and in many cases you will not use X to handle input anyway. Also, if your game really needs low latency (only a minority of genres do), you can bypass X.
Anyway, I was not referring to specific X or PulseAudio problems, which you will have whether you put things in a container or not. I am just saying that even if the Steam runtime doesn't bundle those libraries, it is not much of a problem, since those APIs are stable.
I just had to fix a crash because my distro's Love2D uses LuaJIT which only supports Lua 5.1, but the game's source contains a bit that requires Lua 5.4. But it was an easy patch (which unfortunately cannot be upstreamed because upstream doesn't want PRs).
For other games, as long as the game provides an Ubuntu version, it'd work for me. I run an Ubuntu Docker container for Steam and other "first-party software" (binary packages directly from the software manufacturer as opposed to distro repos), because when such software says "it supports Linux" it almost always means "it supports Ubuntu".
With Proton you can publish a game and deny all responsibility for Linux support: "Sorry, we don't support Linux, but we hear it runs great on Proton!"
I'm also deeply curious as to exactly how many indie game developers are writing code that interfaces directly with these low level systems and graphics APIs. In my experience, building cross platform games (I've shipped from Unity, Unreal, Godot, and XNA/MonoGame) is trivial and the framework handles 100% of the complexity of porting. From the sounds of this comment thread, everybody is writing their game in raw shader language and then having to port that to Vulkan or OpenGL.
Now, supporting Linux via Proton, that's where Proton can kill native Linux builds. But I'm not sure how common that is, or will be.
The usual response here (from any vendor, not just an independent game developer) is to say you only support the latest LTS version of Ubuntu/SteamOS with the officially supported drivers there and that's it. You're absolutely right to do that. If you want obscure distros to be able to run your program, you can open source it and let them deal with the packaging/testing/maintenance. The fact that all the OS packages are open source is the only reason all the random distros are even able to exist, so you're already making it difficult for them when you don't do this. No reason to dance around that.
You could, but clearly this developer didn't. You're responding to someone's real life experience with a hypothetical.
Except DRM. Attach DRM or anti-cheat to your project, software that actively doesn't want to run on anything but a specific OS, and the linux community will turn on you.
Ultimately the best way to have fair games is to promote finding players through avenues other than official matchmaking: friends or even just random people on something like discord.
Completely agree with you. The current online gaming model where people play with untrustworthy strangers is stupid and broken. We should be playing with friends. Instead we get invasive rootkits installed in our computers and they don't even fully prevent cheating.
Disclaimer: I'm not a game dev and don't really know what I'm talking about.
If you say this publicly then angry anime avatars will yell at you on Twitter.
If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned. But I also doubt the answer will please you, because the reality is that the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.
Yes, that's the point of an insult.
> Nobody wants to maintain legacy software for free in their spare time. That's all it is.
This is 100% false; a lot of people do. In fact, a language (Free Pascal) and framework (LCL) I am using have a very good track record of preserving backwards compatibility while continuously improving. I have code I wrote two decades ago that works fine with them and will automatically get the newly introduced features just with a recompile.
The same can't be said for, e.g., GTK: Gtk2 apps not only won't get any new features from Gtk3, they won't even compile. Same with Gtk4, because making the mistake twice wasn't enough.
> If you're willing to listen, I could describe to you the technical reasons why Xorg was abandoned.
There are no technical reasons: Xorg is code, and code can be modified. It is political reasons at best, and people wanting to rewrite stuff they'd rather not bother learning about. As JWZ writes on his CADT page:
<<Fixing bugs isn't fun; going through the bug list isn't fun; but rewriting everything from scratch is fun (because "this time it will be done right", ha ha) and so that's what happens, over and over again.>>
> the reason it's perceived as being "stable" is because it's not being improved anymore -- if people were still hacking on Xorg instead of Wayland, then your Xorg would be breaking left and right too.
Xorg improved all the time over the years going back to the XFree86 days, adding new features consistently without breaking existing code and applications. If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.
That's great that people are doing that with FPC, but if they're continually adding new features and removing deprecated things then that's not legacy software. To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here. If it were easy or profitable to do that in GTK, somebody would have done it already. Half the reason things changed is because the entire underlying stack changed along with the hardware -- this is not even remotely comparable to something like a self contained compiler for a programming language.
If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome, and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years. From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions. This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.
You're wrong that there are no technical reasons; I assure you the technical reasons are real. Again, I can tell you if you're willing to listen, but if you're going to blanket-deny they exist, we can't really have a conversation, so I won't bother typing it out. Let me know if you change your mind. In context your JWZ quote doesn't make any sense either, because Xorg got bug fixes for a very long time. It's being moved away from because it's no longer effective to keep doing that, which is the opposite of what that quote suggests. Please don't let rude and dismissive quotes like that be the guiding line of your discourse; let's actually discuss the real issues.
>If it suddenly started breaking now it wouldn't be because it is impossible to not break but because the developers somehow started breaking it.
This is the root of the misunderstanding -- there is no significant difference between these two. It's at the point where it needs a major refactor or rewrite to make continued work on it worth it, which is going to break things, and at that point writing a new display server makes a whole lot more sense.
They are not removing deprecated things; whenever possible the old things are still around and call the new things. At most they move some stuff to another unit (like a C include or Java import), which is a search-and-replace in the codebase that takes literally seconds. This happens extremely rarely though; I have non-trivial code that compiles with both a 14-year-old release and the SVN checkout (which they also try to keep working).
> To illustrate further what I mean, probably none of those people can be convinced to work on other old stuff like GTK1 or GTK2, because you're really comparing apples and oranges here.
Lazarus' current main backend for Linux is GTK2, exactly because the GTK developers broke backwards compatibility with GTK3. The GTK3 backend is close to completion though - just in time for GTK4 to break things again!
There is also a GTK1 backend - it was broken a couple of years ago until someone noticed and fixed it. These are not high priority backends, but they keep them in working condition.
Personally, I have contributed to the GTK2 backend - since, so far, GTK2 has the best user experience of all the toolkits available on Linux (IMO, of course) - by fixing alpha-channel support. Since Lazarus has a policy of trying not to introduce unnecessary dependencies, I went the extra mile to ensure that it works even with very old versions of the library.
> If it were easy or profitable to do that in GTK, somebody would have done it already.
That is the point: it isn't easy or profitable. But something being neither easy nor profitable doesn't make it wrong. After all, the entire CADT thing is about focusing on the easy stuff because that is fun.
JWZ's page is very short and amusing to read; I recommend reading it.
> If you disagree, I'd love to hear your proposal on how to keep all the various API changes working in the same codebase without causing it to become overly complex and burdensome
By causing it to become "overly complex and burdensome". To spare 2-3 developers the hard work, this approach pushes that hard work onto 20000-3000000 developers.
> and this is probably over the span of at least 20 system libraries that have all deprecated and/or removed various things over the last 30 years.
Which they shouldn't have done.
> From my view, part of the problem here is that there were some legitimately bad decisions made back then, that looked reasonable at the time but turned out to be not so great, and nobody really wants to keep paying for those decisions.
But they should, or at least they should wrap these APIs so that they call new stuff and said new stuff should try - now with the benefit of hindsight - avoid being designed in a way that they'll be so easily broken (which will also help with the maintenance of the wrappers).
> This is not similar to something like a Pascal implementation where they could just aim for compatibility with an existing compiler from the 1970s and then build on that, these were entirely new APIs at the time and they didn't have their designs fully fleshed out, and in some ways they still don't, because the problem space is still somewhat open-ended.
Free Pascal has a ton of burden from design decisions they made in the 80s, 90s, etc including keeping source code compatibility with Delphi and all the boneheaded decisions Borland/Inprise/Embarcadero/CodeGear/whatever did. But also they keep compatibility with standard Pascal, Mac Pascal, Turbo Pascal (which is different from Delphi) and a bunch of other dialects and even specialized dialects like Objective Pascal (for Objective-C interop). They do that by allowing the source code files to switch dialect with dedicated compiler switches and even enable/disable parts.
Yes, this adds a TON of overhead and burden on the compiler writers' side but everyone involved agrees it is a good thing to avoid breaking others' code.
I have a feeling you are greatly underestimating the combined effort that went on FPC and LCL.
> In context your JWZ quote doesn't any make sense either, because Xorg got bug fixes for a very long time.
I was explicit in my original message that Xorg is among the projects that are actually stable so, yes, CADT does not apply to Xorg.
> there is no significant difference between these two.
Of course there is.
> It's at the point where it needs a major refactor or rewrite to make continued work on it worth it
Key words: "worth it". Worth it to whom? People who want to play on shiny toys?
> which is going to break things
They may introduce bugs during the refactor, but as long as these are acknowledged as bugs and get fixed, there wouldn't be a problem.
The problem is when they break things intentionally. THOSE breaks are avoidable. When I can run an X server on Win32 - which has a completely different API and display model, and where the X server barely has any control over the underlying window system - it is absolutely inexcusable to have incompatibility issues in an environment where the X server has complete control over the display, input, etc.
> and at that point writing a new display server makes a whole lot more sense.
Only if you see broken glass from a bull in a glass shop as unavoidable, without questioning why the bull was there in the first place.
EDIT: also, in the other message you mentioned that people do not want to maintain legacy software in their spare time. While that isn't correct - a lot of people do - it is also true that a lot of people don't, because it can be a lot of work. THIS MAKES IT EVEN MORE IMPORTANT FOR SOFTWARE DEPENDENCIES NOT TO BREAK, so the little time people can afford to put into their software isn't wasted keeping up with all the breakage their dependencies have introduced just so they can do the same thing in a different way.
To use GTK as an example again: Gtk1 and Gtk4 fundamentally provide the same functionality - sure, Gtk4 has a few more widgets and some fancy CSS support, but fundamentally it is all about placing buttons, labels, input boxes, etc. on windows and reacting to events. Yet people who wrote Gtk1 code had to waste time updating it to Gtk2, then waste time again updating it to Gtk3, and will have to waste time yet again updating it to Gtk4 - all so that they can have buttons and labels and input boxes on windows that people can click to do stuff.
That is a MUCH worse waste of time because all that time these developers spent to keep up with the breakage could have been spent instead on working on the actual functionality their programs provide.
Instead they not only have to waste time in keeping up with Gtk just so they can do the same stuff, but chances are that due to these changes they are introducing new bugs in their programs.
See XFCE as an example. Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).
That's great that they have the bandwidth to do that, I commend them for it, but other projects don't have the time to maintain and work around deprecated APIs forever.
>Lazarus' currently main backend for Linux is GTK2 exactly because the Gtk developers broke backwards compatibility with GTK3.
If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper. It would be mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it, since they would rather focus on helping people port their apps to the new way. But this would only work for some things; other things simply can't be provided with any amount of backwards compatibility.
>That is the point, it isn't easy or profitable. But something being not easy nor profitable doesn't make it wrong. After all the entire CADT thing is about focusing on the easy stuff because that is fun.
If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them; those projects don't really care about the toolkit, they just want to have some kind of GUI quickly so they can then focus on the rest of their program. At least that's been my experience with them. I sympathize with that, but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.
>JWZ's page is very short and amusing to read, i recommend reading it.
I read that page more than a decade ago; as I've said, I think it's condescending flame bait that serves to distract from the real technical issues. And it's ableist towards people who actually suffer from attention deficit disorders. If you want to help fix these issues, please don't refer to it.
>By causing it to become "overly complex and burdensome". To avoid the hard work on 2-3 developers this approach pushes that hard work to 20000-3000000 developers.
I'm sorry I really don't understand what you're saying here. The 20000-300000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?
>Which they shouldn't have done.
I would urge you to try to maintain all those system libraries for a few years, and then revisit this statement and see how you feel about it after that.
>But they should, or at least they should wrap these APIs so that they call new stuff
Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.
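As a toy illustration of such a separately shipped wrapper (in Python, with invented names - this is not any real toolkit's API): the shim just re-exposes the old "v1" entry points on top of the incompatible "v2" ones, and can live in its own repo.

```python
# Hypothetical sketch: a standalone compatibility shim exposing an old
# "v1" API on top of a library's incompatible "v2" API.
# All names (WidgetV2, widget_create_v1) are invented for illustration.

class WidgetV2:
    """The new API: construction and labelling are separate steps."""
    def __init__(self):
        self.label = None

    def set_label(self, text):
        self.label = text

def widget_create_v1(label):
    """v1 had a single constructor taking the label; the shim recreates it
    by composing the v2 calls, so old code keeps compiling/running."""
    w = WidgetV2()
    w.set_label(label)
    return w
```

Old callers keep calling `widget_create_v1("Quit")` unchanged, while the underlying library is free to evolve its v2 API.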
>I have a feeling you are greatly underestimating the combined effort that went on FPC and LCL.
Not quite: my point is to illustrate that the same amount of work needs to be done in other projects if you want that level of backwards compatibility.
>Key words: "worth it". Worth it to whom? People who want to play on shiny toys?
If you want to describe new features, improved performance, security fixes, etc. as "shiny toys" then yes, I guess you could say that. I'm not sure what the distinction is, because earlier you said you wanted these shiny new features?
>The problem is if they break things intentionally. THOSE are unavoidable.
That's also the point I'm getting at: Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it. Any rootless X server (such as the one you used on Windows) will also cause some apps to not work in subtle ways, compatibility is not perfect there either, and Xwayland is basically built with the same design constraints.
>Or even GIMP, which took ages to switch to GTK3 (again, just in time so they can now waste even more time to switch to GTK4).
Depending on your project, porting to GTK4 won't be a waste of time. The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays. But this is not something that can be provided by a wrapper, to get the major benefits out of it, the apps have to rewrite their widgets to use the scene graph instead of using old-style immediate mode drawing. There would be little benefit if you didn't do that and continued to use GTK2-style drawing. For me at least, that's why I think it's mostly a bad idea to try to make a complete compatibility layer. Maybe it would work for some widgets but apps really need to do a real port if they want the major benefits.
That is the thing: Lazarus and the LCL are almost entirely made by volunteer developers working in their free time, and yet they manage to not break things, unlike other projects that have corporate backing and full-time developers.
It isn't a matter of bandwidth, it is a matter of caring about the work and time other people have spent on their platform.
> If they want to help avoid this in the future for other programs, I would urge them to try to write some kind of compatibility wrapper.
That would be pointless; LCL itself is already a compatibility layer for GUI applications (LCL is primarily a GUI toolkit) and Gtk2 is just one of several backends. If they had to write Gtk3 support, they might as well do the Gtk3 backend anyway (which is what they did: Gtk3 work is already in progress, it just isn't as stable as the Gtk2 backend).
My point was that they wouldn't have to waste time on a separate Gtk3 backend if Gtk3 hadn't broken backwards compatibility. Instead they could simply add support for the new stuff Gtk3 introduced and use their limited time on more important things.
> It would be mostly the same amount of work as doing it upstream, and upstream seems to have no interest in doing it since they would rather focus on helping people get their apps ported to the new way.
If upstream hadn't broken backwards compatibility with Gtk3 they wouldn't have to focus on that either, and everyone would be spending their development time on what their applications are actually about instead of keeping up with their dependencies' breakage.
> But this would only work for some things, other things simply can't be provided with any amount of backwards compatibility.
Which only happens because the upstream developers broke backwards compatibility.
> If you take that approach, you really could say the same thing about these other projects that don't want to upgrade their apps to GTK3/4, etc. Of course they won't do it because it's not fun for them, those projects don't really care about the toolkit, *they just want to have some kind of GUI quickly so they can then focus on the rest of their program*.
But that is exactly the issue here: applications aren't using Gtk (or whatever) because they love Gtk itself as an entity, they use it because Gtk provides something - a GUI library - that they want so they won't have to make their own and can instead focus on the stuff that actually matters: their application's functionality. It makes absolutely perfect sense that they won't want to waste time (especially if they are not working on their application full time) keeping up with Gtk's breakage.
Libraries in general are a means to an end, not the end in themselves.
Having a library stop being compatible with its previous versions means that a developer has to stop working on their application (the stuff that matters) to waste time on something they originally picked up in order to save time - so it makes sense to try and avoid that.
> At least that's been my experience with them anyway, I sympathize with that but it also conflicts with the need to make changes in the toolkit. So eventually somebody has to compromise somewhere.
Changes can be made in the toolkit without breaking existing applications. They may not look as pretty as if you break things and rebuild them, but at the same time you keep existing code working, existing applications running, existing knowledge valid and help make a more reliable platform for both developers (who can rely on your platform to help them instead of wasting their time) and users (who can rely on your platform to have their applications working even if the developers abandon the applications).
It is even good for keeping open source applications alive - it makes it easier for new developers to pick up some abandoned code and help keep it working. As an example, some years ago I got MidasWWW working:
...the codebase of which was at the time almost 25 years old. Yet because Motif didn't change its API, I barely had to touch the UI. It took me an hour or two (I do not remember exactly, it wasn't much) to get it working, and the vast majority of the changes were to some old C-isms and 32-bit assumptions that modern GCC on a 64-bit machine complained about. In fact the only UI-related changes I had to make were because the Motif version the browser was written for had some incompatible changes from the "base" Motif at the time (i.e. it wasn't exactly Motif's fault but that of whoever distributed their modified version).
> I'm sorry I really don't understand what you're saying here. The 20000-300000 developers should easily be able to join together and use their numbers to come up with a solution that is much better for them, no?
No, because they all work on different projects.
What I mean is simple: when a library that breaks backwards compatibility (Gtk is a popular example that causes a ton of applications to waste time keeping up with it, but it is certainly not the only case - SDL1.2 to SDL2 was another, though AFAIK there is now a drop-in SDL1.2 replacement wrapper that calls SDL2) makes a breaking change because one or two of its developers wanted to make their life a bit easier, that change has a ripple effect on every single project that relies on it and on all the developers who work on those projects. One breaking change in one popular library, even if made with good intentions by one developer, can force thousands of other developers on thousands of other projects to deal with it - and people do not work synchronously, so this can take a long time.
> Somebody interested in this can just build this wrapper separately, there's no reason it needs to live in the same repo as the new version.
There are two reasons: 1. to dissuade the main developers from breaking things because, as you imply, it'd be additional work and they'd "feel" the effect of their breakage and 2. to make it much easier to keep up with any changes, if necessary.
What you describe is really having others run behind upstream developers to clean up their breakage so that the upstream developers won't have to care about breaking stuff, whereas what I describe is upstream developers not breaking stuff in the first place.
> If you want to describe new features, improved performance, security fixes, etc, as "shiny new toys" then yes, I guess you could say that. I'm not sure what the distinction here is because before you said you wanted these shiny new features?
You do not need to break backwards compatibility to provide those though.
> Xorg was at a point where they were going to have to break things intentionally, because some of those APIs are actively causing security issues and cannot be fixed without unavoidable breakage. The apps have to move to a new API if they want this to be fixed, there is no way around it.
Xorg's security issues have been greatly overstated. The server already has functionality to deny access to untrusted clients (i.e. make them behave as if they are the only application running), so you could apply that to applications you do not want to trust (or that request not to be trusted, e.g. browsers). But even beyond that there are other ways to improve its security - down to the sledgehammer-like approach of running separate nested instances. All the effort that went into reimplementing a display server from scratch with Wayland (making all sorts of new mistakes along the way) could have gone towards improving Xorg instead, without breaking an already tiny desktop ecosystem.
> The rendering model has changed entirely and is now mostly hardware accelerated, so you may see major performance improvements on e.g. high DPI displays.
HiDPI shouldn't matter much for performance unless the previous implementation was done on the CPU, but that has nothing to do with the rendering model.
Immediate-mode graphics APIs can still be batched, and in fact this is what most OpenGL implementations did for years: when you request to draw a triangle, the implementation won't draw that triangle immediately, but keeps the request around in case more triangles and other commands come later. As long as what you "perceive" is the same, it doesn't matter whether an immediate-mode call is performed immediately or kept for later use. This can be an issue with single-buffered output, so you'd still need a path for truly immediate output, but most applications do double-buffered output anyway. And at the end of the day you still perform draw calls even with a scene graph - you just have better ways of batching them.
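A toy sketch of that idea (in Python, with illustrative names only - not any real toolkit's API): the "immediate" calls merely record commands, and a single flush happens at present time, just like a double-buffered swap.

```python
# Sketch: an immediate-mode drawing API that batches internally.
# Each draw call only records a command; nothing is submitted until
# present(), so the API stays "immediate" while issuing one batched flush.

class BatchedCanvas:
    def __init__(self):
        self.pending = []   # recorded draw commands for the current frame
        self.flushes = 0    # how many real submissions have happened

    def draw_triangle(self, a, b, c):
        # Immediate-mode call from the caller's point of view:
        # it only records the request, it does not draw anything yet.
        self.pending.append(("tri", a, b, c))

    def present(self):
        # One submission for the whole frame; returns how many
        # recorded commands were flushed in this batch.
        self.flushes += 1
        submitted = len(self.pending)
        self.pending.clear()
        return submitted
```

Three `draw_triangle` calls followed by one `present()` thus turn into a single flush of three commands, which is the perceptually identical batched behavior described above.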
Note that I'm not against the scene graph approach (though it does need to have some "way around"), I'm just saying that you can optimize immediate-mode APIs a lot while preserving them.
But there is also another way: have both. Widgets can opt in to the new approach if they need the extra performance (after all, not everything will), which allows applications to keep working while converting to the new approach piecemeal. Old widgets simply get a "canvas" scene graph node created for them, where they can draw using the old API, while new/converted widgets use the pure scene graph.
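A minimal sketch of that hybrid (again Python, invented names): legacy widgets are hosted behind a canvas node wrapping an old-style paint callback, converted widgets are native scene-graph primitives, and the renderer treats both uniformly.

```python
# Sketch of the "have both" idea: a retained scene graph whose nodes are
# either native primitives or legacy "canvas" nodes. Illustrative only.

class RectNode:
    """A native scene-graph primitive (a converted widget would use these)."""
    def __init__(self, w, h):
        self.w, self.h = w, h

    def render(self, out):
        out.append(f"rect {self.w}x{self.h}")

class CanvasNode:
    """Hosts an unconverted widget's immediate-mode paint function."""
    def __init__(self, paint_fn):
        self.paint_fn = paint_fn

    def render(self, out):
        # The legacy callback paints into a surface owned by this node.
        out.append(self.paint_fn())

def render_tree(nodes):
    """The renderer walks the graph without caring which kind each node is."""
    out = []
    for n in nodes:
        n.render(out)
    return out
```

An application could then migrate one widget at a time: replace a `CanvasNode` with native nodes whenever a widget's drawing becomes a bottleneck, and leave the rest alone.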
Yes, this can introduce issues, but again, bugs can be fixed. And for a library with the popularity of Gtk (note that I do not refer specifically to Gtk here; it'd be the same for Qt or any other UI library that wanted to switch from immediate mode to a scene graph without breaking compatibility) it won't be hard to find programs to test this against.
There are ways to solve this, and other things, if the developers care about not breaking backwards compatibility.
If you need other support libraries that don't interface with the system just ship them yourself. Or let Valve do that for you with the Steam Linux Runtime.
Shipping libraries only helps if whatever those libraries rely on is also stable and the functionality they provide is still available. After all, a library might keep a stable ABI and yet all it can tell you is that OSS is not supported - which, while technically correct and won't cause the program to crash (assuming it can handle not having sound), isn't very useful in practice.
Nowadays I don't care, Windows, macOS and mobile OSes FTW.
If by "developers" you mean the ones working on the Unity engine or Nvidia's proprietary graphics drivers then you're right, but in my experience there are a number of problems and pitfalls further down the stack that game developers can't reasonably be expected to mitigate.
The only problem I ever had was in Wasteland 2 where the second part of the game there was some bug with the fog on the world map with Intel drivers. Setting some obscure environment variable fixed that.
There is a 60-80 FPS difference for me in CS:GO between Linux and Windows with AMD graphics.
That's not how tearing works.
If you want to sell your game, the smart money is in putting all your resources into the Windows version.
The problem was that Kylix required a very specific kernel version (I think it also required some kernel module), so it mostly did not work out of the box and people got discouraged.
The fact that Borland failed to market it correctly and never put more effort into making the tool better is another story. Those were the times of Borland's identity change from RAD tool vendor to the super-enterprise corpo Inprise, with some crazily expensive ALM tools competing with the likes of Telelogic DOORS (now an IBM brand), etc.
I'm curious as to what kind of game engine you're using where targeting Linux isn't as simple as choosing it in a dropdown menu as well, most modern engines support that very well.
I think it's unreasonable for any software developer to release a product and expect no bug reports to come back at them, but it still doesn't mean they have to tackle everything.
Why not just tell them that? Is it really better to give up those dollars because someone is using a setup you don't support?
However, those dollars you seem anxious about me giving up still didn't really cover the time investment of dealing with Linux - not just the support requests, but setting up the build environment, performing the testing, and all of that.
For reference, the game in question did ~75% of units sold on Windows, ~25% on Mac, and some fraction of a percent on Linux. If I hadn't released on Ubuntu I would probably have lost less than $1000, gross.
It simply isn't possible that there exists a technophile out there patient enough to set up such a non-Ubuntu rig, yet cave-dwelling enough not to thank their deity for the simple fact that any graphics-hungry software turns out to run at all without crashing.
Barring a copy of the original email and video testimony from the sender, it's more reasonable to believe this was someone trolling you (or perhaps even a team of someones if you received more than one such email).
Maybe actually clarifying where the community lives, like WineHQ and ProtonDB do for running Windows games on Linux, would be a good start to help reduce devs having to deal with this sort of thing.
The native Linux version of War Thunder crashes on launch for me, but the Windows version through Proton runs perfectly.
It is a pity though, because I suspect that the vast majority of the requests you got shouldn't have been directed at you but at others - i.e. Linux distributions or specific projects (Mesa etc.) - but there is no one to triage and redirect support requests.
It's a Linux/Windows compatibility layer from Steam. It's pretty great!
A lot of the incompatability between Linux/Windows in my experience has actually been from the Anti-Cheat systems. Apex Legends and Intruder being examples that come to mind.
There are still games that have problems, but Valve and the wine devs and others are knocking those down one by one. So those 15 people that wanted to switch to linux but couldn't because it couldn't run their games can now do so :)
If you are being overwhelmed by bug reports from Linux users, the solution is to let those users triage, categorize and maybe even fix issues amongst themselves by having a publicly accessible tracker. Just like with forums for your game, you might even find people who will moderate those bug trackers for you. Valve realized this early in the Steam for Linux beta and has been using GitHub issues for all of their Linux ports.
As for the differences between Linux distributions, I think the concerns are greatly overblown. The biggest difference between desktop Linux distributions boils down to the versions of the various libraries they ship. For most of those you don't need to care at all and should ship your own version (or use the Steam Linux Runtime). Base system libraries (glibc, OpenGL, Vulkan, audio) that you can't ship (because they contain hardware-specific code that needs to get updates even after your game is EOL, or for other reasons) tend to provide strong backwards compatibility, so you only need to target an old enough version to cover all the Linux distributions you want to support. A complicated one is the C++ standard library, since some graphics drivers will depend on it - I recommend statically linking your own version and not exporting/importing any C++ symbols in your program.
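As a sketch of that last point: the flags below are real GCC/binutils options, while the file names are placeholders. This statically links libstdc++/libgcc and keeps C++ symbols out of your dynamic symbol table, so the system libstdc++ pulled in by a graphics driver cannot clash with yours.

```shell
# Compile with hidden symbol visibility so no C++ symbols are exported:
g++ -c -fvisibility=hidden -fvisibility-inlines-hidden game.cpp -o game.o

# Link libstdc++/libgcc statically and keep archive symbols private:
g++ -o mygame game.o -static-libstdc++ -static-libgcc -Wl,--exclude-libs,ALL
```

You can sanity-check the result with `objdump -T mygame`: no libstdc++ symbols should appear among the dynamic imports/exports.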
I agree with others here that it is fine to only guarantee support for a limited set of Linux distributions (e.g. current Ubuntu LTS). However you should not consider reports from other distributions as a nuisance but rather as an early warning system or "linter" that will let you know about potential problems that users on your supported distribution (or even your users on other operating systems) may encounter in the future.
Next you can have various windowing systems, window managers and audio systems (even on one Distribution). Just ignore those: Don't interface directly with Xlib or pulseaudio but instead use a proven abstraction that takes care of the different quirks for you: SDL. That is, assuming you are not already using an engine with mature Linux support. Even if there is a quirk not handled in SDL, your users now are empowered to debug SDL themselves and fix the issue there, benefitting everyone. SDL will also make it easier to support future systems: if you never talk to Xlib and GLX directly, SDL can give you Wayland support for free.
Finally you have drivers. This isn't really much different than under Windows. Like with Linux distributions, issues with one driver often point towards things that just happen to work correctly in another vendor's driver but could break in the future. Having testing on more drivers is a good thing. Compared to Windows, however, there is one big advantage: with the exception of Nvidia (and the proprietary AMD driver, but no need to care about that one) you (and savvy users) have full source access to the drivers, which makes debugging some issues a lot more feasible. Beyond that, they are also developed in the open with a public bug tracker, which gives you direct access to the developers. You can even chat with them on IRC if you like - just make sure that you are not wasting their time any more than you consider your users to be wasting your time by reporting bugs.
I have also seen many concerns in this thread that bad reviews from Linux users will tarnish their score. First, realize that Steam reviews are always relative compared to expectations. If you manage those expectations, you can limit negative reviews - that goes for Linux users just as for anyone else. But Linux users can also help your game by recommending it to others. While the same is true for Windows users, those initial Linux users are easier to reach because there is less market saturation (especially in some genres). Using just the raw Linux sales percentage does not necessarily give you a full view of what sales you have gained by releasing a Linux port. To be fair, there will also be Linux customers that would have bought the Windows version, but either way % sales and revenue impact is not a 1:1 relation.
In conclusion, I think the main problem Windows developers face when targeting Linux is not technical issues (though there are some, of course) but cultural differences. Once you overcome those and learn how the Linux ecosystem works, you can use it to your advantage.
Mac users were effectively the most expensive because his team was (then) spending a lot of time porting their graphics code to Metal.
Linux users were the least expensive because they tended to be sophisticated users who were accustomed to solving their own problems. He cited a particular customer who he said had a solid track record of finding graphical glitches in the game, then opening bugs against Intel GPU drivers and getting them fixed.
Windows users were somewhere in between.
Of course we didn't discuss the opportunity cost of supporting Linux (financially probably not worth it), I'm not sure how much his view was a function of (maybe) not having to personally answer support requests, or whether his experience could be generalized beyond his particular customer demographic, but I learned quite a bit from his response.
If I ever ship my own game I hope to support Linux not because I think it's the right financial move, but because I think offering cross-platform compatibility is just part of being a good digital citizen. A lot of us lived through a time where Windows was about the only game in town, and I don't want to ever go back there. (Plus there's a selfish element: I develop on Linux, so I want to play on Linux!)
MinGW covered the Windows build, clang/osxcross the macOS build, and plain old GCC the Linux build. It's all old-school autotools+pkg-config dances for the cross-compilation. Plain C and SDL2+OpenGL under the hood, no engine.
It's nice being able to do it all from my preferred GNU/Linux environment, and I was able to at least smoke-test the Windows builds via WINE. The main shortcoming currently is that there's no macOS equivalent of WINE mature enough to run a graphical GPU-accelerated video game, AFAIK.
It's nothing to write home about as it was mostly just an experiment to learn some OpenGL, evaluate my ability to ship something GPU-accelerated for the big three desktop OSes built entirely from GNU/Linux, better understand the shortcomings of myself and my lone collaborator when it comes to creating games, all while gaining visibility into the Steam platform and how much exposure one could expect from simply shipping a title on their store without any advertising.
So it's not exactly a fun or good game... as that didn't even make it into the list of priorities. I just kept the scope very small to ensure it could be shipped as a side project with some semblance of polish.
I don't think anything I have to contribute on the subject of processes or approaches should be considered particularly valuable since it's not really a successful game by any relevant measure.
It also feels like desktop operating systems are becoming so hostile towards running arbitrary native programs that unless you're shipping some AAA title pushing all the hardware limits of performance, it might not make sense to bother shipping native executables anymore. For individual indie devs producing small titles, the web might make much more sense, webgl/wasm/webgpu avoids all this untrusted executable friction and modern computers are fast enough to make it work. It's unclear to me how this dovetails with distribution/discovery and earning money via established platforms like Steam though, there's some dust that needs to settle here from what I can see.
I am looking to make an RTS 2D game on a global map, similar to an old game called Red Storm Rising - https://www.myabandonware.com/media/screenshots/r/red-storm-...
I am a Java/C++ dev
For example: if you hold a reference to an object in your script, and the object is removed from the scene, the engine can reuse the address for a new object, leaving your script holding a reference to the wrong object.
I believe it's the object pooling system not talking to the scripts. This bug is a few years old and I believe it won't be fixed till v4.
Most people don't seem to encounter it, and I worked around it fine by just making sure I manually null out any references when they leave the scene.
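A toy reconstruction of that class of bug (in Python rather than GDScript, with invented names - this is not Godot's actual pooling code): the pool hands the same underlying object out twice, so a stale reference silently observes the new occupant. That's why manually nulling out references on scene exit works around it.

```python
# Sketch of a pooled-object reuse bug: a released slot is recycled,
# so a reference held from before the release points at a new
# logical object. Names and structure are purely illustrative.

class Pool:
    def __init__(self):
        self.free = []          # released objects awaiting reuse

    def acquire(self, name):
        if self.free:
            obj = self.free.pop()   # recycle: same object, new identity
            obj["name"] = name
            return obj
        return {"name": name}

    def release(self, obj):
        self.free.append(obj)       # nothing invalidates outside refs

pool = Pool()
enemy = pool.acquire("enemy")    # a script keeps this reference
pool.release(enemy)              # the enemy leaves the scene
pickup = pool.acquire("pickup")  # the slot is recycled for a new object

# The stale reference now observes the new object's data:
stale_name = enemy["name"]       # "pickup", not "enemy"
```

The workaround from the comment above corresponds to setting `enemy = None` at release time, so the script can never dereference the recycled slot.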
The real WTF which made me finally say this engine is not for me is that they changed the behavior so that it won't happen in Debug, but will still happen in Release. Different behavior for Debug and Release is an even bigger bug!
It's such a rare and sneaky bug, and when it starts happening in your release build you can't even debug it!
Small update: I reported the issue on GitHub in September 2019. From reading the comments it sounds like the inconsistent behavior was only present from version 3.2.2 to 3.2.3 (~6 months), then fixed in 3.3. However, a user reports that the issue is still happening in 3.3 as recently as May this year.
I'm also a Java dev, and have dabbled in making simple "hello world" types examples for different game engines, and Godot was the first one that just clicked right away. Beyond that, I was able to stick to it, and was able to fully publish a game for the first time! (Puck Fantasy: https://www.lowkeyart.com/puckfantasy)
Setting expectations though, if you are expecting GDScript (the scripting language it uses) to be as full featured as Java (or C++), you'll be left wanting. It took some getting used to, to understand the limitations of the language, and adapt accordingly. After moving forward from that mental block, things have been even smoother. And if you really want it, there is C#/mono support, though I recommend your first project to be with GDScript, since it integrates very well with the editor, and creates a smooth learning experience.
I did a toy implementation here: http://github.com/eamonnmr/openlockstep
I haven't rebuilt that in Godot yet, but I will eventually. Godot's workflow is well worth using its bespoke language.
I understand that mid/big studios don't want to give away a percentage to Epic (over $1M in revenue) - which translates to "learn Unity to land a job" - but for indies unlikely to reach $1 million in revenue, Unreal is basically free technology from the future.
A final note is that you can also fairly easily develop games for godot with C++ using gdnative, though you might be better off using gdscript, even though it means learning a new syntax.
Also, iirc Xbox support is available via UWP in the main branch.
Which is a shame. KDE's Plasma is an absolute shoo-in for me, coming from a Windows daily-driver background. I can't imagine any distribution being able to inject its UI into the kernel downstream from Linus. It sounds stupid to be so turned off for such a simple reason, but I often would tab out of games to jump to Discord, Spotify, or a terminal, and it would easily be 10+ seconds of waiting for things to render.
Does anybody know what the future looks like for this problem? Will there be some reimagining of desktop rendering in the near future on Linux? Or maybe I misunderstand the problem?
Could also try another window manager and/or display manager.
At home I have set up multiseat, meaning multiple graphics (GFX)/sound cards and separate screens, keyboards and mice connected to the same PC. The only problem is that one of the GFX cards goes to sleep when inactive for a few hours, so I have to restart the display manager to wake it up; otherwise the performance is excellent - we play games simultaneously, watch YouTube or whatnot without any performance issues, on a several-years-old PC.
Wow, this feels like a blast from the past, the bad old days of people pushing Linux Desktop and blaming all issues on the user or hardware.
You are recommending an obscure distribution that is unlikely to be supported by anyone. You are claiming to know that their computer is somehow unable to run 'Linux' in such a way that... Alt tabbing takes a long time to re-render the system UI. You are entirely ignoring their own research into the topic with a 'works for me!'. You don't even have the courtesy to claim that you've tried their exact use case to say that it works for you, you just claim a generic 'it works better than Windows for many of us' (while of course ignoring that the Linux gaming community is two orders of magnitude smaller than the Windows one, typically because Linux and Games are not good friends, for many reasons).
Cynically: is this simply a PR stunt by Godot team?
Seems like "on Linux" is appropriate (Denis is the lead programmer.)
I can’t wait for webgpu and wasm to be more mature. You’ll be able to truly write a game once and release it literally everywhere. Like what Java was supposed to be. I think it’s going to be a nice little golden age for games and other software. But the last time I checked webgpu wasn’t ready yet.
So the browser might have some unavoidable overhead, but for most games it probably won't even matter, and others using the wgpu API can target native desktop platforms.
A voxel game, Veloren, recently migrated to wgpu and wrote up some of their experiences here:
So the game that runs without problems on your development machine, can have all sorts of issues on the client browser that you cannot work around like on native, because they are caused by how the browser is blacklisting the user's hardware or drivers.
Naturally, random Joe User has no idea what is going on, and will dismiss your game as crap because it is running at e.g. 3 FPS.
Then even if everything is supported in hardware, you only get an OpenGL ES 3.0 subset (defined in 2011), so no matter how good the GPU is, there is only so much you can do.
Maybe the browser versions won't be too great, but I'm more optimistic about the WebGPU libraries which allow you to target the desktop.
There are browser flags to disable blacklisting, which is something regular Joe/Jane has no idea even exists.
Besides, you can head off to webgpu.io and follow up on meeting minutes.
Even better, attend the upcoming WebGL/WebGPU meeting from Khronos (registration currently open) and pose that question if you prefer to hear the same from the browser vendors themselves.
Have a nice joyful day.
The overhead is low and modern computers are very fast. The budget will be very healthy indeed.