People have tried to provide a packaging format that would allow apps to declare their dependencies in a distribution-neutral manner[1]. It was, uh, not a huge success.
Let's take an extreme example. If I build an app on Ubuntu and then try to run it against the system libraries on Alpine, it'll fail, because Alpine is built against a different libc. We can simply declare Alpine out of scope and only support glibc based distributions, but if we want good cross-distribution support we're still going to be limited to what's shipping in the oldest supported version of RHEL. So let's skip that problem by declaring LTS distros out of scope and only target things shipped in the last 3 years - and now apps can't target any functionality newer than 3 years old, or alternatively have to declare a complicated support matrix of distributions that they'll work with, which kind of misses the point of portability.
In an ideal world distributions would provide a consistent runtime that had all the functionality apps needed, but we've spent the past 20 years failing to do that and there's no reason to believe we're suddenly going to get better at it now. The Flatpak approach of shipping runtimes isn't aesthetically pleasing, but it solves the problem in a way that's realistically achievable rather than one that's technically plausible but socially utterly impossible.
Flatpak is a pragmatic solution for an imperfect world - just like most good engineering is.
Edit to add: one of the other complexities is that dependencies aren't as easy to express as you'd like. You can't just declare a dependency on a library SONAME - the binary may rely on symbols that were introduced in later minor versions. But you then have to take into account that a distribution may have backported something that added that symbol to an older version, and the logical conclusion is that you have to expose every symbol you require in the dependencies and then have the distribution resolve those into appropriate binary package dependencies, and that metadata simply doesn't exist in every distribution.
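As a concrete illustration (my own hypothetical example, not something from the spec): even a trivial program can pick up a per-symbol glibc version requirement that a plain SONAME dependency can't express.

    /* Sketch: built on a distro with glibc >= 2.30, this binary's only
     * SONAME dependency is libc.so.6, but it also requires the versioned
     * symbol gettid@GLIBC_2.30 and will refuse to start on older glibc
     * builds even though the SONAME matches.
     *
     *   gcc -o demo demo.c
     *   objdump -T demo | grep GLIBC   # lists the per-symbol version requirements
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* gettid() is a glibc wrapper that only appeared in glibc 2.30 */
        printf("thread id: %d\n", (int)gettid());
        return 0;
    }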
AppFS [0] solves this by just providing every file any package needs, including libc.
AppFS is a lazy-fetching, self-certifying system with end-to-end signature checks, and packages are distributed over HTTP (including from static servers), so it's federated.
Since all packages are just a collection of files, AppFS focuses on the files within a package. This means that if you're on Alpine Linux and want to run ALPINE (the mail client) from Ubuntu (assuming Ubuntu used AppFS), then it would use Ubuntu's libc. This is pretty similar to static linking, except with some of the benefits of dynamic linking.
CERN has a similar but more limited system called CernVM-FS [1]. AppFS isn't based on it; I only learned about CernVM-FS after writing AppFS.
AppFS is based (in spirit, not in code) on 0install's LazyFS. Around the turn of the century I was trying to write a Linux distribution that ONLY used 0install, but due to limitations it wasn't feasible.
This is possible to do with AppFS, and it's how the Docker container "rkeene/appfs" works.
> but if we want good cross-distribution support we're still going to be limited to what's shipping in the oldest supported version of RHEL.
Why should that requirement be considered so extreme when, in the Windows world, applications are often required to work as far back as Windows 7 (or were until a year or two ago)?
I'm trying to scope the "Just use system libraries" approach into one that's realistically achievable. I agree that this shouldn't be an extreme requirement, but it turns out that most app authors aren't terribly interested in restricting themselves to the libraries shipped in RHEL 7, so.
On Windows, .dll files are automatically searched in quite a few places, including current directory, directory where .exe was launched from, and PATH environment variable. Meaning it is far easier for apps to ship private libraries.
Plus, when Linux apps try to ship private libraries, as the official Chrome packages do, that draws quite a bit of backlash from distro maintainers.
And on Windows the default way to bundle an application is "make a separate directory for that application, put everything you need inside it" which is kinda equivalent to Linux "/opt" packages, IIRC? Anyway, that neatly combines with the lookup rules for .dlls (they're first searched next to the executable file itself) so that shipping mostly self-contained applications is sorta easy: the applications by default use their packaged libraries, and if those are missing, the system libraries or libraries from PATH are used.
> On Windows, .dll files are automatically searched in quite a few places, including current directory, directory where .exe was launched from, and PATH environment variable. Meaning it is far easier for apps to ship private libraries.
You can get the same behavior on Linux by linking with -Wl,-rpath,\$ORIGIN (minus the PATH env var, use LD_LIBRARY_PATH for that).
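A minimal sketch of that approach (the bundled zlib and the directory layout are made up for illustration):

    /* Hypothetical layout for an app that ships its own zlib:
     *   myapp/bin/app         <- this program
     *   myapp/lib/libz.so.1   <- private copy of zlib shipped with the app
     *
     * Build (quote $ORIGIN so the shell doesn't expand it):
     *   gcc -o bin/app app.c -lz -Wl,-rpath,'$ORIGIN/../lib'
     *
     * At runtime the dynamic linker expands $ORIGIN to the directory
     * containing the executable (bin/), so ../lib is searched before the
     * default system directories and the bundled libz.so.1 is used; if
     * it's missing, the usual search path applies.
     */
    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        printf("using zlib %s\n", zlibVersion());
        return 0;
    }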
That was a very good summary of where flatpak (and snap, et al.) come from, thank you. Not a particularly convincing conclusion, though.
The same problem that flatpak is solving (shipping binaries that work across many distros) already has at least two solutions: static binaries (where possible; golang is great here), or shell wrappers that point the dynamic linker at a private /lib directory.
Among the three solutions here, flatpak is the most complex, and the least compatible with what an advanced user might do: run stuff in private containers, with a different init, etc.
> or shell wrappers that point the dynamic linker at a private /lib directory.
You haven't needed shell wrappers to do that for a long time; just link with -Wl,-rpath,\$ORIGIN/whatever/relative/path/you/want, where $ORIGIN at the start resolves to the directory containing the binary at runtime.
Of course, other things like selecting between different binaries based on architecture or operating system still require a shell script.
If reducing duplication is a goal, static linking or shipping private copies of libraries works against that. Building against standardised runtimes works much better in that respect.
> People have tried to provide a packaging format that would allow apps to declare their dependencies in a distribution-neutral manner[1]. It was, uh, not a huge success.
> and now apps can't target any functionality newer than 3 years old
You can optionally support newer functionality with a single binary by dynamically loading libraries and resolving functions at runtime using dlsym or API-specific mechanisms (e.g. glXGetProcAddress).
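Roughly this pattern (a sketch of the general idea; getrandom is just an arbitrary example of a symbol that only exists in newer glibc, not something discussed above):

    /* Build: gcc -o demo demo.c -ldl   (-ldl is unneeded on glibc >= 2.34 but harmless) */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* getrandom() was added in glibc 2.25; declare the signature by hand so the
     * binary has no hard link-time dependency on it. */
    typedef ssize_t (*getrandom_fn)(void *buf, size_t buflen, unsigned int flags);

    int main(void) {
        /* RTLD_DEFAULT searches the libraries already loaded into the process,
         * so the lookup happens against whatever libc we're running on. */
        getrandom_fn my_getrandom = (getrandom_fn)dlsym(RTLD_DEFAULT, "getrandom");

        unsigned char buf[16];
        if (my_getrandom && my_getrandom(buf, sizeof buf, 0) == (ssize_t)sizeof buf) {
            printf("used getrandom() from the running libc\n");
        } else {
            printf("getrandom() not available, falling back to /dev/urandom\n");
            /* ... read from /dev/urandom instead ... */
        }
        return 0;
    }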
Sure, you could dlopen() different SONAMEs until you find one that works, and now you just have to somehow deal with the structs having changed between versions and the prototypes of functions not being compatible. So yes, it's technically possible, but nobody is going to bother.
The idea isn't to open dynamic libraries randomly, but to open libraries you explicitly know provide the newer-than-3-years functionality you want.
It isn't some fantastic never-seen-before concept; it's how applications on Windows can use new APIs from Windows 11 while still running on Windows XP, or how OpenGL programs can use APIs from OpenGL 4.6 while still being able to run on drivers that only expose OpenGL 3.1.
OpenGL worked on Linux last time I checked (a few minutes ago).
Linux isn't some special case that makes this impossible; the only reason for this not to work is libraries themselves not making it possible. But the blame lies with the libraries, not with Linux.
Like I said, it's technically possible and that is (outside a very small number of well-defined outliers like OpenGL) entirely irrelevant in terms of whether it's practically possible. Even if we rewrote every library now to have mechanisms to make this easier, it wouldn't help for any of the older versions that don't do this and which are already deployed everywhere.
Of course it is practically possible, as long as the developers care about backwards ABI compatibility - the issue isn't whether it is possible (it certainly is, as actual existing APIs and libraries show); the issue is library developers breaking their libraries' ABIs.
But that doesn't mean it isn't possible to do something like this; it means that you have to stick to libraries that do not break their ABIs. And this has absolutely nothing to do with Linux, which you brought up previously - everything I mentioned works on Linux and on any other OS that supports dynamic linking and has ABI backwards compatibility for the applications that run on top of it.
RHEL 8 doesn't ship with GTK 4. How do I ship an application that uses functionality that only exists in GTK 4 if available, but falls back to GTK 3 if it isn't? If libraries had been written with this in mind (like OpenGL was), then yes, there'd be a path to doing so. Libraries on Linux could work in the way you suggest. But, for the most part, they don't.
Well yes, you basically arrive at what I've been writing about so far: the issue is with libraries like GTK 4 that break ABI backwards compatibility. It isn't an issue with Linux - Linux doesn't break ABI backwards compatibility. If GTK 4 hadn't broken its ABI, you could use the GTK 3 API as a baseline for your application and dynamically load the new GTK 4 stuff (and as a bonus you'd get any inherent improvements in GTK 4 that are exposed through the GTK 3 API, like how, e.g., applications written for WinXP get the emoji input popup on Win 10 even though that didn't exist during WinXP's time).
The REAL problem is libraries breaking their ABIs, not Linux itself.
If you're defining Linux as a kernel, then yes, the kernel does not impose any constraints on userland that would make this impossible - and I never said it did. If you're defining Linux as a complete OS, then the fact that it's conceptually possible for libraries to behave this way is irrelevant; they could, but they don't, and any solution for distributing apps needs to deal with that reality rather than just asserting that everything else should be rewritten first before anyone can do anything.
> Let's take an extreme example. If I build an app on Ubuntu and then try to run it against the system libraries on Alpine, it'll fail, because Alpine is built against a different libc.
I mean, on Windows, if I use a given libc, say msvcrt or ucrt (or heck, newlib with Cygwin), I have to ship it with my app anyway. Linux makes that harder, but in practice there's no way around this.
[1] http://refspecs.linux-foundation.org/LSB_4.1.0/LSB-Core-gene...