I owned a Librem 5 phone and used it as my daily driver for a few months. During that time I installed all of my apps exclusively as Flatpaks. After installing about 15-20 apps, the 32 GB of internal storage was completely full and my phone stopped working. I had no idea how huge Flatpak apps are.
Even worse, I could not figure out which apps to delete. All the space was taken up by excess "runtimes". I don't know or care what a runtime is, but I never intentionally installed them. Why are there five different Freedesktop and GNOME runtimes installed? Which app do I delete to get rid of them?
I need to be able to easily answer two questions: "Which app(s) do I delete to free up space?" and "How much space will be freed by deleting app X?"
On Flatpak, answering those questions required dark magic beyond my understanding.
This is especially relevant on devices with limited storage, like the Steam Deck.
Keeping dependencies with applications does have a lot of advantages, especially for applications that need incompatible versions of the same libraries. But having every application carry its own full set of dependencies seems quite wasteful.
The problem is that system libraries DO solve a problem that Docker recreates: the kernel can map a shared library into a virtual address range once and then share that mapping across all of the tasks using the same library, and the disk footprint is minimal because there are only a few copies of the same lib on disk. For some pieces like glibc, that's a huge savings system-wide.
With container images you're bundling your app with exactly the libraries it needs at the exact versions you want. This means the kernel is loading all of these auxiliary pieces for you as distinct copies that have to reside in their own mappings, so you get no savings there (it's actually slower in terms of program startup), and you're also wasting tons of disk space on all of these duplicate dependencies lying around.
Really, the idea of duplicating an entire OS tree for containers is just a bad idea; it leads to lots of super-vulnerable images and destroys the whole concept of sharing a system base for performance and storage wins.
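A quick way to see the sharing in question on a Linux machine (illustrative only; the exact library path and addresses differ per system):

```shell
#!/bin/sh
# Linux-only illustration: every dynamically linked process maps the one
# on-disk copy of libc into its address space, and /proc/<pid>/maps shows
# the file-backed mapping. grep is itself dynamically linked on most
# distros, so here it reads its own map table.
grep -m1 libc /proc/self/maps \
    || echo "(no libc mapping: grep may be statically linked on this system)"
```

Every process linked against that libc maps the same file, so its pages are shared in RAM and stored once on disk; a per-container copy gets neither benefit.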
Containers have never been about system efficiency, but rather admin/developer efficiency - lower-skilled staff can pump out a working product more cheaply.
Need to run 2 separate apps that both need to use the same userid/filesystem path/network listener port? No problems.
Need to have different library versions? No worries.
Sure, you can work around all this stuff with LD_PRELOAD and other tricks, but that requires more skill than copying a base Dockerfile from a Google search and putting in a bunch of RUN commands.
There should certainly be more tooling around finding these things out, but one thing worth noting is that you can `flatpak uninstall --unused` to remove old, unused runtimes (the runtime environments for apps).
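For what it's worth, recent flatpak releases can at least show per-ref sizes, which helps with the "what do I delete to free space" question. A sketch (output columns vary a bit by flatpak version; guarded so it's harmless on machines without flatpak):

```shell
#!/bin/sh
# Sketch: list installed apps/runtimes with their sizes, then drop
# runtimes that no installed app references anymore.
if command -v flatpak >/dev/null 2>&1; then
    flatpak list --columns=ref,size   # every app and runtime, with its size
    flatpak uninstall --unused -y     # remove runtimes nothing depends on
else
    echo "flatpak not installed; commands shown for illustration only"
fi
```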
Unused End-of-Life runtimes get removed automatically now (as of 1.9.1), but the answer is a bit more nuanced than that, especially if there are multiple users on the same system (that could have user-specific flatpaks that use system runtimes).
It could also benefit users that have more disk space than bandwidth (I was in that situation for a while, though a mechanism for sharing runtimes with LAN peers would be nice).
Could this be solved by smarter layering? I've read that Flatpaks are just container images under the hood and I'm sure many of those share the same base layers.
Flatpak is even better than layering. All the files are stored in OSTree, a git-like repo, and hard-linked under the root for each app. So duplicate files don't take up space.
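The effect is easy to demo with plain hard links - this is the filesystem mechanism OSTree leans on, nothing Flatpak-specific in the snippet (file names are made up):

```shell
#!/bin/sh
# Two directory entries, one inode: the payload is stored exactly once.
# OSTree uses the same trick to dedup identical files across checkouts.
dir=$(mktemp -d)
printf 'pretend this is a runtime file' > "$dir/runtime-copy"
ln "$dir/runtime-copy" "$dir/app-checkout-copy"   # hard link, not a copy
stat -c 'links=%h inode=%i' "$dir/runtime-copy"
stat -c 'links=%h inode=%i' "$dir/app-checkout-copy"
rm -r "$dir"
```

Both `stat` lines report a link count of 2 and the same inode number, i.e. the two names share one copy of the data.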
Good point, yes. But there are too many versions and varieties - in the same vein, there is a lot of variety among image layers. What problem does Flatpak solve? I assume it exists because it's tough to create Linux desktop applications when you have no idea what shared libraries, compiler, or versions to expect on the target system.
I always thought of containers as essentially a universal Linux package manager. If you ever `yum install` something and see it pull in dependencies, it's not like there's an elegant software solution making that happen; it's work done by people who maintain lists of what each package requires and which versions are compatible. Red Hat puts considerable effort into that, and it's work duplicated by other distros, each with their own packaging idiosyncrasies.
Because they've given up on being able to maintain a desktop distro with a set of system libraries that allow you to run programs. Future shock from too-rapidly-changing underlying libs, and no thought of forwards compatibility beyond 3 years, means no software lasts longer than 3 years, if that. And so, like a fever is a symptom of infection, we have containers as a symptom of future shock.
As for which container system, it doesn't really matter, and there's been a lot of not-invented-here going around, creating new container systems that muddied the waters. But the truth is they're all symptoms of something bad, even if they mitigate a problem temporarily (while making debugging, networking, freedesktop standards, etc. incomparably harder).
Once the Linux world moves completely to Flatpak, that would truly help languages like C++, whose evolution and cleanliness have been tremendously crippled by being unable to make a clean ABI break.
Let me repeat a message I wrote here some time ago, because I really don't think Flatpak is a good solution (note that these aren't issues from some ancient distro - this is openSUSE Tumbleweed, only a few months ago):
--
Flatpak is awful. I tried installing software with it a few times, and not only does it always install a ton of unnecessary stuff (practically an entire distro!), the software also doesn't integrate seamlessly with the rest of the system. GUI software looks wrong (themes do not apply, font rendering is wrong) and command-line software doesn't show up in PATH. See [0] for the GUI bits (Bless for theming, Notepadqq for font rendering) as well as the disk usage for two simple programs like a hex editor and a text editor.
Even worse, while there is an option to install things in the user's directory only (so I can make a separate user to try some things that I can easily delete later), not everything installs with that option; some things want root access to pollute the rest of my system.
Nowadays I simply avoid anything related to Flatpak. If something doesn't provide normal binaries and I really want it, I'd rather compile it from source (and if the source language is something exotic or the program needs a ton of dependencies, I'll just skip it).
For desktop integration, install Flatseal and pass ~/.themes and other desktop directories to the Flatpak app. It doesn't happen by default because of sandboxing.
Command-line software does show up, just not under its short name. You have to use the fully qualified app name - com.xyz.whatever. You can make an alias if you want.
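For example, with a hypothetical app id com.example.Editor, a tiny wrapper in ~/.bashrc gives it back a short name (a shell function rather than an alias, since bash aliases don't expand in scripts by default):

```shell
# Hypothetical app id; substitute the real one from `flatpak list`.
# Put this in ~/.bashrc (or a file sourced from it).
editor() {
    flatpak run com.example.Editor "$@"
}
```

After sourcing, `editor somefile` launches the Flatpak app under its short name.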
> Even worse, while there is an option to install things in the user's directory only (so I can make a separate user to try some things that I can easily delete later), not everything installs with that option; some things want root access to pollute the rest of my system.
Are you talking about `flatpak install --user`? Yes, you're right that some apps might not work with it, but how is that any worse than traditional packaging?
Flatpak isn't perfect and I prefer distro packaging over everything else, but it isn't feasible to have distro packaging for everything - not with the strict packaging rules most distros have, at least.
But out of snap, appimage and flatpak I'd go with Flatpak.
There are workarounds for everything I mentioned, including just taking the application binaries out and placing them in /usr/local or wherever by hand. The point is that while Flatpak might solve some issues, it also introduces others.
I know this is a controversial opinion; some would strongly disagree with me, and maybe most would consider me downright wrong. But as a system designer I have always thought (by always, I mean the last ~5 years I've been thinking about this) that the misery we experience with C/C++ package management is an OS problem, not a language problem. The fact that some languages roll their own package manager - pip, npm, cargo, etc. - doesn't change this for me at all. pip, npm, and cargo are symptoms of OSes failing to ship dependencies in a sane way.
Ultimately I want my software to be shipped by my OS's package manager, like pacman, apt, nix, or flatpak. The fact that I can go ahead and install dependencies with pip or npm is an implementation detail while I'm working with the code. The entire Linux ecosystem should have been designed like nix, AppImage, or flatpak from day 0; going down the "shared libraries" route was simply a bug. We thought it was OK, but we quickly saw it was not, and we should have recognized that it wasn't working and switched, but we didn't. As nix does, when there are common dependencies between apps you can still optimize and install them once, but each app should declare its own dependencies, and only those exact versions should be used, so that a library upgrade doesn't break anything.
Unfortunately, package management is an utter mess today, and I no longer have any appetite for tools like `pacman` or `apt` unless absolutely necessary. And this is not even just the fault of Arch Linux or Debian. Case in point: MuseScore (an app I use every day to write music) published MuseScore 4 over MuseScore 3. MS 4 is much better than MS 3, but some workflows are different, and it doesn't have complete feature parity with MS 3 (some things they decided not to implement). I use MS 4 for most things, but for certain things I still absolutely need MS 3. By "absolutely" I mean that for some compositions MS 4 is a dealbreaker and MS 3 is the correct tool, while for most other compositions MS 4 is the superior and correct tool. But when MS 4 was published, pacman immediately made the package `musescore` point to MS 4 with no option to install MS 3, even though they're two entirely different apps. Solution? Just download the .AppImage from the website and call it a day. It would have been much better if `pacman` served MS4.pacman_exe, and I click on it and it opens MS 4. Then it also optionally serves MS3.pacman_exe; I click on it and that also works. If I desire, I click on MS3.6.2.alpha43432.pacman_exe and use that specific version, with every library fixed. But this is currently impossible, since each version can potentially need different dependencies, so .AppImage is the only way.
Sorry for only reading a bit at the top, but languages often need their own library installers not because (all) OSes fail at package deployment, but because some OS the language targets does. It just takes one bad OS (and really it's Windows/macOS) that can't have a good package manager. Why do Chocolatey and Homebrew exist? Because the OS vendors aren't interested in propagating software that isn't making them money, directly or otherwise.
On Windows the normal install method is by downloading an installer and executing it, almost nobody uses app stores or package managers. And it works great.
There wasn't enough disk space to deliver statically-linked-style apps by default in the 90s; around the time Debian started, I had a 386DX40 and a 1 or 2 GB hard disk.
Traditional distros are able to install the same app side-by-side in multiple versions, such as python2 and python3, if they put their mind to it. It is just a bit clumsy. Sounds like they didn't allow for that with your app. That's a shame; I guess you could ask, or cough up a donation to help?
I generally just wait for things to get packaged in the regular distro, but it is nice to have choices.
I know this is going to sound like a stupid question, but does the MuseScore package support version pinning? Like, can you use
$ sudo apt-get install musescore=4.0.2
as you can with other packages like Tenacity or Dolphin?
Sorry if it wasn't clear: this approach doesn't work, since I need both versions. Besides, old versions are ultimately removed from the cache and rendered uninstallable by `pacman`. So no, that wouldn't work.
I agree. You've got a bit of a fatalist tone, but you're not wrong either - if Microsoft or Apple announced they were deprecating all of their prior packaging options in favor of a QEMU image of your application, people would lose it. It's not just about performance or integration or nativeness for most people; it's about the files living in the same spot and not needing extra apps to manage a sandbox.
Flatpak feels like it took the wrong lessons from the past 30 years of software distribution and turned it into a platform. I get that it has to be simplified to be friendly to newer users, but it's also a complete turnoff when you want to fix an issue or edit a config file.
On Win32 you don't statically link everything; all OS functionality is provided through shared libraries (aka DLLs), and the OS provides a ton more functionality out of the box (GUI, multimedia, input, graphics, etc.) than you'd get with Linux. Linux, while it provides strong backwards compatibility, is just the kernel; practically everything else a regular desktop program uses is provided by libraries talking to it.
The big difference isn't static vs dynamic, but that on Windows the DLLs that come with the OS do not break backwards compatibility, so applications can rely on them, whereas on Linux only a very small number of libraries avoid breaking backwards compatibility (off the top of my head, that'd be glibc[0], X11/Xlib/etc, curl, and OpenGL). Most importantly, the libraries for making a GUI application do tend to break backwards compatibility (see the breakage in Gtk1->Gtk2->Gtk3->Gtk4 and Qt1->Qt2->Qt3->Qt4->Qt5->Qt6 - though with the latter being C++ they can't help it, as C++ itself makes this harder by not having a stable ABI). Not all do (AFAIK Motif has been backwards compatible since the early 90s), but the ones that don't break aren't that common, for unrelated reasons - e.g. most provide only a small fraction of the functionality Gtk/Qt provide, or, as with Motif, the license was unpopular with FLOSS developers so no ecosystem was built around it.
[0] they do break compatibility for programs using unsupported APIs though
That works on Windows because Microsoft have made a commitment to doing things that way, as an intentional moat.
If you've got $$$$ of CAD software (or whatever) in binary form, which you know will work on pretty much every version of Windows but will absolutely not work on Linux or OS X - that's a good reason to stay on Windows. Microsoft is willing to do a bunch of compatibility testing etc to maintain that moat. They've got the money to pay developers to make sure Office 97 still runs on Windows 11.
Linux, on the other hand, has a lot of different stakeholders - and some of those stakeholders take the opinion that closed source software can go fuck itself. Maybe a Linux distribution wants to update from OpenSSL 1.1 to OpenSSL 3.0 and they're quite happy to patch all the software that's part of the distro. If you want to run some closed-source binary-only software from 1997 - that's between you and the software vendor, buddy.
> some of those stakeholders take the opinion that closed source software can go fuck itself.
It is sadly a very common misconception that backwards compatibility only matters for closed-source software, but in reality FLOSS benefits a ton from backwards compatibility too.
Imagine for example if glibc decided to add a third parameter to fopen (ignore if it is realistic for glibc to do that, this is for the sake of argument).
The only benefit of FLOSS[0] would be that it'd be easier for someone to update it to use the third parameter instead of waiting for the original developer to do so - but still every program, FLOSS or not, would need to be updated anyway and someone would need to spend time doing that update.
A more realistic example is GIMP: the stable releases (the last was made a few weeks ago) still rely on Gtk2, and the Gtk3 port (i.e. the effort to move from Gtk2 to Gtk3) is still in beta while the Gtk devs already consider Gtk3 deprecated. And this is all open source, no closed-source software in sight. Yet look how much time was lost - not only for GIMP but for any software that relied on Gtk2 and had to switch to Gtk3, with the story repeating for Gtk4. That's time that could have been spent improving the actual functionality users run the program for - manipulating images - instead of on yet another approach to drawing and laying out buttons, menus, and checkboxes, something that was already solved back in Gtk 1.x days.
[0] actually of any source available that doesn't have a "see but don't touch" rule
> Why can't we just static link everything, and distribute binaries
Because what happens if there's a vulnerability in, say, zlib or OpenSSL? As a distributor you'd need to rebuild everything (volunteer-run distros don't have enough CPU time to rebuild the whole archive at once) and, in the process, check whether the update breaks anything in each and every package. Or rely on upstream (which may be unresponsive, because they're also volunteers).
This might be manageable for relatively small, single-language, corporate-backed apps, but it is not viable for volunteer-run operating systems with ~50k packages written in every programming language ever invented.
No one distro will risk this (imagine Phoronix article: "After 1 year, CVE-2023-123456 fixed in only 30% of packages in StaticLinux!").
I mean, you're free to try. I'll provide time-to-fixed statistics for all your CVEs.
FWIW, I do this for everything I release. But I doubt that's much "better" than something like Flatpak, because static linking alone doesn't really help me with dependencies at build time.
Flatpak runtimes do this for common dependency lists, while still allowing some customization, without me having to recompile OpenSSL or some other annoying upstream dependency.
While not necessarily statically linked, AppImages do attempt to bundle dependencies such that they "just work" as self-contained executables on most Linux distros - which is what the poster above seems to care about.
The ballooning storage issue was mentioned along with Flatpak, and it is an issue, as every single dependency is pulled in. AppImage excludes those dependencies which are "reasonably expected" to be on any Linux system. See the Application Sizes section here: https://askubuntu.com/questions/866511/what-are-the-differen...
Maintaining backwards and forwards compatibility will cripple you as a library developer. If you do it, your solution will be slower to change, and probably slower overall, than a solution that breaks compatibility occasionally.
So you get fewer contributions, and a small bus factor means the library is more likely to die.
On the other hand, the developers who rely on your library won't need to waste time playing catch-up with your breakage just so their programs keep doing exactly what they were already doing; their most likely limited time (especially important for FLOSS developers, many of whom work on this in their free time) will instead go into improving what their applications actually do.
In addition, even if the applications stop being developed, they will keep working, since their dependencies won't break, and other developers can pick them up if needed, even years later. On the other hand, picking up a project whose dependencies no longer even work is much harder.
> In addition, even if the applications stop being developed, they will keep working, since their dependencies won't break, and other developers can pick them up if needed, even years later.
In theory, yes. In practice, due to Hyrum's law, you'll have to reproduce buggy behavior because some application exploited a bug in your code. See for example the SimCity Windows 95 compatibility story.
Nix is the ultimate expression of this abandonment of the concept of a desktop distro. It doesn't even try to have system libraries; Nix gives up in the face of future shock. The entire OS is just containers. I'm sure it works fine if you only ever use popular software, but as someone who's constantly compiling and adding little .c programs from the 'net to my bin/, having to manually create and specify the entire "system" libraries set bit by bit every time I want to compile (or run!) something is a no-go. Great for business use as a server; bad for human persons as a desktop.
> manually create and specify the entire "system" libraries set bit by bit every time I want to compile (or run!) something is no go.
I don't have first-hand experience with nix, but looking at this statement from a Guix point of view (and I assume nix is the same or very similar), it is not really true.
Nothing prevents you from keeping a list of libraries in a manifest. That list might be way too wide for any specific program, but it can represent what you would normally have as "system libraries".
You can then just invoke a shell as `guix shell --manifest=manifest.scm` and you will be given a working shell with all the libraries available.
You can also package your program using trivial copy&paste code with the same list. Sure, the dependencies will be too wide, but since you are not sending the package to upstream Guix for inclusion, no one really cares.
So, is it as frictionless as a "normal" distro? No (especially the "add to /bin" part).
Is it as bad as "manually create and specify the entire "system" libraries set bit by bit every time"? Also no.
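Such a manifest is genuinely small. A sketch (the package names are from the Guix collection, but treat the exact list as illustrative; trim to taste):

```scheme
;; manifest.scm -- an intentionally wide "system libraries" set.
;; `specifications->manifest' turns package names into a manifest that
;; `guix shell --manifest=manifest.scm' can instantiate.
(specifications->manifest
 '("gcc-toolchain" "pkg-config" "zlib" "openssl" "curl" "libx11"))
```

One file like this, reused across all your little one-off programs, plays the role the traditional /usr tree used to.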
> Nix is the ultimate expression of this abandonment of the concept of a desktop distro.
NixOS deliberately avoids having global/implicit 'system' libraries / configured state, and instead demands declarative expression of the system configuration. -- The libraries the system uses don't adhere to a global FHS structure.
Nix similarly eschews use of global/implicit libraries. Building each package requires its dependencies be declared.
However, it's incorrect to say this is "just containers". I don't see how you get "abandonment of concept of desktop distro" from "doesn't have implicit system libraries". -- NixOS allowing the whole system configuration starting from a single file is convenient, and something people might otherwise use a tool like Ansible for.
That said, NixOS obviously has many use cases which have significantly higher friction compared to more typical systems. (And a steep learning curve to overcome that friction). -- e.g. on Arch or Ubuntu, the system configuration is global and malleable.
As others have pointed out, this is wrong. By default you generally use shared versions of whatever your system libraries are, which are packaged as part of a given Nix release. Software can specify other, particular versions if needed. If you need access to the shared libs or some special libs in a dev environment, you can specify that in the dev environment definition, and they are then available as usual. We use this, for example, so Rust libraries can compile against the standard SSL library that comes with nixpkgs.
Nix is the actual and only novel solution in this space - you can just have a single flake file in your repository to make it always reproducible. It will only build/download the files that are necessary, using the minimum required space.
And for the same reason it can also just point you to a binary cache, and you can copy only the necessary data (basically a diff between the dependencies required to run program P and what your system already has). Just because the source is not available doesn't mean the dependencies can't be explicitly specified against some standard deps. Chances are the proprietary app will just depend on a libc you have already downloaded.
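For reference, a minimal flake really is just one small file - something like the sketch below (a hedged illustration: the nixpkgs pin is arbitrary, and `hello` stands in for your actual package):

```nix
{
  # Illustrative flake: pin nixpkgs, expose one package and a dev shell.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix build` produces this; `hello` is a stand-in for your package.
      packages.x86_64-linux.default = pkgs.hello;

      # `nix develop` drops you into a shell with these available.
      devShells.x86_64-linux.default =
        pkgs.mkShell { packages = [ pkgs.curl pkgs.openssl ]; };
    };
}
```

Because the input is pinned, anyone running `nix build` gets the same dependency closure, fetched from a binary cache when available.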
Haven't tried it but you could probably pin an env with everything you would want and then use direnv for automation. And for quick experiments there's FHS/steam-run. But yeah.. setup.
Maybe don't pretend that a trivial semantic difference, seen from your point of view, means I don't know what I'm talking about. Yes, Nix only alters environment variables, but guess what: that's pretty much what Snaps do too, plus a bit of bind-mounting. The fact that people are constantly asking "will Nix overtake Docker?" is another hint about actual usage.
Snaps bind-mount the libs in; Nix sets environment variables to select the available libs. The goal and the end result are the same. The difference is more marketing than anything else.
On Nix there is no separation of file system or network across applications. The same installed libraries can be used by multiple apps. Apps have access to the same system services and run together, e.g. with a single systemd. How can you call what Nix is doing "containers" ? There's virtually no overlap.
Zoom overtook the local freeway as the preferred mechanism for getting to meetings at my company. I guess Zoom is a road by your logic.
On Snaps there is no separation of network access. The same installed libraries can be used by multiple applications. Applications have access to the same system services and run together. How can you pretend Nix is different from a Snap container? There's virtually complete overlap.
Or maybe you'll now say that Snaps are not containers because they do it in a Nix like way? By your logic a frontage road is not a road just because it's dedicated to a single commercial development and named differently.
In my top-level post which started this big thread, I claimed that the purpose of containers (Nix, Snaps, Flatpak, Docker, AppImage, etc.) is to solve the future-shock problem and allow running of broken applications by controlling the libs available. I stick to that claim here.
I'm reading on snapcraft that most snap packages use "strict" confinement, which means that they "run in complete isolation, up to a minimal access level that’s deemed always safe. Consequently, strictly confined snaps can not access your files, network, processes or any other system resource without requesting specific access via an interface."
Sounds like a container to me, and it's not how Nix does things.
Yeah, Nix "doesn't only alter environment variables" - hell, the most common mode of operation is patching/compiling standard Linux ELF executables so that they dynamically link against the correct dependency: instead of the binary just mentioning libc, it references /nix/store/some-hash-libc/lib/libc. That one libc is shared across many (most) of the packages you will have in practice, so you get the advantages of containers without any emulation overhead or file-size cost.
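You can see this on any ELF binary. On a non-NixOS system the baked-in dynamic linker is a conventional /lib64 path rather than a /nix/store one, but the mechanism is identical (guarded, since readelf comes with binutils and may be absent):

```shell
#!/bin/sh
# Print the dynamic linker path hard-wired into a binary. On NixOS this
# is a /nix/store/<hash>-glibc-.../ld-linux path; elsewhere /lib64/...
if command -v readelf >/dev/null 2>&1; then
    readelf -l /bin/sh | grep -i 'interpreter' \
        || echo "(no PT_INTERP: /bin/sh may be statically linked)"
else
    echo "readelf not available; install binutils to try this"
fi
```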
It would be a better solution if it had a proper GUI that auto-generated config for novice users. Until then it's too hard to use for non-technical folk.
Flatpak is the reason why people who only used Linux because of Steam Deck can just install the app from the app store, and it works well enough.
And it's actually pretty cool that Flatpak keeps improving, even in the past few years. It's not improving as fast as I'd like (I want VPN apps, Lutris controlling emulators, and Native Host Messaging to work already), but a lot of issues are being worked on, and these days things work well enough.
I still prefer to use AUR if I could access it there, and some of the things I do are only available distro-agnostically through Nix or arch distrobox, but Flatpak is absolutely great when it's available and the sandbox isn't in the way.
1. Sandboxed applications (via containers?) - so applications don't technically have access to your home directory by default. That's sort of nice - but how many applications really warrant this overhead?
2. Possibly easier to write/package? That said, in Nix you pretty much just need to package once.
I've not used Flatpak - can somebody who is maybe familiar with both technologies comment further? While I may be a big Nix evangelist, it's always confused me a bit why Flatpak exists and I'm curious if it is truly solving a problem I'm unaware of.
I actively use both Nix and Flatpak, and my perspective is that they're tuned for different purposes:
- Flatpak doesn't require apps to be rebuilt when the runtime changes, whereas Nix would require a rebuild. This is, of course, an intentional selling point for Nix: you don't have to worry about keeping track of ABIs and the like. It is, however, less fun when you have to update a large number of applications for little reason.
- When using flakes, it's easy to end up with a lot of extra disk usage because of a proliferation of different nixpkgs versions. Flatpak's runtimes help avoid this, since apps target a runtime version and can run under any individual commit for that version.
- Flatpak has a massive amount of internal infra for sandboxing that would take a moderate effort to reproduce elsewhere.
- This also means that running Flatpaks always involves new user namespaces; this can't just use environment variables like Nix does.
- Flatpak's entire setup is more tuned towards GUI applications, with support for stuff like swapping out the graphics drivers. (NixOS can of course do this too; otherwise you need nixGL.)
- Flatpak has support for downloading non-redistributable files at install time and placing them in the same location as the application's main files.
- Flatpak's summaries that it downloads from the server are much smaller than nixpkgs checkouts.
- Flatpak dedups individual files across apps.
Really, it's just that Flatpak prefers keeping the build process and usage simpler to focus the energy on sandboxing, leading to design decisions like the more straightforward runtime/app split (easier for a user to understand / manage) and all of the built-in extras for graphical applications. Nix has a significantly more complex UX but is infinitely more flexible and also far better suited for development environments.
If your thought is "most of this isn't structural", that wouldn't be incorrect: there is no reason that:
- building flatpaks couldn't use a more nix-like model
- nix couldn't add sandboxing and summary support on top
Heck, you could probably build a Flatpak from a Nix derivation, if you figured out how to separate the "runtime" and "app" parts. But at the end of the day, every tool has some limited degree of development bandwidth, and as-is, Flatpak and Nix just optimized that bandwidth for different targets.
> Flatpak doesn't require apps to be rebuilt when the runtime changes, whereas Nix would require a rebuild. This is, of course, an intentional selling point for Nix
I doubt it's intentional. Guix has solved the problem - updating the runtime without rebuilding the app - while being fairly similar to Nix in its general concepts.
Nix sounds amazing and better than everything else, but it also sounds really hardcore and not for common usage.
You have to learn a specific programming language just to use it, so the barrier is: learn to use a computer, learn to program, learn a specific language.
Compare that to desktop distributions, or windows, or macOS the barrier is: learn to use a computer
That is so frustrating, it really sounds amazing, but I don't have the willpower to do that just for the day-to-day OS.
Looking forward to the day it gets more consumer-oriented.
What advantages does Nix have over Flatpak? It seems to be just yet another esoteric solution for installing software. Why should one prefer it over other solutions?
How well does Nix integrate packages into the system? Flatpak integrates apps into the desktop environments. Probably just following standards, but is Nix doing this too?
Are Nix installations portable? I use Flatpak to have the same installation available on different systems - I just move my SSD between them. How well would this work with Nix?
Theoretically, you could create a Nix-Flatpak hybrid by sourcing your Bubblewrap packages/environment from the Nix store. It wouldn't be much different from how Nix handles it today, just with an additional layer of sandboxing for each environment.
The problem Nix solves is the one Flatpak tries working around - how do I guarantee software builds and runs on my system? The answer turns out to be "very strictly", and Nix has done a good job of writing sane rules where it can. It's not perfect (and definitely not user-friendly today) but it has a larger package repo to work with and better support for stuff Flatpak is missing like system services. If people do want to switch to an immutable root system, I'd hope the goal is to be as modular as NixOS is someday.
From my testing it seems to integrate well enough. Mind, I'm still only dipping my toes into Nix on Vanilla OS for now, but I could install `git` and run `git` directly, without `flatpak run` and the like.
Optional dependencies seem to be an issue, though: I had to figure out on my own what I needed to get thumbnails in Dolphin working correctly, and I might still be missing something I'm supposed to do in this case.
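As a sketch of that workflow (the `nixpkgs.git` attribute path assumes the default `nixpkgs` channel on a non-NixOS install; the exact command depends on whether you use the classic or the flakes-enabled Nix CLI):

```shell
# Classic Nix CLI: install git into the user profile
nix-env -iA nixpkgs.git

# Newer (flakes-enabled) CLI equivalent:
# nix profile install nixpkgs#git

# The binary lands on PATH via the profile, so it runs directly,
# with no wrapper like `flatpak run` in between:
git --version
```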
> What advantages has Nix over Flatpak? Seems to be just yet another esoteric solution for installing software? Why should one prefer it over other solutions?
It's not just for installing software - I have a declarative, shared configuration for each software that works deterministically on each system I share it with (NixOS or not).
> Are Nix-Installations portable? I use Flatpak to have the same installation on different systems available, just move my SSD between them. How well would this work with Nix?
No, but the configurations are (see earlier response), so this seems to be the same benefit here.
> How well does Nix integrate packages into the system? Flatpak is integrating apps into the desktop environments. Probably just following standards, but is Nix doing this too?
I don't exactly know how to answer this - and it's probably partially because I don't use many modern desktop environments (when on NixOS I use i3, otherwise I'm on macOS, which is a bit of a specialized usecase). So I can't really comment here.
As far as installing immutable packages accessible from the system - it does a pretty exceptional job.
Maybe it's because I'm naive when it comes to Nix, but so far everything Nix-related I've seen requires some file configuration that not everyone is able to keep up with. Compare that to having a GUI frontend and a simple `flatpak install ..` command.
Flatpak is not just easier for packagers, it's miles ahead when it comes to simplicity for end users.
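For comparison, the Flatpak flow really is a couple of one-liners (the app ID `org.gnome.Calculator` is just an arbitrary example):

```shell
# Add the Flathub remote once, if it isn't configured already
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install and run an app by its ID
flatpak install flathub org.gnome.Calculator
flatpak run org.gnome.Calculator
```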
The benefit of Nix is that you declaratively configure everything in a single file. And there's no reason you couldn't wrap a GUI around that. In fact I'd be surprised if someone hasn't already.
You're listing one of the biggest reasons Flatpak was even created as its only benefit. The reason it got some attention lately is the growing interest in immutable distributions, where Flatpak is sold as the userspace package manager while OSTree handles the root system.
From a 2023 POV on desktop OS security, running things sandboxed is a _strong must-have_.
Even for well-curated applications, though less so.
So I'm happy about the general direction here.
But the last time I looked at Flathub (and related topics), I was disappointed to the point that I would say they majorly failed at this. Now, that was quite a while ago. But the fact that, in a not-well-curated app store, you could download applications which were not effectively sandboxed (e.g. which used the sandbox only for compatibility, not security), in a context which made it look to users like they were sandboxed, without any huge red warning label or similar, was quite shocking and made me second-guess the security competence of the involved developers.
Now I hope things are much better by now and I guess I will spend some time to look into it again this weekend.
But the reason I write this here isn't because I want to dump on Flathub, but because I often feel that huge parts of the Linux community are very out of touch when it comes to desktop OS security, to the point that it makes me worry for the future of desktop Linux.
(It's also not just security: e.g. some design decisions and defaults of systemd make sense for many Linux use cases, but not for desktop systems. The Unix permission model doesn't match the desktop OS security concerns of today well either, etc.)
I strongly agree. Luckily, a lot of other people do too, and the ecosystem is moving towards run-time permissions ("portals") that allow users to confirm access.
It does take a while for toolkits to start using them, but AFAIK, as an example, both KDE and GNOME now transparently use portals for the file chooser dialog.
It will take more time for everything to move to newer, safer APIs (Wayland, screensharing portal, etc). The situation is much better than it was a few years ago, though.
And most frontends now display required permissions; KDE even recently integrated permission management into its system settings.
Some portals still need to be devised though, especially for device access (gamepads, webcams, etc).
I haven't daily-driven Linux on my desktop in a few years, so Flatpak didn't have a ton of traction when I was a target user, but I do think it solves some of the problems I had running Linux on my desktop.
I liked Linux because I considered it to be a great development environment, but I always thought it was annoying as hell to run consumer software on. And I do have to run consumer software, whether it's web conferencing or chat or tools for some of my electronics, etc.
Sandboxing consumer applications would have allowed me to focus my package management efforts entirely on my development environment
Part of my issue with Flatpak is that it takes systems that really ought to be managed separately (process isolation, package management, runtime configuration, security, etc.) and makes them one monolith. It's an OK stopgap for people frustrated with software distribution, but it tangles itself up in so much mess that it's hard to promote over its competitors (even Snap).
It's not great. Sometimes it works well, but when the wiccan magic inside Flatpak breaks now you have to debug two runtimes for the price of one!
A lot of Linux users grouse about people not wanting to use their favorite OS, but people don't use it because a bunch of perfectly well-built software may or may not run, depending on some under-the-hood voodoo magic they don't understand and can't fix.
Flatpaks fix that. If the package is well built, it'll work. If it's from a reputable source, it's probably well built.
Well, if I understood the point correctly, and if it happens too often, it could become too much of a burden (as in: not enough sales to make it worthwhile). Developers might end up supporting only one runtime, deprecating and dropping support for all the old ones. That's a problem for users, and eventually for developers too: if they don't know how to create a release that works for everybody, they won't make one.
Does Flathub help with sandboxing? For example, if I download an app, can I know that it can’t access photos, or location, or microphone, etc… without asking first?
The flatpak CLI will tell you which permissions a program wants. GUI installers usually also show this information, just worse most of the time. If you want to revoke permissions, you can use Flatseal or the KDE settings module.
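A minimal sketch of doing this from the CLI (using Firefox's Flathub app ID as an example; any installed app ID works):

```shell
# Show the static permissions an installed app was built with
flatpak info --show-permissions org.mozilla.firefox

# Revoke a permission for your user, e.g. drop home-directory access
flatpak override --user --nofilesystem=home org.mozilla.firefox

# Undo all per-user overrides for the app
flatpak override --user --reset org.mozilla.firefox
```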
Flatpak is great! Flathub on the other hand... The app submission process is very convoluted and the documentation is super vague.
Snap is much easier to publish and a joy to use with the dashboard and analytics. I hope Flathub improves their submission system and documentation because it's still a mystery to me.
As a non-technical Linux user, seeing the ecosystem move towards some sort of standard on this front is wonderful and feels long overdue. Understanding how to install (or uninstall) software is one of the greatest pain points for new / non-technical Linux users.
What are people's opinions on Flatpak vs AppImage? I use both (AppImage a lot more so than Flatpak) and I'm not well educated in their differences. I shall research it, but I'm wondering if anyone has some insight to share. Some apps I use (MuseScore) publish their own official .AppImage linux executables which work flawlessly for me, so I cannot abandon AppImage completely. Is it worth it to use both? Is it better to switch to one for everything, along with distro package manager? Thoughts?
AppImage is a more localized take on "bundle everything" than Flatpak: instead of trying to share libraries across different applications (Flatpak tries to share runtimes), every set of dependencies is local to the application itself. In a way it is kind of like trying to statically link everything, though of course it doesn't technically use static linking, and it is really up to the developer what "everything" means (it can still rely on stuff in the underlying OS). Also, it only tries to solve the dependency problem, whereas Flatpak tries to do a bunch of other stuff.
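Mechanically, the AppImage side of the comparison is about as simple as it gets; there is no install step at all (the filename below is illustrative):

```shell
# An AppImage is a single self-contained executable file:
chmod +x MuseScore.AppImage   # illustrative filename
./MuseScore.AppImage          # runs in place, nothing installed system-wide
```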
From a "how things should be" perspective, both are bad IMO (see my other messages here), but from a purely practical perspective, based on my experience trying to use applications packaged with them, AppImage tends to work slightly better, and since it is local to the application, it doesn't pollute the rest of the system.
If I had to choose, I'd choose neither, but if I had to choose between the two, then AppImage is by far the preferable one.
I've looked at your other comments here but I can't find elaboration on the bad parts of AppImage. I tend to agree that AppImage is the best of the available options - do you mean that the best solution would be rich system libraries that never break backwards compatibility ala Windows? And since we don't have that, AppImage is the best we can do?
Yes, that's what I meant by "how things should be"; the comments I was referring to were the ones about backwards compatibility, which is the root of the problem. AppImage is just a workaround for it.
Potentially stupid question, is there any reason a "simple" .app bundle like has been used on NeXTStep/macOS for decades wouldn't work on Linux?
That doesn't take care of sandboxing, but that seems like it'd be better handled by the OS itself rather than the package management system, and it avoids the problems with overhead and integration that come with things like Flatpak and Snap.
> is there any reason a "simple" .app bundle like has been used on NeXTStep/macOS for decades wouldn't work on Linux?
Conceptually no, and both GNUStep and RoxFiler/RoxOS did just that.
The practical reality though was that Linux Desktop has never had a conception of which pieces are 'add-on' and which are 'base-system', and to the extent there was any collection of components that could be part of the latter they were frequently making breaking changes. Combined with no coherent standard for properly versioning libraries and every fiefdom package repo doing its own thing, and what you're left with is decidedly not a platform.
The modern version of Rox AppDirs is AppImage, and it mostly kinda works on most distros. This is sad, but there's just no way out of it as long as the Linux Desktop community continues to reject the very concept of being a platform.
Steam initially tried to solve this for games by just dragging along its own Runtime for developers to target, but there were still issues. They've since regrouped and basically just declared Win32 the one true stable runtime ABI for games on Linux.
FlatPak just decided that yet-another-package-manager was the way to go and manages a collection of explicitly defined runtimes instead. It works well enough for most use cases.
One thing I really dislike with Snap/Flatpak is how it breaks connectivity with other parts of your system.
For example, I run Ubuntu and wanted to use a smartcard for authentication on certain websites. That didn't work because Firefox is in a snap and cannot communicate with the PKCS#11 libraries...
To solve that, I had to scrap the Firefox snap and install the Debian package from a PPA.
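A sketch of that workaround on Ubuntu, assuming the commonly used `mozillateam` PPA as the source of the .deb build (apt pinning, which is needed to stop the transitional package from reinstalling the snap, is omitted here):

```shell
# Remove the snap-packaged Firefox
sudo snap remove firefox

# Add the Mozilla team PPA and install the .deb build instead
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox
```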
To some degree, I like this, but the experience for both to allow access is still pretty draconian. I'm not quite sure what the process is for Snap to adjust this (it might require a rebuild?), but I know for Flatpak it's common for folks to say "Use Flatseal to adjust the permissions of what the app has access to" to resolve problems like this.
In an ideal world, the application would be Flatpak or Snapcraft aware (or there'd be a middleware to intercept these calls), and ask for access to parts of the system it needs.
You could try Flatpak and override some of the defaults. I would probably open an issue with Mozilla since they maintain at least the Flatpak. Not sure about the Snap.
Flatpak is great for self contained apps, but it seems problematic when it has to reach outside the sandbox. I ran into this in VSCode and with CUDA video processing filters, but I'm sure there are more examples.
Try using Flatseal [1] which is a Flatpak app that helps manage permissions for other Flatpak apps. You can see exactly what permissions are enabled in each app's sandbox and expand/override everything from env vars to filesystem locations you want to enable access to.
Flatseal is only a convenience. I think you'll find that the flatpak CLI [1] has commands to do everything Flatseal does, albeit in an arguably unwieldy way.
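For example, the `override` subcommand covers most of what Flatseal exposes (the app ID below is illustrative):

```shell
# Show the current overrides for an app
flatpak override --user --show org.example.App

# Grant or revoke filesystem and device access
flatpak override --user --filesystem=~/Documents org.example.App
flatpak override --user --nodevice=all org.example.App

# Set an environment variable inside the sandbox
flatpak override --user --env=GTK_THEME=Adwaita:dark org.example.App
```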
Someone needs to slap a comprehensive GUI installer and configuration UI onto NixOS. It solves similar problems in a non-hacky way but is not easy enough for the average user.
At least static binaries slip under the radar of vulnerability management tools, so you don't have to go through the tedious patching treadmill as often.
Not really. Runtimes have lots of software that is shared. If you are building a run of the mill GTK application, you probably use the GNOME runtime with no extra dependencies.
These types of workarounds seem to point to people giving up on having secure applications; so much for the move to Rust. So the path is to have applications in a walled garden. But who cares if all the plants in that garden are infested with aphids, at least they can say it won't spread. Well, time will tell on that. But I hope application developers keep an eye on portability. There is more to the world than Linux.
OpenBSD is avoiding all this complexity of Flatpak, Snap, Docker, and all these attempts at containers. To me, pledge(2) and unveil(2) are far more secure and easier to use than what Linux is doing. And for containers, I think nothing still comes close to FreeBSD jails.
It's funny you mention OpenBSD and pledge/unveil, because they don't really come close to being able to lock down the whole system, while Linux has things like SELinux/RSBAC available.