Right now I use openSUSE Tumbleweed and Guix for applications. This gives me an up-to-date OS with the ability to roll back OS updates (thanks to openSUSE snapper and btrfs). Guix is a functional package manager with rollback support & reproducible builds, which allows you to have multiple versions of the same library installed so that different applications can make use of them. That combination solves a lot of problems traditional packaging systems have.
I also follow the Snap packaging world to some extent. I like the security ideas Snap brings to the table, but right now it is still a work in progress.
Regardless of a library's track record, Debian pessimistically treats all libraries in Debian stable as if they were the OpenSSL of old (which broke its API between releases and exposed random internals as potential breakage surface). If you are an upstream library developer and maintain perfect API compatibility, Debian still won't ship your releases to Debian stable before the next Debian stable, so app developers can't depend on the new features of your library if they use the system copy. Depending on the nature of the library, this either holds the ecosystem back or leads to app developers bundling the library, which creates more untangling work for Debian.
Never mind that, there were cases where OpenSSL broke binary compatibility in security patches. Code which worked fine suddenly stopped working when OpenSSL was upgraded.
During my tenure as FreeBSD security officer, OpenSSL was right at the top of the "don't ever trust the patches these guys send out" list.
Debian-supplied libraries should only be for user-facing applications supplied by the Debian distribution itself. The system copy should be for the system only, and unless you're writing software intended to be part of that system, it's not for you. If you opt in to using it, you've got to accept the trade-off that the rest of the system and all its concerns come first - and that includes not introducing changes unless absolutely necessary.
I'd much prefer a clear "line in the sand" policy like that than the status quo, which tries to keep everyone happy.
What about libraries like Gtk, though? Should third-party apps bundle their own Gtk and interface with the system only on the X/Wayland + ATK layers?
What about glibc? Occasionally (getrandom()!) there's new glibc surface that apps really should adopt. Should apps just rely on the Linux syscall interface directly and bypass glibc?
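To make the question concrete, here's a minimal C sketch of the two paths, assuming a glibc new enough to expose the wrapper (2.25+) and a kernel that has the syscall (3.17+):

    /* Fetch 16 random bytes two ways: via the glibc wrapper,
     * and by calling the kernel directly, bypassing glibc's API surface. */
    #include <stdio.h>
    #include <sys/random.h>   /* glibc wrapper: getrandom(), glibc >= 2.25 */
    #include <sys/syscall.h>  /* SYS_getrandom */
    #include <unistd.h>       /* syscall() */

    int main(void) {
        unsigned char buf[16];

        /* Path 1: the glibc wrapper -- only exists if the system glibc is new enough. */
        if (getrandom(buf, sizeof buf, 0) == (ssize_t)sizeof buf)
            puts("glibc getrandom() ok");

        /* Path 2: the raw Linux syscall -- works with an older glibc, but you give up
         * whatever portability and error-handling help the wrapper provides. */
        if (syscall(SYS_getrandom, buf, sizeof buf, 0) == (ssize_t)sizeof buf)
            puts("raw SYS_getrandom ok");

        return 0;
    }

Relying on the raw syscall works, but it means every app re-answers questions (fallbacks, error handling, portability) that the glibc surface was meant to settle once.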
As long as Debian stable is a target that developers feel their app should run on, backports and testing aren't solutions for app developers.
Debian and distros in general should take note of that.
That said, I think distros (and Debian) should look to provide a base system on which we can run flatpaks and containers.
Then create a container with a low-friction, language-specific package manager for each project a developer is working on... and publish end-user apps as flatpaks, as those are the only things people care to update anyway.
On the server side, containers are not primarily about "getting the admin out of the way", but about reducing the dependencies of your application to their bare minimum. Taken to its extreme, you have Erlang and Mirage, which let you compile your whole application down to its own TCP stack + GC + your app, bypassing the kernel completely.
In that world, there's no need for security patches, because you're using a fast managed language (OCaml/Erlang/Elixir/F*) and you don't have things running next to your application that can pose security problems; no shared libraries, no kernel, no SSH daemon; it's all compiled into your app. You get security updates through your language package manager, and they are frequent, because you keep your app up to date as you develop it.
Because the app above runs its own GC and TCP stack and is self-contained, it now makes sense to move to Kubernetes, which gives you scheduling, health checks, networking (overlay), a resource permission model (RBAC), a network permission model (Calico), and secret management. Your deployment environment is now your operating system, and the distros aren't needed anymore.
You're too optimistic. Managed languages solve a few security problems, but not all of them. Logic bugs still exist. Encoding issues still exist. Shellshock still happened. PHP is a managed language and we don't hold it up as an example.
The only thing the lack of shared libraries does is that now you have to compile the same code into the app. It's going to contain the same errors, but now you have to replace the whole app rather than one library. It's also harder to tell from the outside whether you're relying on a specific version of a library.
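To illustrate that last point, here's a small C sketch (zlib is just an assumed stand-in for any common system library): a dynamically linked program can report, and be inspected for, the library version it actually loaded at run time, while a statically linked copy freezes whatever was compiled in and hides it inside the binary.

    /* Compare the zlib version this program was built against with the
     * version of the shared libz actually loaded at run time.
     * With dynamic linking these can differ and can be inspected from outside;
     * with static linking the compiled-in version is all you ever get. */
    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        printf("built against zlib %s\n", ZLIB_VERSION);   /* compile-time header version */
        printf("running with zlib %s\n", zlibVersion());   /* version of the loaded library */
        return 0;
    }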
MirageOS provides you with a kernel. You're not getting rid of that one. Also Erlang needs some system to run on. It may hide in the same package and be called Ling, but it's still a kernel.
I haven't actually read any real detail about it, but my understanding is that this is why Red Hat bought CoreOS. The old Red Hat/Debian model is going to slowly turn into a tarball of static libs kept for backwards compatibility.
Caveat: this is w/r/t "servers"; I don't know/care about desktops/laptops.
On the desktop we'll have the same problem with flatpaks.
I hope it might all be okay if the base system is provided by distros...
If that's not ambitious enough, what would it take for the implementers of a new language to be convinced to just use Debian as their package manager? It seems like that would require a lot of changes.
OS-level packages only work properly when developers focus on a specific OS.
The moment one wants to target as many OSes as possible, building OS-specific packages becomes a pain, and it is easier to outsource the problem to language-specific package managers that abstract the OS layer.
It also feels ridiculous to have to install another package manager if you only want a certain command-line tool.
Maybe this could be solved by a sort of meta-standard for package managers, so the OS has at least some way to aggregate all package managers and language-specific packages installed.
Of course, as always with meta-standards, you might fall into the XKCD trap...
Edit: After reading the linked article, I want to add this: I believe things from NPM don't really need to be packaged individually. That community is an outlier in many respects, with its sloppy practices, its very wide scope, and the sheer innumerable "packages" it hosts (many hundreds of thousands). For the rest, only the libraries used by the application software packages (from Firefox to coreutils, anything that's not a library) need to be packaged. The remaining libraries are most often installed by developers, or by users installing non-packaged software, so they will need a third-party package manager anyway. The OS package manager does not need to include everything, only what's essential and stable.
So I don't want to say "it depends", but... it depends.
I seem to operate/administer/use my machines in a certain way, regardless of their official designation as "production/QA/development/just-working-by-chance" - for A LOT (say, 80-100%) of the packages on the system I simply care about two things:
a) does it work and
b) is it recent enough to include some features I need.
These are installed by a distro package manager and only get security updates. Everything is awesome.
On the other hand, those 5-20% other packages are usually either not packaged at all in $distro, or too old, or something's wrong, or I'm really developing against git HEAD. So now the complicated part:
Do I use the latest and greatest version? (Debian unstable, arch/void/other rolling release)
Do I just use the language package manager anyway because it will never get packaged this year?
TLDR: For all the software (majority) I don't have a laser focus on, I love the distro packages. For some things, the language package manager is fine (or I will use nixpkgs or GNU stow or just plain make install to /usr/local) and for a tiny part I will manually "git pull" every few hours or days and use that. And all of this is not a problem, it works for me[tm].
Linux package management is a rare example of successfully tamed complexity. It's magic: a single copy of, say, the gzip implementation, imported and used by hundreds of installed applications.
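As a minimal sketch of that sharing (using zlib, the library form of the gzip/DEFLATE code, as an assumed example): every application built like this calls into the one installed libz, so a single distro update to that file reaches all of them at once.

    /* A tiny consumer of the system-wide zlib. Built with something like: cc demo.c -lz
     * Every application linked this way shares the single installed libz,
     * rather than bundling its own copy of the compression code. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        const char *msg = "hello from the shared system library";
        unsigned char out[128];
        uLongf out_len = sizeof out;

        /* compress2() lives in the shared libz, not in code bundled with this app */
        if (compress2(out, &out_len, (const unsigned char *)msg,
                      strlen(msg), Z_BEST_COMPRESSION) != Z_OK) {
            fprintf(stderr, "compress2 failed\n");
            return 1;
        }
        printf("compressed %zu -> %lu bytes using zlib %s\n",
               strlen(msg), (unsigned long)out_len, zlibVersion());
        return 0;
    }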
The sandboxed approach works on servers, where a single machine is usually dedicated to running just a handful of rapidly re-installed apps; it has no place on a desktop. I am not interested in every little component (of which I use dozens) landing with its own "runtime".
This madness is caused by the common techno-disease of seeing a nail everywhere when holding a hammer. Yes, containers are great for the (trivial) server use case, and no, they're stupid on a desktop, which is two orders of magnitude more complicated.
I clicked on your link and saw exactly what I expected:
> VisiFile is a NodeJS application built with Express, Sqlite, and various NodeJS modules.
Yep, another example of the wheels coming off: no, NodeJS is not an appropriate tool for building desktop software. By picking the wrong tool you're dragging the wrong deployment model along with it. I will only use an Electron-like app if someone pays me for it (Slack at work).
Chrome and Firefox ship new versions of their products every six weeks. Each version carries both new functionality (Web standards require field trials as part of the standards process) and essential security changes (e.g. for TLS). Ubuntu and Fedora repackage Firefox and ship it as fast as they can, because that is the responsible thing to do, but it would be easier and safer for everybody if that could be done with the transactional updates, parallel version installation, and dependency isolation that flatpak and snap can provide.
I see a similar friction here to the one we face between distributions and developers, and the main source of it seems to be guarantees about stability. Only after that does it extend to the friction over getting the latest version shipped to the user (whether a developer or the end user of an application).
In corporate environments the claim was that we would solve all of this by running microservices and simply versioning our APIs to guarantee the stability of our interfaces for our fellow developers. I would still like to see that work for an extended period of time in the real world.
I don't see how containers or flatpaks will solve it. At some point someone has to touch it again, be it for bug fixes, security updates, or feature development. If all hell breaks loose every time you have to resume stalled development because something in your ecosystem wasn't as stable as expected, we haven't solved the problem yet.
And it's not solved if we install five different versions of a lib with different sets of bugs in parallel.
Sure, it has its uses, but outside of those it's just yet another needless layer of obfuscation. OSes and centralised package management served us well for a generation or two. I worry that Docker & Co. are going to send all that the same way JS sent a few decades' worth of soundly developed programming practice.
Rather than have admins complain that something broke when some push-to-prod devs are running amok, said devs can just point to their containerized snowflakes and tell the admins to get lost.
The problem is not technical, it is cultural. It is the webdev mentality percolating down the stack, giving the devs the impression that they can just fix their latest API/ABI boondoggle by rapidly pushing another version, like a machine gun fires bullets.
It is not surprising that Gnome is one of the biggest pushers of this, as they seem to have an artistic attitude to their code. Just watch them trying to tar and feather Torvalds, as he has told them off many a time for this behavior (a behavior that is strictly not tolerated in kernel land, might I add).
Except it often breaks and you need to be an expert to fix it. I'll take the hit on disk space if it means I can reliably install software quickly and without hassle.
Containers aren't perfect, but they get the job done.
Basically wrap the language-specific managers, and integrate the result into the GoboLinux directory tree.
More akin to Gentoo than Arch, and far removed from the likes of CentOS or Debian Stable.
That said, rolling your own recipes is relatively straightforward. And with the /System/Index they are using these days, I think it should be less sensitive to hardcoded paths and similar idiocies.
can be replaced by
> with frequent upgrades, errors are sparse blips on the Debian testing release
Standard line: if you're considering Debian for the desktop and want a reasonable mix of stability and recent updates, testing is recommended over stable.
(I wish Debian had a genuinely supported rolling or rolling-ish flavor. It's bad for the Linux ecosystem when people who don't need to be using out-of-date software use out-of-date software from stable.)
1) The general attitude seems to be that if something breaks, the user is at fault for using something called “unstable”. I.e. it’s not positioned as a way for the casual user to keep using up-to-date software that’d still be smoke-tested enough that you don’t need to be an enthusiast who can resurrect an occasional broken system.
2) While in practice there probably is better security coverage than in testing and backports, formally there still isn't security support, per https://www.debian.org/security/faq#unstable . (I do recognize your HN handle and know you know this, but I'm providing the link for other readers.)
Indeed. Any crazy ideas on how to remedy this? :)