It’s been great for C and C++ packaging. I don’t think the track record has been great for Python, Go, JavaScript, etc., all of which surfaced the same problems years before Rust.
Why do conda or bazel exist, then? If you're willing to limit yourself to pure Python plus a sprinkling of non-Python for crypto, to a single language with IPC over HTTP (i.e. Go), or to a locked-down, effectively mono-lingual environment for your language (JS), then a language package manager makes sense, provided you can ignore the subtle breakage that comes with each of them (e.g. JupyterHub cannot use the PyPI-distributed version of certain packages, DNS issues on macOS). Otherwise you need a cross-language package manager, and the only thing that has appeared to scale is volunteer package maintainers (complemented in some cases by paid ones) looking after upstream-provided software.
Yeah, this is my perception as well. My lukewarm take is that because so much of the packaged stuff was written in C/C++ for so long, the packaging systems are mostly optimized to work well with the quirks of things written in those languages, which sometimes comes at the expense of stuff written in other languages. In a lot of ways, distro package managers have basically evolved into the role of pip/npm/Cargo/etc. for C/C++ packages, which leads to a mismatch when you try to also use them for other languages, in much the same way you'd expect when grappling with a build that mixes an arbitrary pair of languages from the list above.
Completely irrelevant. Freedom 2 means people are free to build and distribute as they wish. Upstream need not care what the format is. Just make the software easy to build and get out of the way.
The Linux model of global shared libraries is an objective failure. Everyone is forced to hack around this bad and broken design by using tools like Docker.
It's okay, Docker is also a failure, because it relies on random parties to package things up into container images and then keep the result up to date. Given the number of Dockerfiles I've seen that do charming things like including random binary artifacts and pinning versions that will probably never be checked for security updates, I tend to prefer the distro packages.
I wish people would interrogate this more deeply: why do we see so many Dockerfiles with random binary artifacts? Why do people download mystery meat binaries from GitHub releases? Why is curl-pipe-to-bash such a popular pattern?
The answer to those questions is that, with few exceptions, distributions are harder to package for, in both first- and third-party contexts. They offer fewer resources for learning how to package, and the resources that do exist are largely geared towards slow and stable foundational packaging (read: C and C++ libraries, stable desktop applications) and not the world of random (but popular) tools on GitHub. Distribution packaging processes (including human processes) similarly reflect this reality.
On one level, this is great: like you, I strongly prefer a distro package when it's available, because I know what I'm going to get. But on another level, it's manifestly not satisfying user demand, and users are instead doing whatever it takes to accomplish the task at hand. That seems unlikely to change for the better anytime soon.
It's not the "Linux model". It's an antiquated distro model that has been superseded by distros like Guix and NixOS, which have shown you can still have an understandable dependency graph of your entire system without resorting to opaque binary blobs like Docker images.
I know this is not a good-faith question, but I did devops for many years, and the state of the tooling and culture is so atrocious that I don't know if Nix or Guix will ever catch on there. Devops folks are all lost in a swamp of YAML and Dockerfiles, a decade or more behind the rest of the industry in terms of software engineering knowledge. No one ever had any clue when I tried to talk to them about functional and immutable package management.
It is a remark in the form of a question, pointing out that Nix/Guix are 3l1t3 and will never achieve the adoption scale of Red Hat, Debian, Arch, SUSE, Alpine, or the hyperscalers' own distributions.
If you have to run 5 different Docker images, each with its own “global shared library” set, you clearly no longer have system-wide globals. You have an island of deps potentially per program, or possibly a few islands for a few sets of programs.
Which, once again, completely defeats the entire purpose of the Linux global shared library model. It would have been much, much simpler for each program to link statically, or for programs to be expected to include their own dependencies (like Windows).
Containers should not exist. The fact that they exist is a design failure.
Static linking is acceptable when you can update a library in one place and have it trigger rebuilds of all dependent software, ensuring that things like security updates are delivered system-wide with confidence. The Windows every-app-is-an-island model makes you reliant on every app developer for updates to the entire dependency graph, which in practice means you end up with a hodgepodge of vulnerable stuff.
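To make the rebuild-cascade idea concrete, here's a minimal sketch in Python of how a distro-style tool might decide what to rebuild when one library gets a security bump. The package names and the dependency graph are made up for illustration; real distro tooling obviously does far more than this.

```python
# Minimal sketch (hypothetical package names and graph) of the
# "bump one library, rebuild every dependent" model that makes
# static linking workable at distro scale.
from collections import deque

# Maps each package to the libraries/packages it links against.
DEPENDS_ON = {
    "openssl": [],
    "libcurl": ["openssl"],
    "git": ["libcurl", "openssl"],
    "some-cli-tool": ["libcurl"],
}

def reverse_deps(graph):
    """Invert the dependency graph: library -> packages that link it."""
    rdeps = {pkg: set() for pkg in graph}
    for pkg, deps in graph.items():
        for dep in deps:
            rdeps[dep].add(pkg)
    return rdeps

def rebuild_set(updated_lib, graph):
    """Everything that transitively links the updated library must be rebuilt."""
    rdeps = reverse_deps(graph)
    to_rebuild, queue = set(), deque([updated_lib])
    while queue:
        current = queue.popleft()
        for dependent in rdeps.get(current, ()):
            if dependent not in to_rebuild:
                to_rebuild.add(dependent)
                queue.append(dependent)
    return to_rebuild

if __name__ == "__main__":
    # A security fix lands in openssl: the distro rebuilds everything
    # that statically linked it, instead of waiting on each app vendor.
    print(sorted(rebuild_set("openssl", DEPENDS_ON)))
    # -> ['git', 'libcurl', 'some-cli-tool']
```

The point is that this only works when one party (the distro) holds the whole graph and all the sources; the every-app-is-an-island model has no equivalent single place to trigger the cascade from.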
Someone always makes this comment and it’s always wrong.
If you're going to say something so spicy, at least back it up with some reasons; otherwise you could have just said it out loud and saved everyone from reading it (blah blah HN guidelines blah blah).