
Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state. I think that's why Debian rejects language-specific package distribution. How bad would it suck if every Python app you installed needed to have its own venv for example? A distro might have hundreds of these applications. As a maintainer you need to try to support installing them all efficiently with as few conflicts as possible. A properly maintained global environment can do that.

Edit: I explained this lower down, but I also want to mention it here: static linkage of binaries is a huge burden and a waste of resources for a Linux distro. That's why they all tend to lean heavily on shared libraries unless it is too difficult to do so.


> Packaging a distribution efficiently requires sharing as many dependencies as possible, and ideally hosting as much of the stuff as possible in an immutable state.

I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view. Or, they could maintain immutability but sacrifice their package-level global view (but not metadata-level view) by having Debian Rust source packages contain their fully vendored dependency set.

The former would be a lot of work, especially given how manual the distribution packaging process is today. The latter seems more tractable, but requires distributions to readjust their approach to dependency tracking in ecosystems that fundamentally don't behave like C or C++ (Rust, Go, Python, etc.).

> How bad would it suck if every Python app you installed needed to have its own venv for example?

Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.
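
As a rough sketch of that per-tool layout (the tool name, install paths, and helper function here are hypothetical, not pipx's or Homebrew's actual internals): create one venv per application, install the app with that venv's pip, and expose only its console script on PATH.

    # Hypothetical sketch of the pipx/Homebrew pattern: one venv per tool,
    # with only the entry point exposed on PATH (POSIX layout assumed).
    import subprocess
    import venv
    from pathlib import Path

    def install_tool(name: str,
                     venvs: Path = Path.home() / ".local" / "venvs",
                     bin_dir: Path = Path.home() / ".local" / "bin") -> None:
        env_dir = venvs / name
        venv.create(env_dir, with_pip=True)                      # isolated environment
        subprocess.run([env_dir / "bin" / "pip", "install", name], check=True)
        bin_dir.mkdir(parents=True, exist_ok=True)
        link = bin_dir / name                                    # assumes script name == package name
        if not link.exists():
            link.symlink_to(env_dir / "bin" / name)
        # The tool's dependencies never touch any global site-packages.

    install_tool("black")   # example tool whose script name matches its package name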

> A properly maintained global environment can do that.

Agreed. The problem is the "properly maintained" part; I would argue that ignoring upstream semver constraints challenges the overall project :-)


> Debian could package every version variant ... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

That would work for distributions that just provide builds. But one major advantage of Debian is that it is committed to providing security fixes regardless of upstream availability, so they essentially stand in for maintainers. And maintaining many different versions instead of just the latest one is a lot of redundant work that nobody wants to do.


>... Or, they could maintain immutability ... by having Debian Rust source packages contain their fully vendored dependency set. The former would be a lot of work, especially given how manual the distribution packaging process is today.

That's not reasonable for library packages, because they may have to interact with each other. You're also proposing a scheme that would cause an explosion in resource usage when it comes to compilation and distribution of packages. Packages should be granular unless you are packaging something that is just too difficult to handle at a granular level, and you just want to get it over with. I don't even know if Debian accepts monolithic packages cobbled together that way. I suspect they do, but it certainly isn't ideal.

>And maintaining many different versions instead of just the latest one is a lot of redundant work that nobody wants to do.

When this is done, it is likely because updating is riskier and more work than maintaining a few old versions. Library authors that constantly break stuff for no good reason make this work much harder. Some of them only want to use bleeding edge features and have zero interest in supporting any stable version of anything. Package systems that let anyone publish easily lead to a proliferation of unstable dependencies like that. App authors don't necessarily know what trouble they're buying into with any given dependency choice.


>I don't think any of this precludes immutability: my understanding is Debian could package every version variant (or find common variants without violating semver) and maintain both immutability and their global view.

Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something. C and C++ shared libraries support coexistence of multiple versions via versioned soname and file-name schemes. I don't know if Rust packages are structured that well.

>Empirically, not that badly. It's what tools like `uv` and `pipx` do by default, and it results in a markedly better net user experience (since Python tools actually behave like hermetic tools, and not implicit modifiers of global resolution state). It's also what Homebrew does -- every packaged Python formula in Homebrew gets shipped in its own virtual environment.

These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants. If you want to support as many systems as possible, you need to be mindful of resource usage.

>I would argue that ignoring upstream semver constraints challenges the overall project :-)

Yes it's a horrible idea. "Let's programmatically add a ton of bugs and wait for victims to report the bugs back to us in the future" is what I'm reading. A policy like that can be exploited by malicious actors. At minimum they need to ship the correct required versions of everything, if they ship anything.


> Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something.

Ah yeah, this wouldn't work -- instead, Debian would need to bite the bullet on Rust preferring static linkage and accept that each package might have different interior dependencies (still static and known, just not globally consistent). This doesn't represent a conflict risk because of the static linkage, but it's very much against Debian's philosophy (as I understand it).

> I don't know if Rust packages are structured that well.

Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

> These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants.

I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).


>I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

Many software packages need root access but that is not what I was talking about. Distro users just want working software with minimal resource usage and incompatibilities.

>Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

Static linkage is deliberately avoided as much as possible by distros like Debian due to the additional overhead. It's overhead on the installation side and mega overhead on the server that has to host a download of essentially the same dependency many times for each installation when it could have instead been downloaded once.

>And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).

If the binary package is a thin wrapper around a venv, then you're right. But these packages are usually designed to share dependencies with other packages where possible. So, for example, if you had two packages installed that use some huge library, they only need one copy of that library between them. Updating the library only requires downloading a new version of the library. Updating the library if it is statically linked requires downloading it twice along with the other code it's linked with, potentially using many times the amount of resources on network and disk. Static linking is convenient sometimes but it isn't free.


OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.

>OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.

Let's see. On one hand, static linking requires more compile time, disk usage, bandwidth, and RAM. On the other hand, we have a different, slightly more involved linking scheme that saves on every hardware resource. It seems to me that static linking is rarely appropriate for most applications and systems.


Historically, the main reason why dynamic linking is even a thing is because RAM was too limited to run "heavy" software like, say, an X server.

This hasn't been true for decades now.


Other OSes got dynamic linking before it became mainstream on UNIX.

Plugins and OS extensions were also a reason why they came to be.


This is still true.

Static linking works fine because 99% of what you run is dynamically linked.

Try to statically link your distribution entirely and see how RAM usage and speed degrade :)


But there's a pretty significant diminishing return between, say, the top 80 most linked libraries and the rest.

Or using OS IPC processes for every single plugin on something heavy like a DAW.

RAM is still a limited resource. Bloated memory footprints hurt performance even if you technically have the RAM. The disk, bandwidth, and package builder CPU usage involved to statically link everything alone is enough reason not to do it, if possible.

> Then you have higher chances of conflicts, unless I'm missing something.

For python, you could install libraries into a versioned dir, then create a venv for each program, and in each venv/lib/pythonX/site-packages/libraryY dir just place symlinks to the appropriate versioned global copy.
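
Concretely, that scheme might look something like this (all paths, names, and the helper are hypothetical, and real packaging would also need the .dist-info metadata next to each symlink):

    # Hypothetical sketch: library versions live once in a global store,
    # and each program's venv symlinks to the version it needs.
    import sys
    import venv
    from pathlib import Path

    STORE = Path("/usr/lib/python-store")        # e.g. .../requests-2.31.0/requests

    def link_library(env_dir: Path, name: str, version: str) -> None:
        pyver = f"python{sys.version_info.major}.{sys.version_info.minor}"
        site_packages = env_dir / "lib" / pyver / "site-packages"
        target = STORE / f"{name}-{version}" / name
        (site_packages / name).symlink_to(target, target_is_directory=True)

    env = Path("/opt/someapp/venv")              # one venv per program
    venv.create(env)                             # no pip needed; deps come from the store
    link_library(env, "requests", "2.31.0")      # shared, versioned global copy

Updating a library would then mean updating one directory in the store and repointing symlinks, rather than re-downloading each application.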


Do you think that's user friendly?

I see it as more user friendly - instead of forgetting to activate the venv and having the program fail to run/be broken/act weird, you run the program and it activates the venv for you so you don't have that problem.

Do you think your software is so important that people will do all of that rather than use something better? (For example, your software but patched by a distribution to work easily, without all of that complication.)

I'm talking about a shim that distributions can use to launch python programs, some of which they distribute, rather than software I write. In particular, ML researchers aren't sysadmins, but a lot of their software is in the form of community python programs, which is to say not polished commercial apps with backing like, say, Adobe Photoshop, and this shim solves one of python's pitfalls for users.

That would make it difficult to tell at a system level what the exact installed dependencies of a program are. It would also require the distro to basically re-invent pip. Want to invoke one venv program from another one? Well, good luck figuring out conflicts in their environments which can be incompatible from the time they are installed. Now you're talking about a wrapper for each program just to load the right settings. This is not even an exhaustive list of all possible complications that are solved by having one global set of packages.

> my understanding is Debian could package every version variant

unlike pypi, debian patches CVEs, so having 3000 copies of the same vulnerability gets a bit complicated to manage.

Of course if you adopt the pypi/venv scheme where you just ignore them, it's all much simpler :)


This is incorrect on multiple levels:

* Comparing the two in this regard is a category error: Debian offers a curated index, and PyPI doesn't. Debian has a trusted set of packagers and package reviewers; PyPI is open to the public. They're fundamentally different models with different goals.

* PyPI does offer a security feed for packages[1], and there's an official tool[2] that will tell you when an installed version of a package is known to be vulnerable. But this doesn't give PyPI the ability to patch things for you; per above, that's something it fundamentally isn't meant to do.

[1]: https://docs.pypi.org/api/json/#known-vulnerabilities

[2]: https://pypi.org/project/pip-audit/
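
For illustration, a minimal lookup against that JSON API might look like this (package and version are arbitrary examples; pip-audit does this kind of query with proper resolution and caching on top):

    # Minimal sketch: ask PyPI's JSON API whether a specific release has
    # known vulnerabilities. Package/version here are arbitrary examples.
    import json
    import urllib.request

    def known_vulnerabilities(name: str, version: str) -> list:
        url = f"https://pypi.org/pypi/{name}/{version}/json"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return data.get("vulnerabilities", [])

    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln["id"], "fixed in", ", ".join(vuln.get("fixed_in", [])))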


It's a completely fair comparison if one is assessing which is more secure. The answer is completely straightforward.

One project patches/updates vulnerable software and makes sure everything else works, while the other puts all the effort on the user.


> How bad would it suck if every Python app you installed needed to have its own venv for example?

You mean…the way many modern Python apps install themselves? By setting up their own venv? Which is a sane and sensible thing to do, given Python's sub-par packaging experience.


Yes, I'm indirectly saying that the way many contemporary apps are managed sucks. Python's packaging experience is fine as far as tools in that category go. The trouble happens when packages are abandoned, or make assumptions that are bad for users. Even if everyone was a good package author, there would be inevitable conflicts.

The problem extends way beyond Python. This is why we have Docker, Snap, Flatpak, etc.: to work around inadequate maintenance and package conflicts without touching any code. These tools make it even easier for package authors to overlook bad habits. "Just use a venv bro" or "Just run it in Docker" is a common response to complaints.


I want to challenge this assumption that “the distro way is the good way” and anything else that people are doing is the “wrong” way.

I want to challenge it, because I’m beginning to be of the opinion that the “distro way” isn’t actually entirely suitable for a lot of software _anymore_.

The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed. Docker as a mechanism to ship stuff for users does suck, and it’s a workaround for sure, but I’m not convinced that all those developers are “doing things wrong”.


>The fact that running something via docker is easier for people than the distro way indicates that there are significant UX/usability issues that aren’t being addressed.

It's a lack of maintenance that isn't getting addressed. Either some set of dependencies isn't being updated, or the consumers of those dependencies aren't being updated to match changes in those dependencies. In some cases there is no actual work to do, and the container user is just too lazy to test anything newer. Why try to update when you can run the old stuff forever, ignoring even the slow upgrades that come with your Linux distro every few years?

Some people would insist that running old stuff forever in a container is not "doing things wrong" because it works. But most people need to update for security reasons, and to get new features. It would be better to update gradually, and I think containers discourage gradual improvements as they are used in most circumstances. Of course you can use the technology to speed up the work of testing many different configurations as well, so it's not all bad. I fear there are far more lazy developers out there than industrious ones however.


> How bad would it suck if every Python app you installed needed to have its own venv for example?

I would love to have that. Actually that's what I do: I avoid distribution software as much as possible and install it in venvs and similar ways.


Now tell your grandmother to install a software that way and report back with the results please.

Hahaha try asking your grandmother to install with apt and you get the same result.

I’d estimate most Unix distributions are used in one of 3 ways:

- a technical application maintained by a tech savvy admin or a team of them; this would primarily be server usage.

- desktop usage by a developer.

- a restricted installation on older hardware for a non-tech savvy person. Their user account probably shouldn’t be given permission to install new software, so none of this has any relevance to them.

Your average non-technical user certainly isn’t running Debian.


> Hahaha try asking your grandmother to install with apt and you get the same result.

Hahahaha surely you're being daft on purpose pretending you don't know there's a GUI to do this right? https://apps.kde.org/it/discover/

> Your average non-technical user certainly isn’t running Debian.

And you know this how? The same way you absolutely knew there's no GUI for apt? :D

Also just fyi, IT employees don't have unlimited time, so they'd rather use apt than whatever weird sequence of steps you devised to install your own application.


>How bad would it suck if every Python app you installed needed to have its own venv for example?

You just described every python3 project in 2024. Pretty much none are expected to work with the system Python. But your point still stands: it's not a good thing that there is no python, only pythons. And it's not a good thing that there is no rustc, only rustcs, etc., let alone trying to deal with cargo.


It’s not that they don’t work with the system Python, it’s that they don’t want to share the same global package namespace as the system Python. If you create a virtual environment with your system Python, it’ll work just fine.

(This is distinct from Rust, where there’s no global package namespace at all.)
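
You can see that distinction from inside any venv created with the system interpreter (the env name below is just an example): the base interpreter is still the system one, only the package search path is separate.

    # Run inside a venv made with e.g. `python3 -m venv myenv`.
    # The interpreter is the system Python; only site-packages differs.
    import site
    import sys

    print("venv prefix:       ", sys.prefix)         # .../myenv
    print("system interpreter:", sys.base_prefix)    # e.g. /usr
    print("package dirs:      ", site.getsitepackages())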


> How bad would it suck if every Python app you installed needed to have its own venv for example?

Yeah I hacked together a shim that searches the python program's path for a directory called venv and shoves that into sys.path. Haven't hacked together reusing venv subdirs like pnpm does for JavaScript, but that's on my list.
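
Presumably that shim looks something along these lines; this is a guess at the approach from the description, not the actual code, and every name in it is made up:

    # Hypothetical reconstruction: find a "venv" directory next to (or above)
    # the target script, put its site-packages on sys.path, then run the script.
    import runpy
    import sys
    from pathlib import Path

    script = Path(sys.argv[1]).resolve()
    for parent in [script.parent, *script.parent.parents]:
        matches = sorted((parent / "venv").glob("lib/python*/site-packages"))
        if matches:
            sys.path.insert(0, str(matches[0]))
            break

    sys.argv = sys.argv[1:]                  # make argv look normal to the program
    runpy.run_path(str(script), run_name="__main__")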



