> Debian is a binary-first distro so this would obligate them to produce probably 5x the binary packages for the same thing. Then you have higher chances of conflicts, unless I'm missing something.

Ah yeah, this wouldn't work -- instead, Debian would need to bite the bullet on Rust preferring static linkage and accept that each package might have different interior dependencies (still static and known, just not globally consistent). This doesn't represent a conflict risk because of the static linkage, but it's very much against Debian's philosophy (as I understand it).

> I don't know if Rust packages are structured that well.

Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.
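
For what it's worth, here's a minimal sketch of what that coexistence looks like (the crate and versions are purely illustrative): Cargo will happily resolve two semver-incompatible versions of one crate into the same binary.

```rust
// Sketch only: two major versions of the same crate, statically linked into
// one executable. Assumes a Cargo.toml along these lines:
//
//   [dependencies]
//   rand   = "0.8"
//   rand07 = { package = "rand", version = "0.7" }
//
fn main() {
    use rand::Rng as _;   // brings the 0.8 trait methods into scope
    use rand07::Rng as _; // brings the 0.7 trait methods into scope

    let new = rand::thread_rng().gen_range(0..10);    // 0.8 API (range argument)
    let old = rand07::thread_rng().gen_range(0, 10);  // 0.7 API (low, high arguments)
    println!("{new} {old}");
}
```

Each version is resolved and compiled independently, which is why there's no global "one version per system" constraint to coordinate, and also why the result ends up statically linked.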

> These are typically not used to install everything that goes into a whole desktop or server operating system. They're used to install a handful of applications that the user wants.

I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).


>I might not be understanding what you mean, but I don't think the user/machine distinction is super relevant in most deployments: in practice the server's software shouldn't be running as root anyways, so it doesn't matter much that it's installed in a user-held virtual environment.

Many software packages need root access, but that's not what I was talking about. Distro users just want working software, with minimal resource usage and as few incompatibilities as possible.

>Rust packages of different versions can gracefully coexist (they do already at the crate resolution level), but static linkage is the norm.

Static linkage is deliberately avoided as much as possible by distros like Debian because of the extra overhead. It's overhead on the installation side, and far more overhead on the mirrors, which end up serving essentially the same dependency once per package that embeds it, when it could have been downloaded once.

>And with respect to resource consumption: unless I'm missing something, I think the resource difference between installing a stack with `pip` and installing that same stack with `apt` should be pretty marginal -- installers will pay a linear cost for each new virtual environment, but I can't imagine that being a dealbreaker in most setups (already multiple venvs are atypical, and you'd have to be pretty constrained in terms of storage space to have issues with a few duplicate installs of `requests` or similar).

If the binary package is a thin wrapper around a venv, then you're right. But distro packages are usually designed to share dependencies with other packages where possible. For example, if you have two packages installed that both use some huge library, they only need one copy of that library between them, and updating it only means downloading the new version of the library itself. If the library is statically linked, the same update means downloading it twice, along with all the other code it's linked into, potentially using many times the network and disk resources. Static linking is convenient sometimes, but it isn't free.
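
To put rough numbers on that (a toy sketch; every figure below is made up purely for illustration):

```rust
// Back-of-the-envelope comparison of the download cost of one library update.
fn main() {
    let lib_mb = 5.0;      // size of the updated library
    let app_mb = 20.0;     // size of the rest of each application that links it
    let consumers = 2.0;   // installed packages that depend on the library

    // Shared (dynamic) case: one copy of the new library serves everyone.
    let shared_download = lib_mb;

    // Static case: each consumer ships its own embedded copy, so every one of
    // them has to be rebuilt and re-downloaded along with its own code.
    let static_download = consumers * (lib_mb + app_mb);

    println!("shared: {shared_download} MB, static: {static_download} MB"); // 5 vs 50
}
```

And that multiplier grows with every additional package that embeds the library.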


OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.

>OSX statically links everything and has for years. When there was a vulnerability in zlib they had to release 3GB of application updates to fix it in all of them. But you know what? It generally just works fine, and I'm not actually convinced they're making the wrong tradeoff.

Let's see. On one hand, static linking costs more compile time, disk usage, bandwidth, and RAM. On the other hand, we have a different, slightly more involved linking scheme that saves on every one of those hardware resources. It seems to me that static linking is rarely appropriate for most applications and systems.


Historically, the main reason dynamic linking is even a thing is that RAM was too limited to run "heavy" software like, say, an X server.

This hasn't been true for decades now.


Other OSes got dynamic linking before it became mainstream on UNIX.

Plugins and OS extensions were also a reason it came to be.


This is still true.

Static linking works fine because 99% of what you run is dynamically linked.

Try statically linking your entire distribution and see how RAM usage and speed degrade :)
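
A rough way to see the per-binary half of that on Linux (a sketch; it assumes the musl target is installed, and the exact numbers will vary):

```rust
// The same trivial program, built two ways:
//
//   dynamic (links the system libc.so):
//     cargo build --release
//   fully static (embeds musl libc):
//     cargo build --release --target x86_64-unknown-linux-musl
//
// `ldd` on the dynamic binary lists shared objects whose pages are shared by
// every process that uses them; on the static one it reports no dynamic
// dependencies, so each such binary maps its own private copy of that code.
fn main() {
    println!("hello");
}
```

Scale that private copy up to every binary on the system and you get the kind of RAM and cache pressure being described.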


But there's a pretty significant diminishing return once you get past, say, the top 80 most-linked libraries.

Or using separate OS processes and IPC for every single plugin in something heavy like a DAW.

RAM is still a limited resource. Bloated memory footprints hurt performance even if you technically have the RAM. The disk, bandwidth, and package-builder CPU usage involved in statically linking everything is reason enough on its own not to do it, where possible.


