
There's definitely value in the static approach in some cases, but there are some downsides, e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.

If you require a library, you can specify it as a dependency in your dpkg/pacman/portage/whatever manifest and the system should take care of making it available. You shouldn't need to write custom scripts that trawl around for the library. Another approach could be to give your users a "make install" that sticks the libraries somewhere in /opt and appends that directory as the lowest-priority LD_LIBRARY_PATH entry, as a last resort, maybe?
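For concreteness, the two options might look roughly like this. The package name mytool, the version constraints, and the /opt layout are all made up for the sketch:

    # debian/control (dpkg) -- declare the runtime libraries you need;
    # the package manager then makes sure they're present:
    Package: mytool
    Depends: libxml2 (>= 2.9), libc6 (>= 2.17)

    # /opt fallback -- a wrapper script that "make install" could drop in:
    #!/bin/sh
    # append the bundled lib dir after any existing LD_LIBRARY_PATH entries
    # (LD_LIBRARY_PATH is still searched before the default system dirs)
    LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/opt/mytool/lib"
    export LD_LIBRARY_PATH
    exec /opt/mytool/bin/mytool "$@"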






> e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.

This was the biggest pain point in deploying *application software* on Linux, though. Distributions with different release cycles provide different versions of various libraries and expect your program to work with all of those combinations. The big famous libraries like Qt and GTK might follow proper versioning, but for the smaller libraries in distro packages there's no guarantee. Half of them don't even use semantic versioning.

Imagine distros swapping out the libraries you've actually tested your code against with their own builds, for "security fixes" or whatever the reason. That causes more problems than it fixes.

The custom startup script was there to find the same XML library I'd used, shipped in the tarball I packaged the application in. Users could then extract that tarball wherever they needed, including /opt, and run the script to start my application, and it ran as it should. IIRC we even used rpath for this at one point.
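For anyone who hasn't used it, the rpath variant is roughly a one-liner at link time. Paths here are made up; the point is that $ORIGIN expands at runtime to the directory containing the executable, so the lookup works wherever the tarball gets extracted:

    # embed an rpath relative to the executable itself, so the bundled copy
    # of the XML library in ../lib is found no matter where the tarball is
    # unpacked (including /opt); quote $ORIGIN so the shell doesn't expand it
    gcc -o bin/myapp main.o -Llib -lxml2 -Wl,-rpath,'$ORIGIN/../lib'

You can check the embedded RPATH/RUNPATH entry afterwards with readelf -d bin/myapp.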


> Half of them don't even use semantic versioning.

This is a red herring. Distros existed before semantic versioning was defined and had to deal with those issues for ages. When packaging, you check for the behaviour changes in the package and its dependencies. The version numbers are a tiny indicator, but mostly meaningless.


I think semantic versioning actually predates distributions. It just was not called "semantic versioning." It was called Unix shared library versioning.
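For readers who haven't seen it: the compatibility contract lives in the soname, and the major number in it plays roughly the role of semver's major. A sketch with a made-up libfoo:

    # the real file carries the full version; the soname names the ABI series
    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.3 foo.o
    ln -sf libfoo.so.1.2.3 libfoo.so.1   # runtime link, followed by the dynamic loader
    ln -sf libfoo.so.1 libfoo.so         # dev link, used by -lfoo at build time

Binaries record the soname (libfoo.so.1) as their dependency, so compatible releases just replace the file behind that symlink, while an incompatible release ships as libfoo.so.2 and old binaries keep loading the old major.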

Imagine a world where every library and every package has their release date as their version. You'd instantly know which software lacks maintenance or updates (bitrot).

To me it seems more attractive than how Nix does it, but I guess they considered that and ran into conflicts, so they went with hashes instead.


Would you also instantly know whether 20250110 is a drop-in replacement for 20240930, or whether it will require changes in your code to make it work?

IIRC GNU Parallel versions by date.

Recently, in the Python ecosystem, the `uv` package manager lets you install a package as it was on a certain date. Additionally, you can "freeze" your dependencies to a certain date in pyproject.toml, so when someone clones the project and installs dependencies, they get them as of the date you chose to freeze at.
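If I'm remembering the knob correctly, that's uv's exclude-newer setting; something like this in pyproject.toml (the date is just an example, and the exact option name and format are worth checking against the uv docs):

    [tool.uv]
    # resolve dependencies as if the package index had been frozen at this point in time
    exclude-newer = "2024-09-30T00:00:00Z"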

Personally I love this method much more than versioning.

I think versioning is mostly useful for talking about software and flagging really major additions/changes, e.g. io_uring shipping in mainline Linux.


On the opposite side of the world in Gentoo, we compile updates to libraries and applications together on a rolling basis all the time, and it generally works out while letting us have everything as bleeding edge as we want it.

There's software that is done and doesn't need constant updates.

Yes, but if statically linked, this excludes all software that relies on security-relevant libraries (e.g. cryptography) or receives data from the network. I struggle to think of a lot of software that would qualify beyond coreutils and friends.

> When packaging, you check for the behaviour changes in the package and its dependencies

Yeah, but the package maintainer for a widely used library doesn't actually have the resources to do this. (Heck, a package maintainer for a non-trivial application likely doesn't have the resources to do this.) Basically they update and hope to get some bug reports from users.


This is never ever a problem unless a developer insists on always using the most cutting-edge version of a library. There's no law that says you have to use the bleeding edge of every library when you make a program. Another issue these days is that library maintainers often add new features or delete old features without incrementing the major version number. In the olden days it was assumed that minor versions were for bug fixes that don't break compatibility, and when you wanted to change how the library works in a major way, you incremented the major number.

Now a lot of stuff is continuously buggified, so there is no concept of stable vs. in-progress.


I often refer to semantic versioning as "semanticless versioning". Everyone disagrees about what constitutes a change warranting each version number to be increased.

The fun part is that this is actually true: for different use cases of the same library, the same change might mean something different.

So it's complicated, there's no single solution for every context, and we end up using the best approximation.


> Imagine distros swapping out the libraries you've actually tested your code against with their own builds, for "security fixes" or whatever the reason. That causes more problems than it fixes.

I don't believe that it causes more problems than it fixes. It's just that you didn't notice the problems being silently fixed!

There are issues related to different distros packaging different versions of libraries. But that's just an issue with trying to support different distros and/or their updates. There are tradeoffs with everything. Dynamic linking is more appropriate for things that are part of a distro, because it creates less turnover of packages when things update.


"Free bug fixes without compiling". I think YMMV.

It depends a lot on ABI/API stability and actual modularity of ... components. There's not always a guarantee of that.

Shared libraries add a lot of complexity to a system, on the assumption that people can actually build modular code well in any language that can produce a shared library. Sometimes you have to recompile because, while a #define might still exist, its value may have changed between versions, and chaos can ensue - including new, unexpected bugs - for free!
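A contrived C sketch of that failure mode, with a hypothetical libfoo header. The point is that the constant's value gets baked into the application at compile time, not resolved at load time:

    /* foo.h as shipped with v1 of the library */
    #define FOO_NAME_MAX 64

    struct foo_item {
        char name[FOO_NAME_MAX];  /* the size is frozen into the app's object code */
        int  id;
    };

    /* The app was compiled against v1, so it passes 68-byte items. If v2 of
     * the shared library quietly changes FOO_NAME_MAX to 128 and walks the
     * array in 132-byte strides, the "drop-in" upgrade corrupts memory:
     * new, unexpected bugs -- for free, no recompile needed. */
    void foo_fill(struct foo_item *items, int count);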


My current day job has probably 60 apps that depend on one shared library.

Static linking has its place, no doubt, but it should not be the norm.


60 programs y'all maintain as part of an overall larger solution or 60 randomly selected apps maintained fully by 3rd parties that just happen to share the same major library version?

The former often makes a lot of sense in terms of deployment and maintenance - it's all really one big system, so why rebuild and deploy 60 parts to change one when it's all built by you anyway? I'm not sure that's the use case every build environment should assume by default, but shared libraries make a ton of sense there regardless.

60 programs maintained and built by 3rd parties which just happen to share the same major version of a library (other than something like the stdlib, obviously) would seem nuts to manage in a single runtime environment (regardless of static or shared), though!


> 60 programs y'all maintain as part of an overall larger solution

That, 100% is our scenario.


To me, the worse situation is when you just want to install a little tool to do a 2-minute job, and apt decides it needs to pull in 60 dependencies. You spend more time fetching and installing all these crappy little libraries, then uninstalling them all later, than you do running the tool.




