How does an article on NixOS talk about the `rpath` issue without also mentioning the `patchelf` utility that NixOS developers created to solve this issue? It's a small tool that lets you modify ELF executables and shared libraries. It's also the recommended way for NixOS users to modify binaries to work properly.
`patchelf` solves a different problem: it fixes up an already built binary. Here, I am building the binary myself, so I’d rather make that just work without any extra build steps.
Ah, my bad. I was confused because I never run into linker issues when I build my Rust (or any other) binaries on a NixOS system.
In fact, I am running the `evdev` example and I don't get any linker errors at all even when I change the linker to LLVM. I am using a nightly version of Rust, though.
Strongly concur. Patchelf is indispensable for beating sense into third-party closed-source shared objects. The `--set-soname` option is particularly useful in correcting all kinds of ineptitude.
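For anyone who hasn't used it, here's roughly what that surgery looks like (the binary and library names are made up, but the flags are real patchelf options):

```
# Inspect what the binary currently records
patchelf --print-interpreter ./some-binary
patchelf --print-rpath ./some-binary

# Point it at the libraries you actually want it to load
patchelf --set-rpath /opt/vendor/lib ./some-binary

# Fix a badly chosen SONAME in a third-party shared object
patchelf --set-soname libfoo.so.1 ./libfoo.so
```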
I'm pretty sure NixOS isn't the only one doing this hack. The Yocto build system does something similar. It contains its own build-time binary glibc, and patches its tools to point to its own internal library installation. Or something like that. In effect, Yocto has its own build-time distro, which has to run from any filesystem location.
> I'm pretty sure NixOS isn't the only one doing this hack
When developing ArchMac I had to do godawful hacks to bog-standard libs because whatever build system decided to hardcode a lib path (or forcefully strip one when it should be hardcoded, I've had to handle both) that I had to manipulate through various means including install_name_tool which is not that different from patchelf†.
This kind of issue was not macOS specific, it just turns out the various ways things were built happened to gracefully "work" on most Linux distros by sheer luck but they could have been equally broken.
† Not really a surprise when thinking about it, the concept of Nix derivations is not that different from the concept of Darwin bundles/frameworks (in terms of being a self-contained dependency package) so it's only natural similar issues, and thus approaches and tools to tackle them, emerged.
Scripts and environment variables are an ugly hack. Environment variables are dynamically scoped and will cause problems unless you take care to keep them from being inherited by child processes.
Suppose there are two installations of app. One is the system one, and one is locally installed by the user. The user overrides LD_LIBRARY_PATH when invoking the local app. Suppose that that app is used in such a way that it invokes the system-installed app; that could then find the wrong libraries due to the LD_LIBRARY_PATH being inherited.
A program must simply know where its exact pieces are, all by itself, without any external tricks that could influence more than just that program.
Search paths (all of them, including PATH) should be left to the user, for arranging the system; the user should be able to manipulate paths in arbitrary ways, yet the application shouldn't break as far as being able to locate and load its own pieces.
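A concrete sketch of the two-installation scenario (paths are hypothetical); ldd honours LD_LIBRARY_PATH, so you can watch the system binary picking up the user's libraries:

```
# User launches their local build with an override...
LD_LIBRARY_PATH="$HOME/local/lib" "$HOME/local/bin/app"

# ...but the variable is inherited by every child process, so if that app
# ever invokes the system copy, the system copy resolves libraries the same way:
LD_LIBRARY_PATH="$HOME/local/lib" ldd /usr/bin/app | grep libfoo
```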
I'm super interested in trying out mold, but I was a bit taken aback to see that it's AGPL licensed; I have no idea what the implications of that license for a linker would be. Would using it to link a final binary require sharing the source of the binary when it's distributed? What about if it's used to link statically into a library, and then that library is linked into another binary?
You're running the tool, not embedding or linking against its source code. An equivalent example might be if LibreOffice were under the GPL licence[1] - that licence wouldn't have any implications for a spreadsheet you created using it.
Because I am stubborn, when I am linking with a library installed in a nonstandard place, I usually try to get the configure script to do the right thing, even though it is not always easy. But, just in case I lose the battle, I keep in mind the existence of chrpath, a little utility for changing the rpath in an ELF binary. Because you use it on the final build artefact, there is no way for autoconf or libtool to screw it up.
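Something like this, with a made-up binary and path; note that chrpath can only rewrite an existing rpath entry (and not make it longer), whereas patchelf can add one from scratch:

```
# Show the rpath autoconf/libtool baked into the final artefact
chrpath -l ./myprog

# Replace it with the directory where the nonstandard library actually lives
chrpath -r /opt/weirdlib/lib ./myprog
```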
Pretty sure that runpath and rpath are distinct and have slightly distinct behavior. Can't fault you much for making the mistake, though. The two weren't given names that make them easy to distinguish.
RPATH and RUNPATH are indeed different. However, I believe the -rpath flag is used to set either (--enable/disable-new-dtags determines which one is used), so the confusion is understandable.
glibc: rpaths are inherited: When exe depends on libx depends on liby, then liby first considers its own rpaths, then libx's rpaths, then exe's rpaths. HOWEVER if liby specifies runpath, it will not consider rpaths from parents.
musl: rpaths and runpaths are the same and always inherited.
There's also a difference for transitive dependencies. From the Linux manpage for ld.so, about RUNPATH:
> Such directories are searched only to find those objects required by DT_NEEDED (direct dependencies) entries and do not apply to those objects' children, which must themselves have their own DT_RUNPATH entries. This is unlike DT_RPATH, which is applied to searches for all children in the dependency tree.
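A quick way to see which tag you actually get (library name and paths here are placeholders):

```
# Same -rpath value; the dtags switch decides which dynamic tag is emitted
gcc main.c -o main -L/opt/foo/lib -lfoo -Wl,-rpath,/opt/foo/lib -Wl,--disable-new-dtags   # DT_RPATH
gcc main.c -o main -L/opt/foo/lib -lfoo -Wl,-rpath,/opt/foo/lib -Wl,--enable-new-dtags    # DT_RUNPATH

# Inspect which one ended up in the binary
readelf -d main | grep -E 'RPATH|RUNPATH'
```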
When I want to install those to /opt/something, the only thing I need to do is `install_name_tool -add_rpath /opt/something`.
This will add search directories to the binary itself. There are some DYLD_* environment variables too, but I'm not sure about them... (Some are SIP protected, by the way.)
PS: It may invalidate signed binaries. Again, not tested such use cases.
A library itself decides if it is relocatable or fixed. If it is fixed the MH_DYLIB records its install name as /path/to/binary (generally by setting DYLIB_INSTALL_NAME_BASE so xcodebuild will merge that with the library name automatically). The binary must be at that path. However this can (and often is) a symlink just like other systems use where /usr/lib/somelib.dylib -> /usr/lib/somelib.1.3.dylib so that minor version updates can be made without rebuilding programs.
If a library wants to be relocatable it specifies an install name of @rpath/binary.
At runtime dyld creates a "run path list". Every time it encounters a load command with an @rpath name it tries substituting paths from the run path list until it finds the library. The main binary along with any dependencies can add entries to the run path list. These can be absolute paths or relative paths anchored from @executable_path or @loader_path. The former being the main binary and the latter being the path to the binary itself (eg if the main app loads a plugin the plugin can reference dependencies relative to the main app or itself as needed).
You can push your own paths in the mix with DYLD_LIBRARY_PATH (searched first) or DYLD_FALLBACK_LIBRARY_PATH (searched last). Check "man dyld" and "man ld" if you want more details.
None of the above requires modifying binaries and so doesn't invalidate code signatures. If you want to use install_name_tool on binaries you build pass "-headerpad_max_install_names" to ld so it will pad out the load commands which makes it easier to edit them.
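For reference, a rough sketch of that workflow (the app and library names are invented):

```
# Link with an @executable_path-relative run path and padded load commands
clang main.o -o MyApp -L. -lfoo \
  -Wl,-rpath,@executable_path/../Frameworks \
  -Wl,-headerpad_max_install_names

# Inspect install names and LC_RPATH entries
otool -L MyApp
otool -l MyApp | grep -A2 LC_RPATH

# Add another run-path entry later if needed (this one does edit the binary)
install_name_tool -add_rpath /opt/something MyApp
```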
There have been a bunch of security vulnerabilities around the Windows strategy of auto-loading any dependency from the same directory as the binary so YMMV.
> You can try to do this with rpath of $ORIGIN. But then you'll probably still run into libc issues -.-
Linux (g)libc is sort of equivalent to the Win32 API on Windows, so you are not expected to ship your own version, just like you don't ship your own ntdll, user32, etc. Since glibc has good backwards compatibility, the only libc issue you will run into is having to compile against the oldest version you want to support.
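The $ORIGIN approach, sketched with made-up paths; the quoting matters so that the literal string $ORIGIN reaches the linker rather than being expanded by the shell (in a Makefile you'd write $$ORIGIN):

```
# Layout shipped to users:
#   myapp/bin/game
#   myapp/lib/libfoo.so.1
gcc game.o -o myapp/bin/game -Lmyapp/lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'

readelf -d myapp/bin/game | grep -E 'RUNPATH|RPATH|NEEDED'
```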
And the irony? It was someone on Unix who came up with the idea that when foo.c contains #include "bar.h", the same directory where foo.c resides will be searched for that header first, by default, before other places.
The idea of doing that for the DLL search in Windows could have been inspired by C.
So you are saying this is why Windows users universally complain about "DLL Hell", and Linux users don't? Or is this how MS finally fixed Windows DLL Hell? (Presuming Windows DLL Hell is, indeed, fixed; I wouldn't know.)
DLL hell is about program installers depositing some common libraries, like the MSVC redistributable run-time, into the system folder. So then every other application is using the most recently clobbered version of it instead of the one it came with.
GNU/Linux doesn't have DLL hell only to the extent that there is an entire binary distro with maintainers beavering to keep all of the dependencies straight so that every program that needs a certain shared library is maintained to need the same version of it as any other program.
You will experience shared library hell as soon as you have your own binary application that is not in the upstream distro, and it happens to depend on one of the lesser libraries that do not do symbol versioning like openssl, libbz2 and whatnot.
I've dealt with this plenty in more than one Linux embedded dayjob. In one case, I hacked an elaborate library searching system around dlopen() into such a program.
That's a minimal issue for games. You ship all dependencies with a start script that points LD_LIBRARY_PATH where you want and everything's fine. Or maybe even ship a flatpak. Issues for games come from other places.
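The typical launcher amounts to a few lines; a sketch, assuming the game ships its libraries in a lib/ directory next to the binary:

```
#!/bin/sh
# start.sh, shipped at the top of the game directory
HERE="$(cd "$(dirname "$0")" && pwd)"
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/game" "$@"
```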
If your program is a binary executable, and you have to script around its startup, that is ugly and unprofessional; it looks very bad for the underlying OS and its basic userland infrastructure that you have to do anything like that.
Sure, but it's the standard way to use install_name_tool on macOS to do precisely what you described. The original commenter apparently is not aware of this.
I've been admiring NixOS from afar, but given the bending users have to do in this particular use case, shouldn't NixOS present the libraries in these shell environments the author creates in a transparent and seamless way, so that "standard" tools like ld/ldd can find them? Shouldn't it be on the shoulders of NixOS to make this work, rather than requiring users to patch tools or resort to hacks?
NixOS changes some standard conventions of Linux filesystem layout and where tools can expect to find things. These are for good reasons and are due to the core of what NixOS is trying to achieve. For building most things those changes are relatively abstracted away (see ld wrapper script for details) and you don't have to know about them. Fiddling around with something low level like a linker is I think a forgivable situation where the abstraction leaks - it is a process that is inextricably linked to where the system puts things.
My issue is the last sentence:
So… turns out there’s more than one lld on NixOS. There’s pkgs.lld, the thing I have been using in the post. And then there’s the pkgs.llvmPackages.bintools package, which also contains lld. And that version is actually wrapped into an rpath-setting shell script, the same way ld is.
That means that there isn't a problem. NixOS has fixed this, the system works. Except that you have to magically know which package you should be using. This is the sort of problem that I run into with Nix - it's hard to know the correct incantation.
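If I've understood the post correctly, the incantation is to pull in the wrapped bintools rather than bare lld, something like the following (the exact binary name exposed by the wrapper is my assumption):

```
# Use the wrapped lld from llvmPackages.bintools instead of pkgs.lld
nix-shell -p llvmPackages.bintools --run 'ld.lld --version'
```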
Yes, you need to set the RPATH correctly when building ELF executables and shared objects. You would only not know this if you were only ever building things to install into /usr with dependencies on things in /usr.
> Curious observation: dynamic linking on NixOS is not entirely dynamic. Because executables expect to find shared libraries in specific locations marked with hashes of the libraries themselves, it’s not possible to just upgrade .so on disk for all the binaries to pick it up.
Dynamic linking is not just about being able to upgrade without re-linking. Dynamic linking is not even primarily about that, not anymore, if it ever was.
Dynamic linking is more than anything about semantics that no one has bothered to add to static linking!
Static linking for C is stuck in the 1970s.
Dynamic linking for C makes C more like C+ -- a different language.
Specifically:
- with static linking symbol conflicts are a serious problem
- with dynamic linking symbol conflicts need not be a problem because with direct binding (Illumos) or versioned symbols (GNU), you get to resolve the bindings correctly at build-time and have them resolve correctly at run-time
- at build time you get to list just direct dependencies, and the linker does the rest -- compare to static linking, where you have to list all dependencies only in the final link-edit and then you must flatten the dependency list into some order, and then if there are conflicts, you lose.
For all those who keep harping on how static linking is better than dynamic linking, what I would suggest is that what must be done to make static linking not suck is to enrich .a files with the kinds of metadata that ELF adds to shared objects so we can get the same "list only direct dependencies" semantics when static linking as when dynamic linking. And I would note that libtool does this, just... very poorly.
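To make the flattening problem concrete (hypothetical archives, with libfoo.a calling into libbar.a; the failure described is that of a classic single-pass linker scanning archives left to right):

```
# Static: every archive must appear in the final link, in dependency order
cc main.o libfoo.a libbar.a -o prog     # works
cc main.o libbar.a libfoo.a -o prog     # undefined references: libbar.a was scanned
                                        # before anything needed its symbols

# Dynamic: each .so carries its own DT_NEEDED, so only direct deps are named
cc main.o -L. -lfoo -o prog             # libbar.so comes along via libfoo.so's DT_NEEDED
```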
What I would do to fix static linking:
- have `ld` add a .o to every .a that includes the `-L`/`-R`/`-l` arguments given when constructing the .a (normally one does not do this when linking statically!)
- have `ld` look in every .a found when doing a final link-edit to recursively find its dependencies, and, most importantly,
- provide the same direct binding / versioned symbol semantics as in dynamic linking so that external symbols in dependents are resolved to the correct dependencies in the same way as in dynamic linking.
Notionally this is quite simple. But adding this to the various ld implementations would probably be rather a lot of work. Still, if people insist on static linking, this work should be done.
libtool is a build tool made primarily to make your life hell if you do anything that the libtool authors did not plan for. Like who thought it would be a good idea to silently drop unknown linker-driver flags.
It writes metadata into .la files, which are text-based adjuncts to .a files. It's meant to give you a common and portable interface to static and dynamic linking.
But libtool is written in POSIX shell, it's not part of the linker, and it is a bit of a disaster.
> For all those who keep harping on how static linking is better than dynamic linking, what I would suggest is that what must be done to make static linking not suck is to enrich .a files with the kinds of metadata that ELF adds to shared objects so we can get the same "list only direct dependencies" semantics when static linking as when dynamic linking.
It can't. The problem is that .a files do not record their direct dependencies, unlike ELF objects, and instead depend on the final link-edit having the full tree of dependencies provided, but flattened into a list. That flattening loses critical information needed to correctly resolve conflicting symbols.
It absolutely does. When you use cmake, if you link against the target foo which is a static library and itself was marked as linking to bar, then your final executable will have -lfoo -lbar.
Of course this information isn't stored in the .a files, but in cmake's FooConfig.cmake files - who cares, as long as it works? The vast majority of libraries now have those even if they don't use cmake as a build system, as they are fairly easy to generate.
Those '*.cmake' files only work with CMake, which is an issue if you try using any other build system. There is also pkg-config and the `*.pc` files, which are doing essentially the same job without depending on a build system. But it's really all just patchwork, the files are frequently missing and the whole setup is extremely brittle if you hop between OSs, different library versions or just different CMake versions (e.g. '*.cmake' files exporting different targets and variables). That CMake has to provide those files themselves for a lot of libraries (all those not using cmake) is another issue.
> Those '*.cmake' files only work with CMake, which is an issue if you try using any other build system.
At least meson can parse those. Making every other build system in existence add support for those is definitely less work than changing the format of .a and will yield exactly the same result (not that it matters much, cmake being the standard c/c++ build system for years now)
Also, .pc files are nearly nonexistent on the most used desktop OS.
> When you use cmake, if you link against the target foo which is a static library and itself was marked as linking to bar, then your final executable will have -lfoo -lbar.
And that is bad. `-lfoo -lbar` loses important information. It is the linker-editor that needs to have this, and not just the build tooling layered above it. libtool, cmake -- it doesn't matter, they're all broken for static linking as long as static linking is broken in this way.
> Of course this information isn't stored in the .a files, but in cmake's FooConfig.cmake files - who cares as long as it works ?
If the only way to make it work is to adopt a particular build system, then no thanks. But again, it can't actually work -- it can't solve the problems that are in the link-editor itself.
> they're all broken for static linking as long as static linking is broken in this way.
The main app I work on links against
- Qt
- LLVM
- libclang
- ffmpeg
- my app which is itself ~40 libs
- a dozen others
which combined account for multiple hundreds of static libraries on Linux, Mac and Windows, and things just work. I could either bang my head against theoretical link-time problems which I don't experience, or make things better for my end-users and just target_link_libraries(myapp Qt::Core) - what do you think is the reasonable option?
Linux usually (by convention) provides 1 file and 2 symlinks per lib: liba.so -> liba.so.x -> liba.so.j.k.l.
The first one is to make the linker (ld) happy: -la will look for liba.so. The linker puts the SONAME (liba.so.x) in DT_NEEDED.
The second symlink's filename corresponds to the SONAME, so that the runtime linker (ld.so) can locate the library by SONAME in rpaths.
The third one is the actual library, which can be updated while keeping the same soname & same abi.
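Concretely, with an invented library (the SONAME is what the link editor copies into a consumer's DT_NEEDED):

```
ls -l /usr/lib/liba*
# liba.so       -> liba.so.2        # for the link editor: -la resolves to this
# liba.so.2     -> liba.so.2.4.1    # filename equals the SONAME, for ld.so
# liba.so.2.4.1                     # the actual library

readelf -d /usr/lib/liba.so.2.4.1 | grep SONAME   # Library soname: [liba.so.2]
readelf -d ./myprog | grep NEEDED                 # Shared library: [liba.so.2]
```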
Now, it would be great if the linker had an option to not only copy the SONAME into DT_NEEDED, but also register the path in which the library was located as an rpath.
Cause the situation on Linux is absurd! You pass some flags -L and -l to the compiler/linker, the linker links something and nobody knows what. Then when you run your executable it has to locate this something again, and you can only pray that your libc and binutils/llvm agree on search paths & order. In most cases this does not work, and you must manually pass -Wl,-rpath,/some/path to add a search path. Nobody guarantees that what the linker links is what the runtime linker uses.
Of course there are many edge cases:
- linking during make without relinking during make install will make your executables register rpaths to build directories instead of install dirs
- sometimes you link to a stub lib that should not be used at runtime
But still, some guarantee that what you build with is what you run with would be a major user experience improvement for Linux.
> Cause the situation on Linux is absurd! You pass some flags -L and -l to the compiler/linker, the linker links something and nobody knows what. Then when you run your executable it has to locate this something again, and you can only pray that your libc and binutils/llvm agree on search paths & order. In most cases this does not work, and you must manually pass -Wl,-rpath,/some/path to add a search path. Nobody guarantees that what the linker links is what the runtime linker uses.
The issue you're missing is that the build directory is usually not the location of the final binary objects. The actual absolute path of libfoo.so may well be in /builds/runner/foo-package-4df78af0/build/prefix/lib/libfoo.so, which is unlikely to exist on anyone other than the CI's machine (and even on the CI machine itself for too much longer). The actual location will usually be /usr/lib64/libfoo.so, but the library that is linked against may well not be there at the time of linking (particularly in the case where a package is building both a library and an executable that depends on said library in the same package).
What you really want is for the relative path to the library to stored in the executable. Unless what you want is to actually use the globally-installed library and not one you're building at the same time. There's no single solution that fits every use case!
> The actual location will usually be /usr/lib64/libfoo.so
Usually indeed, for distros that pretty much support one single version of every library. But this is no longer true for Nix, Spack, Gentoo Prefix and Guix; all these package managers/distros have in common that there should be no default search paths where all libraries are dumped into.
How about `--copy-link-path-as-rpath` and `--copy-link-path-as-rpath-ignore=/build/dir`, so that ld continues to copy the soname to dt_needed, and registers rpath of non-build dirs. Then Nix, Spack, ... can simply use these flags in their linker wrapper.
I think most people designing build systems (I'm one of them) would prefer to explicitly set the paths at each point rather than have gcc ferry values between inputs behind the scenes.
My build system will have already resolved all these paths. It's very easy to interpolate these paths into the command to call the compiler.
Of course you can't guarantee that the library will get installed to the directory you think it'll get installed to, since there's no unified installer system for all Linux distributions. So even a relative path doesn't necessarily work. The best you can do is hope that the Freedesktop filesystem hierarchy is being followed, but that forces installing software for all users at once instead of per user, despite Linux supposedly being a multiuser OS.
Your complaints are reasonable, but gcc already does this. Here, I'll show you how:
> You pass some flags -L and -l to the compiler/linker, the linker links something and nobody knows what.
I agree, it would be really nice to be able to specify exact shared object paths instead of using -L and -l. Build systems typically already know the full paths to all the objects, and the abstraction here is often unhelpful.
This could be remedied fairly easily by allowing (for example) -l to take an absolute path to an object rather than searching -L paths. But gcc already does this - you can just put the shared object on the command line directly like so:
Change this: `gcc -lfoo bar.c`
Into this: `gcc bar.c /path/to/foo.so`
The effect is the same, but more explicit. foo.so is linked and its SONAME is recorded in the binary's DT_NEEDED list.
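And either spelling ends up recording the library's SONAME rather than the path you happened to link against, assuming the library carries one (the paths and the libfoo.so.1 SONAME are illustrative):

```
gcc bar.c -o bar -L/opt/foo/lib -lfoo
gcc bar.c -o bar /opt/foo/lib/libfoo.so

readelf -d bar | grep NEEDED   # Shared library: [libfoo.so.1] in both cases
```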
> Nobody guarantees that what the linker links is what the runtime linker uses.
This part, however, is by design. We explicitly do NOT want what the linker links to be what the runtime uses. This is how we update shared objects between minor versions to fix bugs without reinstalling every binary on the entire system!
Letting libfoo.so.1 link to libfoo.so.1.x is a huge feature. Locking in an explicit minor version would defeat the entire purpose of dynamic linking.
> Letting libfoo.so.1 link to libfoo.so.1.x is a huge feature. Locking in an explicit minor version would defeat the entire purpose of dynamic linking.
My suggestion is to continue copying the SONAME into DT_NEEDED, and record the dir the lib was found in as an rpath.
Well, you complained about ambiguity in the search process during build, and referencing by path is how to fix that.
Regarding runtime, we absolutely do not want to implicitly embed build paths. As others have said because it's unlikely we will put them in the same place, and it's unlikely we will build and run on the same systems.
The runtime library management system is going to have a structure for where to place libraries. It may be as simple as tossing them in /usr/lib, or it may be something where we have different paths for each application. We can't do this if the compiler implicitly dictates universal linker paths.
One aspect I think you may also be missing is that shared objects themselves have DT_NEEDED and rpaths. You would quickly run into very confusing conflicts between binaries built on different systems, or with different build environments.
It's hard to see a problem here, since adding an rpath is very easy. You appear to be asking for implicit, hidden behavior in the compiler which doesn't fit the vast majority of use cases.
rpath is braindamaged; no program should be using that, and a distro build should almost never be inserting such a thing into executables.
I suspect that NixOS is playing with this in order to have a relocatable install: so that is to say, so that user can install NixOS in some subdirectory of a system running some existing distro. Any subdirectory, yet so that programs can find their libraries.
If I were in this predicament, rather than perpetrating hacks to patch the rpaths in binaries, I'd fix the dynamic linker to have a better way of locating shared libraries. The linker would determine the path from which the executable is being run, calculate the sysroot location dynamically, then look for libraries in that tree. E.g. /path/to/usr/bin/program would look under /path/to/usr/lib and related places.
A possibly nice hack would be to extend the meaning of the rpath variable. Give it a syntax, like say that if it starts with @, then the rest of it denotes a relative sysroot path fragment.
E.g. the program that gets installed as /path/to/usr/bin/program would be built with an rpath of "@/usr/bin". So then the dynamic linker sees the @ and does a sysroot calculation. First it strips off the basename to get just the directory part "/path/to/usr/bin". Then it sees, hey, the suffix of "/path/to/usr/bin" matches the "/usr/bin" in the rpath. The suffix is stripped to produce "/path/to" and that path is then used as the root for the library searching. Instead of searching literally in /lib or /usr/lib or whatnot, the "/path/to" part is prefixed to every search place to look in /path/to/lib and /path/to/usr/lib.
Patching binaries is very poor; it changes their cryptographic hashes (SHA-256 and the like). You want your distro to be installing bit-exact stuff from the packages, and treating it as immutable.
As a rule, NixOS doesn't patch binaries, it causes RPATH to get baked in at build time. Also, as a rule, it doesn't have some sort of materialized FHS-like subtree for each package, it manages a complex filesystem tree where the (bit-exact) stuff from the packages is stored immutably by the hash of the full build description.
Binary patching only comes in when they're trying to get closed source binaries to run on NixOS and is fully managed by the packaging process to happen the same way on every system. These days, though, the approach of using filesystem namespacing to give packages a custom FHS-like view of the world seems to be growing more common instead of the patching.
IMO RPATH is fine, but $ORIGIN-relative RPATH values are best.
That said, it's generally very difficult to build deploy-time relocatable code in Unix-land. The problem is that there's nothing like $ORIGIN for finding static assets, and all the autoconf tooling just makes it so easy to make all paths in object code absolute paths that include the install $prefix/$bindir/$libdir/$sharedir/$statedir/$etcdir, etc.
Not that one cannot write deploy-time relocatable code -- I've done it plenty. But that it requires so much foreknowledge, intent, and know-how, that it just doesn't get done.
NixOS is built around hashing the outputs of its builds, so you can verify that a build produces the expected output given the same inputs. The reason NixOS patches binaries is so that they can actually find the shared objects they expect when those are not stored in /usr/lib. Since every build output gets its own unique path, this allows a binary to link against two slightly different versions of the same library, which in practice is almost never something one actually needs. However, a more practical issue this solves is that you can have a single sysroot with two binaries that need two slightly different versions of the same library.
> A possibly nice hack would be to extend the meaning of the rpath variable. Give it a syntax, like say that if it starts with @, then the rest of it denotes a relative sysroot path fragment.
This sounds vaguely similar to dyld's @executable_path variable on macOS.
And for people saying "But you can't get security updates".
I would rather have a dynamically-linked binary that includes all its dependencies, where you can upgrade the dependencies by running a tool over the binary, than the madness of shared libraries in system paths. (Well, you kinda get that with AppImage and similar.)
One of my favorite things about NixOS is how easy it makes it to not use shared libraries, AKA the most brain-damaged, over-used, needlessly complex, crime against computing in common use.
There are good reasons to use LXC and namespaces and shit, but mostly Docker is a workaround for how fucking stupid dynamic linking is as a default.
There are even use cases for .so, but as a default? Someone should be flogged. It was stupid when Sun pushed it with X in the 90s, and it’s stupider now.
Also, gold/mold are worth trying in addition to lld. Depends on your software.
Shared libraries mean that when there's a security issue, I can update the system's version of the library, restart all programs that use it, and have a fully patched system. If everything is statically linked, I would instead need to identify the library version in use for every binary, then get a patched version of each.
Dynamic linking is a tool to improve system administration.
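One way to find what still needs restarting after such an update (libssl is just an example): a process that still maps the replaced library shows it as deleted in its /proc/<pid>/maps.

```
grep -l 'libssl.*(deleted)' /proc/*/maps 2>/dev/null
```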
Dynamic linking is a way to get opaque updates from vendors that may or may not have the patch you want, may or may not have a new CVE that you don't want, and have a very difficult time knowing if the code that's going to run when you invoke a given executable is the same or different from what you ran last time and how.
Here I thought symbolic links were "the most brain-damaged, over-used, needlessly complex, crime against computing in common use." I am not making this up, ask Jeremy Allison, who is in a position to know.
Using Nix without relying on symbolic links would be ... challenging.
Symbolic links can be a pain in the ass, but they're comprehensible and serve a purpose. Most importantly, they're pretty opt-in. Now maybe that means you opted in because you're using a piece of software like NixOS that can't work without them, but they don't infect everything and spread like Kudzu.
The `glibc` people won't even make it work properly unless it's an `.so`. I appreciate that the GCC people are desperately trying to hold on to relevance in the face of a much better toolchain, but at what point does the FSF die a hero or live long enough to become the villain it was trying to conquer?
Jeremy is a bright guy and I have huge respect for him. And I know the `cat-v` folks troll a bit much, but there is some real substance to the linked set of arguments [0]. And it very much ties in with my experience.
In the glory days at FB we statically linked everything, and it was amazing. I don't know this firsthand but I've heard it secondhand, and given that we plagiarized basically everything else from Google in those days, I tend to believe the claims that Google did the same.
“I tend to think the drawbacks of dynamic linking outweigh the advantages for many (most?) applications.” – John Carmack
There is no question but that symbolic links are convenient for users. Apparently the problem is that file-system operations on file systems that allow them, or that allow them to be changed, cannot be made secure. Maybe if you could turn off everybody's rights to create or delete symlinks, they would become less problematic. Maybe, have a mount option you could toggle on just when you need to change some, and turn it off again right away.
Dynamic linking used to be a big optimization thing -- saved on disk space, saved on virtual-memory footprint; nowadays we hardly notice. Next, it meant you didn't need to rebuild everything when a library got a fix or backward-compatible improvement. Then, it became a way to get security patches into use quickly.
It is kind of impressive that we don't (often?) see dynamic linking itself used as an attack vector, aside from bugs in the libraries so linked.
Eh, I will respectfully disagree on a couple of points.
Silently patching `.so` code actually obscures whether or not you have the relevant security fix to the relevant library. `libfoo.0 -> libfoo.0.1 -> libfoo.0.1.3` would be confusing enough even if vendors didn't routinely change the code out from underneath without moving the "version". If you're linking a `libfoo.a`, not only does the hash of that file change when it gets updated (or failed to be updated), but the hash of your resulting binary that you choose to run either did or did not change. You don't need NixOS for that. No `LD_PRELOAD` crap can get in front of you because someone grabbed control of environment variables. There's like a zillion fewer things to go wrong from a security perspective. And even then, the security that you get from running a binary you built against someone who is already on your machine, and already has enough permissions to run it? You're in murky territory already. Keep them out of your box unless you're a cloud provider or something.
I'm no FS expert and I'm prepared to believe that symlinks might create problems for filesystem engineers, but you also sort of need them if you want atomic FS operations on POSIX. Something being a standard isn't a blank check to be bad obviously, but it's a lot easier to change whether or not `glibc` breaks on purpose when statically linked than to change a 30-year-old standard.
Dynamic linking is a nightmare for a number of reasons: it makes it murky and difficult to know what code is running, it makes effective text segments depend on environment variables, it further privileges superuser, it destroys the portability/backwards-compatibility that Linus has fought so hard to preserve in the kernel, it miseducates people about how virtual memory works by leading them to assume that it's some kind of performance win (it's not), it complicates the whole system by a ridiculous amount, it requires that you `readelf -d thing` to even know what crazy `rpath` shit is going on.
Symlinks have problems, I've been burned by them. But you can use them or not as you like, and the complexity is manageable. You show me a serious hacker, I'll show you someone who can get symlinks substantially right.
You show me someone who really, really deeply understands what the hell is going on with dynamic linking? I'll show you Ulrich Drepper and his weird agenda around suppressing LLVM.
Oh sure, I completely agree. But a clear trail of custody around all of the text segments that end up in the binaries that get executed serve both use cases well.
I appreciate that for a lot of desktop users they just want it to work and aren't terribly picky about which minor version of `libfoo` is required to get Firefox or Chrome to start.
But there are vendors and maintainers who are deeply concerned about such things when attempting to give the desktop user a seamless experience, and why would a big mushy puddle of who-friggin-knows code make that job any easier?
Who friggin knows. I think it's just inertia.
The fact of the matter is that everything from snaps to flatpaks to docker to this whole containerization craze is mainly dealing with the pick a card, any card outcome you get with a bunch of random `.so` in `/usr/lib/`.
Wait, I thought you were criticizing Nix because it uses symbolic links.
I originally used Ansible but switched to Nix. I found Ansible to be too idiosyncratic and brittle to maintain. It also isn't inherently idempotent, though it's supposed to be used that way.
I believe it's `ncmncm` who is mounting a credible argument against the existence and use of symbolic links. I'm the "fuck dynamic linking by default" guy.
https://github.com/NixOS/patchelf