nix-shell can even be used as a shebang-line for scripts: https://nixos.org/manual/nix/stable/#use-as-a-interpreter
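For illustration, a minimal script in that style (a sketch; it assumes Nix is installed with a channel configured, so it won't run without them):

```shell
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p jq
# nix-shell reads the second shebang line: -i names the interpreter to run
# the script with, and -p lists the packages to provision before it runs.
echo '{"greeting": "hi"}' | jq -r .greeting
```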
So there's that.
`nix bundle` (coming with the upcoming flakes feature) promises to make Nix-built binaries runnable without nix, for the opposite of what you want.
That's one of the reasons I wrote APE. The bootloader shell script prefix is only 12kb, so you can write a small program and it'll still feel like it's actually your program. When you distribute it to your friends (respecting their time by not asking them to become build-system experts just to try the cool thing you've written), you'll find that APE lets it run on seven operating systems too. Most importantly, it'll feel like something you wrote, something that's yours, rather than a "hey grandma, install Oracle Java first, here's how to remove the toolbar" kind of thing where you're just evangelizing someone else's platform.
That is, given a shared library or a position independent
executable (PIE), it returns a single, self-contained file
packing all dependencies.
But closures do allow deploying to other machines without
concerns about missing dependencies.
In fact, since it extracts to a directory in /tmp, it seems worse in comparison, because you also have to take the increase in size into account.
I really don't see what the point of this is, what niche it's supposed to fill over flatpaks or AppImages (which both have sandboxing), or why you would pick this over static libraries. Can someone explain?
For reasons, I was trying to do that for a while, but it was easier for me to bundle a dynamic exe with the dependencies (including an appropriate dynamic linker) rather than continue to fight with complex makefiles. Without looking at the code, it seems like this ended up similarly.
(but hey, I also felt cheated when I read how clodl works. the hope for magic is always there, isn't it)
Even though I don't know why you would do that, I think you could do it by running a modified version of the dynamic linker. That thing basically runs like an interpreter for your binary. It links the code and then executes it. It should not be impossible to modify it to instead dump your ELF again, no?
I don't write code that requires Glib, but last I remember, there are explicit (lesser-known) flags that give complete static independence :P
My implicit point is that essentially, there is a slew of approaches that seem to be reinventing the wheel to solve the obvious problems in dynamic linking, but they seem to combine the problems of static linking with the problems of dynamic linking. Because people have the memoized knowledge that "static linking old and bad", they twist software into worse solutions because what they want are static libraries and binaries, but for whatever reason they don't feel they can use them.
> or how about statically relinking a dynamically-linked binary you don't have sources for?
Most open source developers don't tend to stray that far off the beaten path, at least in the communities where snap/flatpak/appimage are commonly used.
Glib != glibc
I think it's pretty clear from context that I'm talking about glibc, though :P
I wouldn't mind if there were some quick way of doing this. I would like to statically link a bunch of command-line utilities that I could just copy into my home folder on various servers without installing them globally.
Granted, I'm not a C developer, so I just searched for answers on static linking, but I was never able to easily create these binaries (with a few exceptions). If I recall correctly, I had to individually compile the dependencies to be able to statically link against them as well (with a bunch of CFLAGS/LD_LIBRARY_PATH/INCLUDE_PATH env vars sprinkled everywhere to wire them all up).
If someone has a bash script gcc-static-link that I can use as a wrapper around gcc I'd be more than happy to hear about it :)
It feels to me that the container mentality wants to abstract away the complexity of computing to the point that it creates issues that were solved before, then layers that on a complex OS and pretends the OS isn't required.
At some point software has to sit on hardware...
Creating statically linked end user programs that aren't minimal CLI utilities is not trivial.
You need static linking to work for all of your dependencies (`.a` files), recursively (hundreds of C libraries, and often up to hundreds more in whatever higher-level language you may be using).
You need to either use a very configurable package distribution like nixpkgs, or a distribution that specialises in static linking, like Alpine Linux.
These toolchains now make static linking possible and much easier, and nixpkgs can build hundreds of C and Haskell programs statically. Go is also easy. When you can statically link, it is often better, but there are still many programs for which it doesn't work yet, and just "packing up all the .so files" as proposed in the announcement is certainly easier.
For people who want to contribute to static linking, consider contributing to nixpkgs/NixOS and Alpine Linux.
If you're on Linux, musl libc and musl-tools are useful for dealing with this reliably.
This is functionally impossible on $ISA-linux-gnu, because the glibc developers refuse to provide a working C standard library (libc.a).
> musl libc and musl-tools are useful
Sure, but if the software worked (reliably) with musl, we'd be using that already.
It's cool that they use run-time dynamic loading to "modularize" certain features (such as image loading), but it seriously damages the benefits of static linkage.
Did GTK3 or GTK4 ever become relocatable on Linux?
It lost this ability temporarily when switching to Meson, but I fixed it in GTK3 and GTK4. However, I just checked, and apparently it is broken again: