Would be cool to get a bit of a "why this matters" intro on repos like this. I clicked through the details, but left wondering if I have any potential use for this. Can I compile this onto a USB stick and use it as a throw-away boot Linux for maintenance tasks? Can I cross-compile this for embedded devices? What is the statically-linked advantage here? Not trying to minimize the effort, I would actually love to see more such things, but felt a bit left out as I couldn't figure out why exactly it exists.
It seems like your definition of "why this matters" is "what can I, as a user, do with this", but for some projects "why this matters" is "to see if this works as well as we think it could" or similar, more abstract goals. If the goal of the project is to be as small and simple a Linux distro as possible by trying a different take and seeing how it pans out (which it seems like this is), then it's accomplishing its goals whether or not it tells you how you could use such a distro were it to pan out.
The description of "I built this to try something cool, but maybe someone could find a use for it!" is just fine. But it would be great to know. Likewise, "We built this at company Y for use case X and intend to support it" also tells me what is going on.
As someone who uses Linux casually, I often experience pain when I download a program, try to run it, and it fails because some dependency is not installed or is not the right version. So you try to figure out how to install it, and it in turn needs a couple of things.
All of this works fine if whatever package manager your distro uses has the program you want, but the package managers don't have everything!
So I'd love an ecosystem where things tend to be statically linked unless there is a good reason not to.
> So I'd love an ecosystem where things tend to be statically linked unless there is a good reason not to.
Well, every time there is a bug fix in one of your dependencies you/your distribution would have to recompile everything that depends on it. Which will use quite some resources.
So to actually use this you would have to reinstall (nearly) all programs once a week or so (no matter if source or binary distribution). While that is quite possible with modern internet connections, I am not sure if it solves all the problems you might think it solves. After all, even nowadays application authors could provide static packages, but many don't do it.
After using quite a few distributions over the years, I like Arch Linux best at the moment, as the combination of a binary distribution with frequent updates and the source-based AUR with its very large number of packages suits my needs very well.
> every time there is a bug fix in one of your dependencies you/your distribution would have to recompile everything that depends on it
Back in the dark ages, some mainframe operating systems like MVS normally stored executables as object files, and would link the program every time you executed it.
This has the on-disk space savings and updatable libraries of dynamic linking, without most of the complexity. I believe the original motivation was to allow an executable to run at an arbitrary address on systems without virtual memory, though.
It was for personal operating systems (and ancient mainframes) that had a multitasking OS but no MMU support in the hardware. Relocatable executables were required so that you could load multiple executables at different addresses within the single address space of the memory. This is now called Position Independent Code[1].
Modern MMUs meant that the OS would define specific addresses for use by a program, so for example all ELF programs in a distribution would have the same starting address in memory hard coded into the executable (one virtual address within a process is mapped to a different physical memory address for each process by the MMU).
There’s a difference. Position independent means you can move the executable anywhere and run it.
Relocatable means the binary has enough information to fix any absolute addresses in the code after that move. That’s extra work, and means you cannot have the same physical memory mapped to different virtual addresses in different processes, and that you have to either swap out the relocated memory, or repeat the relocation when you swap in the code again (I’m not aware of any OS doing the latter, and don’t see a nice way to do it, but it is a possibility).
“In computing, position-independent code[1] (PIC[1]) or position-independent executable (PIE)[2] is a body of machine code that, being placed somewhere in the primary memory, executes properly regardless of its absolute address.”
“Relocation is the process of assigning load addresses for position-dependent code and data of a program and adjusting the code and data to reflect the assigned addresses.”
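For the curious, a quick way to see the position-independence side on a modern toolchain (just a sketch; hello.c is a placeholder source file): a position-independent executable shows up with ELF type DYN, while a fixed-address one shows up as EXEC.

    gcc -fPIE -pie hello.c -o hello_pie      # position-independent executable
    gcc -no-pie hello.c -o hello_nopie       # expects its link-time address
    readelf -h hello_pie | grep Type         # Type: DYN
    readelf -h hello_nopie | grep Type       # Type: EXEC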
> Exodus handles bundling all of the binary's dependencies, compiling a statically linked wrapper for the executable that invokes the relocated linker directly, and installing the bundle in ~/.exodus/ on the remote machine. You can see it in action here.
That isn't really what is being described, or at least not what I'm referring to. It's more like a self-extractor.
Shell scripting and ld tools should allow you to make that yourself.
EDIT: people are apparently entitled to solutions for their problems...
> Care to elaborate?
> Show us a script then! Your internet fame awaits.
I'm not interested in fame, or in reinventing the wheel.
Apparently you are unfamiliar with this process, so TL;DR: there are basically two approaches, in order of complexity: 1. working with the ld tools, or 2. running the executable to find the various parts and then dumping a binary (3 if you consider putting the libraries in a directory and then running with LD_LIBRARY_PATH=/thisdir myexecutable).
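A minimal sketch of that third approach, assuming the program is ./myexecutable and everything gets bundled into ./bundle (the names and layout are placeholders):

    mkdir -p bundle/lib
    cp ./myexecutable bundle/
    # ldd prints the resolved .so paths; copy each one next to the binary
    ldd ./myexecutable | awk '/=> \//{print $3}' | xargs -I{} cp {} bundle/lib/
    # run with the bundled libraries taking precedence over the system ones
    LD_LIBRARY_PATH="$PWD/bundle/lib" ./bundle/myexecutable

This falls over for dlopen'd plugins and for a libc newer than the target's dynamic loader, which is presumably why Exodus also relocates the linker itself and wraps the binary to invoke it directly.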
Care to elaborate? Because I'm not aware of any tool that could be used to combine a binary and all its .so dependencies (and all of theirs recursively) into a single statically-linked binary.
So long as I'm not the one building it, it's fine. My internet can redownload everything. It also depends on what libraries are being used: musl does generate smaller binaries than glibc, and if you look at the massive project that is SQLite, building software should only take a couple of seconds. Of course, whether anyone actually goes to such lengths for easy compilation is another matter entirely.
Letting software have dependencies and then making the user manage them on the off chance there is some bug in the foundational, mostly frozen libraries should be an obviously bad decision at this point.
No other environment forces this kind of complexity on their users and almost no non-enthusiast user wants to deal with this structure.
>Well, every time there is a bug fix in one of your dependencies you/your distribution would have to recompile everything that depends on it
Well, no, I could opt not to apply the bug fix if it isn't affecting me. This again is sort of a server mentality, where of course you want to get a bug fix in as quickly as possible in case it is a security issue. As a desktop user, maybe you don't.
>After all, even nowadays application authors could provide static packages, but many don't do it.
Right, and really what I would like is for application authors to just statically link more often on Linux. Or at the least, ship their programs with installers that apply the needed deps if they are not already there! That the OS itself is statically linked doesn't really matter, but perhaps it would inspire an ecosystem that does this.
> I like Arch Linux best at the moment
Suggesting Arch Linux to someone complaining about having to track down dependencies probably does not make sense, since such a person is more interested in the OS just working than in making the OS a bit of a hobby that they want to customize.
> Suggesting Arch Linux to someone complaining about having to track down dependencies probably does not make sense, since such a person is more interested in the OS just working than in making the OS a bit of a hobby that they want to customize.
Sure, in the beginning you have to spend some time to set it up, but due to the rolling releases it has great longevity. And talking about application integration, I greatly prefer the package-management style (even through the AUR) to the chaotic "download some binary and execute it" way Windows does it. Over time your Windows install always gets so messy, because every device manufacturer seems to think it is cool to bundle their device driver with hundreds of megabytes of nonsense-ware (last weekend I installed some Logitech driver on Windows because the default Windows driver didn't work; on Arch I didn't have to install anything extra, just saying... yes, anecdotal evidence).
> Suggesting Arch Linux to someone complaining about having to track down dependencies probably does not make sense, since such a person is more interested in the OS just working than in making the OS a bit of a hobby that they want to customize.
Having used Red Hat, Mandrake, Ubuntu, Debian... Arch Linux is by far the most "just works" distro I've used.
But do any of those come with the ISO available at https://www.archlinux.org/download/? I don't see how you can say something "just works" if you have to go out of your way to choose an installer for it.
ArchLinux is not defined by the ArchLinux ISO; it will be Arch as long as it installs packages from the Arch repos, which is what those installers do. So you just burn this ISO to a USB stick and be happy: https://gitlab.com/anarchyinstaller/installer/-/releases
Oh, Windows can have dependency hell. And even worse is when you have a WinSXS blowout and you're trying to figure out why that subdirectory is using up 80GB of space for an unknown reason.
> Suggesting Arch Linux to someone complaining about having to track down dependencies probably does not make sense, since such a person is more interested in the OS just working than in making the OS a bit of a hobby that they want to customize.
Then recommend Manjaro, not an OS that “just” loads them down with ads, tracking, forced updates, and more unless — and to some extent still even after — they customize it.
> Well, every time there is a bug fix in one of your dependencies you/your distribution would have to recompile everything that depends on it. Which will use quite some resources.
This is far from clear cut. Often some dependency bugs might not impact my program. Even worse, sometimes by fixing a bug my program does not care about, a dependency might break my program in some subtle way. The author of the program is the best-qualified person to make the upgrade/no-upgrade decision.
Hot take: A great tragedy of the GNU/Linux ecosystem is the fact that ABI+API is still not figured out, and the most common interface between programs is the C ABI, which is embarrassingly primitive even by 80's/90's standards. Some people in the FOSS community just want to leave things as-is to hinder proprietary software, and it's the same story with device drivers. You can rightfully debate the merits of that, but then there are still companies pushing out binary blobs which break every few kernel updates. As a FOSS developer it's an eternal fight between good and evil with no winner in sight, as a proprietary developer it's a pain in the ass to maintain old and new software across all the permutations of Linux distros, and as a user I get to cry because of the lack of popular software and backwards compatibility.

Snaps and flatpaks are an ugly hack, literal duct tape around this very fundamental problem, and clearly not the solution. GNU/Linux should have adopted COM and/or an adequate proglang with a stable ABI a long time ago, and should have tried to control the wasted effort put into duplicate software ecosystems (KDE and Gnome).
There is no alternative to the C ABI. If you look at stable ABIs in other programming languages, they're either identical to or thin shims around the C ABI! Even COM, with the caveat of calling convention on MSVC.
Regardless of ABI stability you also need software to have API stability for stable distribution. The two solutions to this problem are to statically link everything or package your shared objects with your software. Then to launch the software you need a shell script that overrides the loader search path and move on with your life.
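Something like this hypothetical launcher, sitting next to a bin/ and lib/ directory inside the bundle (the layout and the myapp name are assumptions, not anyone's actual packaging):

    #!/bin/sh
    # resolve the directory this script lives in, so the bundle stays relocatable
    here="$(cd "$(dirname "$0")" && pwd)"
    # put the bundled libraries ahead of the system ones, then hand off to the real binary
    LD_LIBRARY_PATH="$here/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" exec "$here/bin/myapp" "$@"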
The ultimate solution is the .app/.bundle paradigm of MacOS. Everything should copy that.
The other issues of packaging software like entitlements/capability based security and code signing are similarly orthogonal. If you're shipping big software today, it needs to have everything it needs and to ignore whatever is on the user's system unless they explicitly request an override.
The real tragedy of GNU is that the interchangeability of software was always possible, package managers just managed to make it accidental and explicit. Too many foot guns there.
> Then to launch the software you need a shell script that overrides the loader search path and move on with your life.
> The ultimate solution is the .app/.bundle paradigm of MacOS. Everything should copy that.
All ELF systems support the $ORIGIN RPATH macro. It lets you build and package an application exactly like, if not better than, you would a macOS bundle. Specifically, it lets you specify at compile time a library load path relative to the invoked binary. You'd specify it as $ORIGIN/../lib for the canonical bin/, lib/, etc. layout.
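For example, with that bin/ and lib/ layout, something along these lines at link time (myapp.c and libfoo are placeholders; the single quotes keep the shell from expanding $ORIGIN):

    gcc myapp.c -o bin/myapp -Llib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
    # check what got recorded in the dynamic section
    readelf -d bin/myapp | grep -E 'RPATH|RUNPATH'

After that, the whole directory tree can be moved or copied anywhere and the binary still finds its bundled libraries, no wrapper script needed.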
I haven't done native development professionally, but it seems to me like dynamic linking is a leftover from a time when memory and even disk space were precious, isn't it? Are the savings really still worth all the breakage? What is Docker but a way to force static linking on programs that were built for dynamic linking?
Stable ABIs don't seem like a good solution though. It is preferable that the system running the code should make the decisions on how best to execute it, especially as hardware becomes more specialized and diverse. Additionally, guarantees of stability necessarily increase maintenance burdens, so they should be avoided whenever possible.

IMO, the best solution would be something like SPIR-V. That is, programs are shipped in an intermediate representation which a platform-specific compiler would reduce to the final program. That way no stable ABI is needed, there is less maintenance for distros and toolchains, and end users still get high-quality binaries.

Of course there are issues with protecting the intermediate representations of programs from reverse engineering. But the combination of strong DRM with obfuscation should satisfy most software vendors. After all, most companies are comfortable shipping Java-based programs.
> So I'd love an ecosystem where things tend to be statically linked unless there is a good reason not to.
Me too, but I'm not sure this project brings us closer to that goal.
I don't need the base OS to be statically linked—in fact, that's where static vs dynamic linking matters least, because it all comes preinstalled. What I want is for everything else which I may want to install on top to be available as statically-linked binaries.
Check out KISS Linux, another interesting project: k1ss.org/install
I'm considering trying it: having the simplest distribution on bare metal, and keeping the complexity "outside" and "contained" (Docker for servers, Flatpak for desktops and laptops, etc.), makes sense to me.
Since most common applications have moved to the web, and the biggest Linux end-user deployment by far, Android, was created from fresh beginnings, the idea of the user being concerned with the minutiae of binaries is already far along the road to the grave.
This right here is what's wrong with the application space. The idea that this could be true leads to terrible software design and slow, bloated applications. In the current landscape it is perfectly possible (and I'd say even usual) to do most of your work and processing with local system binaries (that aren't a browser, or a browser with a different name).
It’s no one’s fault really. As long as people pirate software, it is economically Darwinian / inevitable that all commercial software will migrate to centrally controlled subscription gated servers. This includes open source software, if you correctly count how much of it is repackaged and sold to people this way.
It has nothing to do with pirating. That sort of predatory mercantile behaviour has happened basically ever since humanity invented money. It happens because unscrupulous people are allowed more power than they should have. Stop blaming everyone else for their bad decisions.
Eh, but if that was completely true I'd be using ChromeOS, or something similar. Perhaps some day I will—but as of 2020, I still find myself popping into a lot of native apps day-to-day.
If you need exotic libraries, you install them into /usr/local/lib with GNU Stow for secondary package management. Then point your loader there first when starting programs that depend on them. Your distro stays pristine and the program runs so everything is good.
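A sketch of what that workflow looks like, assuming a library built from source and a program called myprogram (both placeholders):

    # install the build under its own stow directory, then let stow symlink it into /usr/local
    ./configure --prefix=/usr/local/stow/libfoo-1.2 && make && sudo make install
    cd /usr/local/stow && sudo stow libfoo-1.2
    # point the loader at /usr/local/lib first, only for the programs that need it
    LD_LIBRARY_PATH=/usr/local/lib myprogram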