
FatELF: Universal Binaries for Linux (2015) - ingve
https://icculus.org/fatelf/
======
AnIdiotOnTheNet
This was a good idea when it was part of the Application Bundle spec (or even
68k/ppc fat Mac applications), and it's still a good idea.

That's why I doubt it'll get much traction[0]. ELFs still don't even have an
accepted embedded icon standard FFS.

Anyways, imagine what the world would be like if fat binaries were the norm
and your OS guaranteed support for a "virtual architecture" that you could
also compile to, that had the same interface to the OS as any native
application. Then you could publish a binary containing native versions for
all existing architectures and be assured that it would also work for all
future ports of the OS to other architectures without the need for
recompilation.

You could basically do this today with Linux, actually. Just pick one of
QEMU's many targets to standardize on (and of course also standardize on a
base platform of libraries with a stable ABI) and use it with the dynamic
syscall translation.
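
As a rough sketch of that dispatch in C (the kernel's binfmt_misc mechanism
normally does this registration for you; the two-architecture table and the
qemu-aarch64 binary name are assumptions here):

    /* Run a binary natively if its architecture matches the host,
     * otherwise hand it to QEMU's user-mode emulator. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s binary [args...]\n", argv[0]);
            return 1;
        }

        /* e_machine sits at the same offset in 32- and 64-bit headers. */
        Elf64_Ehdr ehdr;
        FILE *f = fopen(argv[1], "rb");
        if (!f || fread(&ehdr, sizeof ehdr, 1, f) != 1) {
            perror(argv[1]);
            return 1;
        }
        fclose(f);

        if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "%s: not an ELF file\n", argv[1]);
            return 1;
        }

    #if defined(__x86_64__)
        const Elf64_Half native = EM_X86_64;
    #elif defined(__aarch64__)
        const Elf64_Half native = EM_AARCH64;
    #else
        const Elf64_Half native = EM_NONE;
    #endif

        if (ehdr.e_machine == native) {
            execv(argv[1], argv + 1);      /* native: run it directly */
        } else if (ehdr.e_machine == EM_AARCH64) {
            argv[0] = "qemu-aarch64";      /* foreign: run under qemu-user */
            execvp("qemu-aarch64", argv);
        }
        perror("exec");
        return 1;
    }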

[0] Turns out I was right, as this is an abandoned project. I wish I wasn't
able to predict such things via pure pessimism.

~~~
bjpbakker
> ELFs still don't even have an accepted embedded icon standard FFS

Also, Apple does not embed icons in their binaries. Their app bundles are not
binaries; they are a directory structure. The icon is just another file, just
like the _actual_ executable(s).

> you could publish a binary containing native versions for all existing
> architectures

This sort of ignores the hardest part of shipping binaries: linked libraries.
Dynamically linking everything is simply not always feasible. Not to mention
libc.

Also, I don't really understand why anyone on Linux would want this. The fact
that I can recompile all of the software I use is a really important feature
to me, and not a distribution problem. I can see why Apple wanted this to
simplify distribution via their App Store, but IMO that's mostly to work
around their specific distribution problems. I don't see any of those problems
on Linux.

(edit: wording, see below)

~~~
admax88q
> The fact that I can recompile all of the software I use is a really
> important feature to me, and not a distribution problem.

I've always found this to be an interesting observation about free software.
So many complicated things like FatELF and DLL hell are just straight up _not_
an issue when you're working in a source-code world, where you just compile
the software for the machine you're using it on.

Most of the efforts around FatELF, Flatpak, etc. seem to me to be driven by
the desires of corporations who want to ship proprietary software on Linux,
and as such need better standardization at the binary level rather than the
source level.

It's a win for Free Software, in my mind, that we typically shouldn't have to
worry about this added complexity. Just ship source code, and distributions
can ship binaries compiled for each specific configuration they choose to
support.

~~~
Crinus
Note that source code access and FOSS are orthogonal. AFAIK, on older Unix
systems the software you'd buy would often come in source code form. In fact,
in the past several Linux distributions shipped a lot of such software.

As an example, Slackware distributes a shareware image viewer/manipulator
called xv (which was very popular once upon a time):
[http://www.trilon.com/xv/](http://www.trilon.com/xv/)

It is the license that makes something FOSS, not being able to compile/modify
the source code.

------
okket
May not mean much, but the last commit to the source repository was on "Wed,
04 Mar 2015"

[http://hg.icculus.org/icculus/fatelf/](http://hg.icculus.org/icculus/fatelf/)

~~~
Crinus
AFAIK all real work stopped around 2009, when it was rejected by the kernel
developers.

The HN post of the time seems to have more details:
[https://news.ycombinator.com/item?id=921165](https://news.ycombinator.com/item?id=921165)

~~~
saagarjha
The author of this project was left in an unfortunate situation, because they
needed to merge their code into multiple large projects to make this work.

~~~
lathiat
Seems they were a little ahead of their time. Now no one cares about binaries
that ship on multiple distros, because that was fixed with containerisation /
snappy / flatpak / docker. But now people actually care about ARM and want
their Docker containers to work there.

And no one really cares about the CD image size either. :)

------
peterwwillis
Why doesn't everyone just move toward a standard where all executables can be
a ZIP file with some glue, and every execution results in the OS unzipping it
into a cache folder matching the hash of the zip? The OS could also scan a set
of directories for executables and pre-extract all of them, and then you could
run a command which reverse-matches hashes to executables, so that, for
example, if you have an app built against a dependency with a specific hash,
it can find that hash on the system regardless of the file name.

So you could have executables just sprawled out everywhere on disk, but
executing one program would only result in the exact dependencies being
loaded, no conflicts. At the same time, you could reference an executable name
instead of a hash to use the latest installed version. So you could run an app
with "as-originally-compiled-dependencies" or "latest-installed-dependencies".
And if you supported recursively resolving zips inside other zips, you could
deliver an entire app stack with one file.

As far as avoiding a cache directory: it should be possible to mmap() each
specific file in the [uncompressed] zip into memory, have a filesystem driver
translate the zip index into a virtual filesystem, and then have reads from
the virtual filesystem served from those mmap()ed sections of memory. That may
be stupid, though, so perhaps the zip file can just be a delivery mechanism,
and a flat-file standard along with kernel modules can provide the rest of the
utility?
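
A minimal C sketch of the content-addressed cache part, assuming a made-up
~/.cache/zipapps layout and a "main" entry point inside the archive, with the
extraction step stubbed out (link with -lcrypto for the hashing):

    #include <openssl/evp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Content-address the archive: SHA-256 of the whole file, as hex. */
    static void sha256_hex(const char *path,
                           char out[2 * EVP_MAX_MD_SIZE + 1]) {
        unsigned char md[EVP_MAX_MD_SIZE], buf[65536];
        unsigned int mdlen = 0;
        size_t n;
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); exit(1); }
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);
        EVP_DigestFinal_ex(ctx, md, &mdlen);
        EVP_MD_CTX_free(ctx);
        fclose(f);
        for (unsigned int i = 0; i < mdlen; i++)
            sprintf(&out[2 * i], "%02x", md[i]);
    }

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s app.zip [args...]\n", argv[0]);
            return 1;
        }

        char digest[2 * EVP_MAX_MD_SIZE + 1];
        sha256_hex(argv[1], digest);

        const char *home = getenv("HOME");
        char dir[512], entry[520];
        snprintf(dir, sizeof dir, "%s/.cache/zipapps/%s",
                 home ? home : "/tmp", digest);
        snprintf(entry, sizeof entry, "%s/main", dir); /* assumed entry point */

        struct stat st;
        if (stat(dir, &st) != 0) {
            /* First run: extract the zip here (libzip or similar); stubbed. */
            fprintf(stderr, "would extract %s into %s\n", argv[1], dir);
            return 0;
        }

        argv[1] = entry;
        execv(entry, argv + 1);   /* run the cached, extracted copy */
        perror(entry);
        return 1;
    }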

~~~
wahern
I feel like you answered your own question(s).

------
klodolph
One of the big problems here is that some libraries ship different header
files for each platform, rather than doing everything with #if. For example,
libjpeg.
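
For contrast, a single architecture-neutral header can push those per-platform
differences into the preprocessor. A sketch with made-up names (libjpeg
instead generates a platform-specific jconfig.h at configure time):

    /* One header for every target: the preprocessor picks the
     * platform-specific details at compile time. Illustrative only. */
    #if defined(__x86_64__) || defined(__aarch64__)
    typedef unsigned long lib_size_t;   /* 64-bit targets */
    #else
    typedef unsigned int  lib_size_t;   /* 32-bit targets */
    #endif

    #if defined(_WIN32)
    #  define LIB_EXPORT __declspec(dllexport)
    #else
    #  define LIB_EXPORT __attribute__((visibility("default")))
    #endif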

------
armitron
I'm glad this was rejected as it was a terrible idea. If it wasn't clear back
then, it certainly is obvious today.

I have to agree with Drepper: It tried to solve an easy problem that didn't
really matter - in a very messy way - without addressing the hard part,
library dependencies.

~~~
wolrah
I agree that it's not really needed in the Linux world, but I disagree with
your conclusion about the reason.

It addresses libraries the same as anything else: make the libraries fat
binaries too, and there you go.
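
Mechanically that's simple enough. An illustrative container layout in C (a
sketch of the general idea, not FatELF's exact on-disk format) would be a
small index followed by the per-architecture ELF images, with the loader
mapping only the slice that matches the host:

    #include <stdint.h>

    /* One index entry per embedded architecture slice. */
    struct fat_record {
        uint16_t machine;      /* ELF e_machine of this slice */
        uint8_t  word_size;    /* 32 or 64 */
        uint8_t  byte_order;   /* ELFDATA2LSB or ELFDATA2MSB */
        uint64_t offset;       /* file offset of the embedded ELF image */
        uint64_t size;         /* length of that image */
    };

    /* Container header at offset 0; the ELF images follow the index. */
    struct fat_header {
        uint32_t magic;        /* container magic number */
        uint16_t version;
        uint16_t num_records;
        struct fat_record records[];
    };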

The bigger issue I see with it is that fat binaries really only make sense
when you only have the binary and are giving that directly to untrained users.
It was great for Apple because that's exactly how their platform was used, and
through their various architecture transitions fat binaries significantly
eased the pain because users didn't have to care which kind of Mac they had.

When you have a package manager style infrastructure that builds from source
like every meaningful Linux distribution, suddenly it doesn't really offer
anything in most cases. Users just ask the package manager to install
something and it deals with the architecture stuff behind the scenes. Unless
you're trying to create a single disk image that boots on multiple
architectures, it's just needless bloat.

From a technical perspective I love the thought that it'd be possible to build
a single disk that could boot on any platform anyone cares about, but from a
practical perspective I can't see any real purpose for such a thing to exist
beyond "because we can".

------
kitd
A bit underwhelmed by the logo. Would have preferred
[https://familyguyaddicts.files.wordpress.com/2014/12/elf-pet...](https://familyguyaddicts.files.wordpress.com/2014/12/elf-peter.png?w=500)

