
FatELF is no more.  Linux developer community owned Ryan Gordon. - jeff18
http://icculus.org/cgi-bin/finger/finger.pl?user=icculus&date=2009-11-03&time=19-08-04
======
djcapelis
As someone who has actually built binaries to run across multiple distros,
this doesn't address even half the issues. Very few people care about
binaries that run on more than one architecture, because distributing
binaries that run on more than one distro is the harder problem.

This solution doesn't solve the hard problems, but solves an easy one in an
uninspired fashion. Rejecting these patches wasn't a case of holding back
innovation, but merely holding back solutions that more experienced developers
felt were not appropriate for the platform.

There was a good reason for that, and playing armchair critic after reading a
sympathetic story from a developer who is understandably hurt after getting
his feature rejected just doesn't help anyone.

The kernel is better managed than most people give it credit for, and it's
exactly because of this that hacky incremental solutions get rejected, even
when they might legitimately produce a small benefit. Eventually the overall
solution to the problem FatELF is trying to solve is going to have to be
something different.

I feel sorry for the developer; no one likes getting a feature rejected. But
it was the wrong solution and the technical merits must trump other concerns.

~~~
antonovka
_But it was the wrong solution and the technical merits must trump other
concerns._

Your post is very long, but I'm curious to hear _why_ universal binaries are
the wrong solution.

On the Mac, it's the cornerstone of their transparent multi-architecture
support.

Apple (and previously NeXT) inarguably demonstrated its value and ease-of-use
over the past 20 years. Producing x86-32, x86-64, armv6, armv7, and PPC
binaries all from the same source code -- and having them work the first time
-- is incredibly simple on Mac OS X.

NeXTSTEP (the OS) ran on 68k, x86, PA-RISC, and Sparc processors, and it was
equally easy to build software that ran on those machines.

~~~
djcapelis
OS X has a unified set of libraries to work with; Linux does not. This makes
the problem much more complex. Creating a binary that simply runs on
different distributions on the _same_ architecture is a fundamentally hard
problem, and stacking binaries for multiple architectures together doesn't
result in a universal binary. At worst you have to stack together different
binaries for different combinations of libraries _and_ different
architectures. If you assume even just the latest versions of the top 5
distributions running on the top 5 supported architectures for Linux, you may
well end up with 25 different binaries for FatELF to glue together, resulting
in every program being a cool 2500% larger. This is unacceptable in terms of
space, and unacceptable in that it still isn't universal after all that: not
all arches are supported, and you only get the latest versions of the popular
distros.

This is why I claim FatELF does not solve the hard problems with universal
binaries. The only thing it might solve is letting someone distribute a
binary for a particular distro that runs on multiple architectures of that
distro. But they can already do this with shell scripts, it's an uncommon
case, and most people would rather just download and store the one
appropriate binary.
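
For illustration, the usual shell-script approach is something like this (a
sketch only; the directory layout and the myapp.<arch> names are
hypothetical):

    #!/bin/sh
    # Launcher: pick the per-arch binary matching this machine.
    # Assumes the package ships bin/myapp.<arch> next to this script.
    here="$(dirname "$0")"
    case "$(uname -m)" in
        x86_64) arch=x86_64 ;;
        i?86)   arch=i386 ;;
        arm*)   arch=arm ;;
        *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
    esac
    exec "$here/bin/myapp.$arch" "$@"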

(Downloading the one appropriate binary is easy with a functional package
management system. Keep in mind that OS X does not have a complete,
functional package management scheme that lets users discover, acquire, and
manage 3rd-party packages. Not all Linux distros do either, but those that do
are less in need of universal binaries. This is really a separate discussion,
though, one that gets to the heart of how software ought to be distributed.
There are many blind spots in OS X's software strategy, some of which Sparkle
is making a half-baked (not Sparkle's fault) attempt to fix.)[1]

At the end of the day you can't simply assume that the solution for OS X is
the right one for Linux, and you'd make an even bigger mistake assuming that
the developers who rejected these patches weren't entirely aware of OS X's
excellent use of this technique, or that they were doing this for anything
but sound technical reasons. (I.e., it's not just NIH; they would happily
adopt this if it were a reasonable solution to Linux's unique[2] challenges
with binary compatibility across "Linux".)

[1] There are many things wrong with this, and most package management
systems do not incorporate third-party repos/packages as well as they should,
which IMO is the larger problem here. Not that corporations should have to
manage repos for every distro, but that becomes a separate mess.

[2] Perhaps read as special.

Aside #432: As you noticed, my replies can often be anything but concise. I
apologize if that makes things hard to follow.

------
spitfire
This is sort of an interesting counterpoint to the whole "open source fosters
innovation" meme open source types love to throw out.

Open source can foster innovation, as long as egos don't get in the way.
Sadly, there are a lot of egos involved in open source.

NB: I think the idea of FatELF is a very good one, though mainly for
binary-only applications, which OSS guys tend to dislike.

~~~
thingie
He could write the patch, try it out, and try to persuade the mainline kernel
(and glibc, and gcc, and...) developers to include it. If the feature is
really that useful, he can even maintain it separately, or some Linux
distribution can. All of this would be impossible with closed software.
Innovation the OSS way, sadly, means that there are lots of ideas, and most
of them get thrown out.

A hostile reaction from kernel developers? Well, maybe it's a little sad;
they could have trashed the patch more politely or something, but ultimately
they had to. FatELF adds stuff to the kernel, glibc, and a few other parts of
the system without really solving much. He could have tried something with
binfmt_misc, completely in userspace, and not added junk to the kernel and
glibc itself. (By junk I mean something unnecessary that can be done in
userspace, or not at all.)
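
Roughly like this (a sketch; the magic bytes and the fatelf-run handler path
are illustrative, not taken from the actual FatELF spec):

    # Register a userspace handler for a hypothetical fat-binary format.
    # Field layout: :name:type:offset:magic:mask:interpreter:flags
    # 'M' = match on magic bytes at the given offset (default 0).
    mount -t binfmt_misc none /proc/sys/fs/binfmt_misc 2>/dev/null
    echo ':fatelf:M::\x1f\x0e\x70\xfa::/usr/local/bin/fatelf-run:' \
        > /proc/sys/fs/binfmt_misc/register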

~~~
cschwarm
> without really solving much

On a German Linux news site (prolinux.de), a commercial Linux game developer
-- Ankh2, IIRC -- once commented that they could not get their game running on
about 2-3% of their customers' Linux computers, despite their best efforts.

I dunno whether FatELF would have solved problems like this in the long run,
but there are real problems. You may not be affected, but that doesn't mean
others aren't.

~~~
thingie
No, I don't think it will solve this kind of problem. It does only one thing:
pack two different binaries into one file, which is also a binary, with some
extra logic in the kernel to choose between them, so it is 'transparent'.
While this makes some things easier (if you really don't know whether to
download that strange 'i386' file or the even stranger 'x86_64' file, you
download one larger 'fatelf' containing both), it doesn't solve the real
problems. Distributions spent quite a lot of effort making both 32- and
64-bit binaries run on a single system. This is just a hack, nothing more.
And it can make the mess even worse. Do I have an x86_64 binary in this? Is
the correct ELF packed in somewhere? How do I know?

~~~
antonovka
_This is just a hack, nothing more._

Well, it is the cornerstone of Apple's highly successful and entirely
transparent transition from PPC to x86, including drag-and-drop application
installation/uninstallation. Couple it with Apple's compiler drivers and
well-designed multi-architecture SDKs, and it becomes dead simple to support
multiple architectures and OS releases. Just build your binaries with:

      -arch i386 -arch ppc -isysroot /Developer/SDKs/MacOSX10.4u.sdk

It also supports Apple's transparent selection of armv7/armv6 binaries on the
iPhone 3GS.

I'd say it's pretty useful, and not really a hack at all.

The only arguments I've heard against it involve package managers and shell
scripts, which demonstrates such a remarkable lack of understanding of ease of
use and user behavior that I don't even know where to start, other than to
say: This is why popular adoption of Linux on the desktop is not going to
happen any time soon.

~~~
thingie
Apple, on the other hand, doesn't have anything even close to a package
manager, for example. Also, "transition". There is no transition for Linux.
Nobody is really interested in completely phasing out an old platform and
porting everything to a completely different new one, which is what Apple
did. Apple's Mac OS has rather limited hardware support; in fact, you
shouldn't even try to run it on anything that doesn't have an Apple sticker.
Linux is expected to run everywhere, on everything.

I can't see _any_ possible difference between the fatelf solution and the
package-manager-and-scripts solution for an end user. It's not about ease of
use, because there is no difference.

~~~
antonovka
_Also, "transition". There is no transition for Linux._

ARM netbooks? That said, part of why there's no 'transition' is that providing
easy-to-use (for consumers) third-party application binaries is this side of
impossible given the lack of stable APIs across distributions and releases.

 _I can't see any possible difference between the fatelf solution and the
package-manager-and-scripts solution for an end user. It's not about ease of
use, because there is no difference._

I can drag-install any third-party application I want to download, and I can
expect it to work on any machine. The author doesn't have to wait to get it
packaged by a distribution, set up a package repository, etc. I don't have to
wait 3-12 months for the latest version.

I can drag it over to my PPC iMac, and it'll work there, too.

How is that not 'different'?

~~~
thingie
An ARM netbook with its limited SSD space is really a great use case for
something that makes binaries two (three, or even more) times larger, when I
need only the ARM binary. (Yeah, it's not that much on a >100G HDD; it can
still be quite a lot on a 4 or 8G SSD, especially when it is completely
useless to that computer.)

It's not different, because fatelf doesn't help with this. You can still make
some tiny wrapper and have exactly the same thing. The vendor still has to
make a package fit for your system, for the set of libraries that you have,
the correct versions of them, and so on. Yes, Mac OS X has two or three
recent versions, mostly mutually compatible. If you were trying to make a
package for only one or two mostly compatible Linux distributions and one or
two of their versions, then it would be just as easy. But it is not that
easy, and fatelf doesn't make any difference.

------
wmf
Look on the bright side: they saved him a lot of time. Usually the Linux
developers tell you to redesign and rewrite your patch from scratch. :-/

------
holdenk
FatELF seems like a bad idea for Linux. The "benefits" listed on
<http://icculus.org/fatelf/> fail to persuade me to want to pay any additional
cost.

~~~
bad_user
Oh, so having future commercial packages that many people depend on, like
Adobe Photoshop or 3D Studio Max ... packaged and distributed as a universal
binary for every major distribution and every architecture ... is of no
importance whatsoever?

This is a major roadblock for commercial third-parties that would like to
distribute ports of their software for Linux.

~~~
thingie
You can do this without fatelf. Put everything you have into a single package
and add "shell scripts and flakey logic to pick the right binary and
libraries to load". (He fails to mention that fatelf is nothing more than
this, only hardwired into the kernel.)

~~~
antonovka
How are shell scripts and flakey logic an improvement on Apple's cross-
platform development/usage model?

    
    
      gcc -arch i386 -arch ppc -arch x86_64 -isysroot <SDK path> example.c -o example
    

Done.

    
    
      > file example
      example: Mach-O universal binary with 3 architectures
      example (for architecture i386):	Mach-O executable i386
      example (for architecture ppc7400):	Mach-O executable ppc
      example (for architecture x86_64):	Mach-O 64-bit executable x86_64
    

I can even transparently _run_ PPC binaries on my x86 Mac.

Universal binaries are a core component of this easy-to-use (for developers
and users alike) architecture independence.

~~~
vidarh
What you are demonstrating is the userland toolchain. What was suggested above
is an improvement by not requiring meddling with the kernel. Nothing prevents
a userland toolchain for Linux from supporting what you are describing above
without kernel modifications.

~~~
antonovka
_What you are demonstrating is the userland toolchain._

No, what I'm demonstrating is an end-to-end architecture-transparent platform,
which includes a complete userspace toolchain, a universal binary format, and
necessary kernel and dynamic linker support.

 _What was suggested above is an improvement by not requiring meddling with
the kernel_

I'm not sure I understand how "meddling" in the kernel is a bad thing when it
provides for user-transparent execution of multi-architecture binaries,
including transparent emulation of binaries that lack support for the host
architecture.

It's not as if it's complex or dangerous to parse the Mach-O or FatELF
formats, and if you take a page from Mac OS X or qemu, you can even do
transparent emulation in userspace.
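
In fact, that is how qemu's user-mode emulation is typically wired up on
Linux already: a one-line binfmt_misc registration (a sketch; the interpreter
path varies by distro):

    # Hand 32-bit ARM ELF binaries to qemu's user-mode emulator;
    # after this, ./arm-binary "just runs" on an x86 host.
    echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm:' \
        > /proc/sys/fs/binfmt_misc/register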

 _Nothing prevents a userland toolchain for Linux from supporting what you are
describing above without kernel modifications._

How is it useful to build an easy-to-use multi-architecture binary if the
kernel can't actually execute it?

Why are you afraid of doing simple parsing[1] of the binary? The kernel
already does this -- how do you think ELF loading and shell script execution
work?

I can imagine this conversation 30 years ago[2] -- "Why should we add shell
script shebang parsing/execution support to the kernel? Why not just glue
together loader executables that load the shell script?"

[1] The dead simple Mach-O universal binary loading code:
[http://fxr.watson.org/fxr/source/bsd/kern/kern_exec.c?v=xnu-...](http://fxr.watson.org/fxr/source/bsd/kern/kern_exec.c?v=xnu-1228#L563)

[2] The introduction of shebang in UNIX: <http://www.in-
ulm.de/~mascheck/various/shebang/sys1.c.html>

~~~
thingie
It shouldn't be hard to make a binfmt_misc handler that would rip the correct
ELF out of the fatelf thing (or rather a simple tar) and feed it to the
kernel, so you can just type ./fatelf-something. You could also make it a
shell script with the binaries in it that chooses the correct one and
executes it. Both of these solutions work without a single line of kernel
code (really), and I can't see anything that fatelf has and these don't.
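
A sketch of such a handler (assuming, hypothetically, that the fat file is
just a tar of per-arch ELFs named like myprog.x86_64, myprog.arm):

    #!/bin/sh
    # Hypothetical binfmt_misc interpreter: binfmt_misc passes the
    # matched file's path as $1, followed by the original arguments.
    fat="$1"; shift
    arch="$(uname -m)"
    tmp="$(mktemp -d)" || exit 1
    trap 'rm -rf "$tmp"' EXIT
    tar -xf "$fat" -C "$tmp" || exit 1
    exe="$(find "$tmp" -type f -name "*.$arch" | head -n 1)"
    [ -n "$exe" ] || { echo "no binary for $arch" >&2; exit 1; }
    chmod +x "$exe"
    "$exe" "$@"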

------
coliveira
I think this guy is trying to solve a problem that doesn't exist on Linux. On
Linux, every piece of software is compiled for the architecture you want, so
there is no need for a fat binary. Mac OS X needed this because Mac users
mostly run commercial software, so it is better to have a single binary for
all platforms.

------
chanux
I don't believe this is the way it should end, leaving one enthusiast all
broken.

Anyway, I'm not really aware of the background details of the matter.

~~~
mahmud
Enthusiasm is not a substitute for pragmatism; all the other contributors are
just as enthusiastic, but they know their patches are not entitled to be
merged into the main tree willy-nilly, without there being a very _good_
reason.

------
jrockway
I don't see what's stopping him from forking Linux and glibc. If this is
really better, everyone will just use his fork. (It worked for my fork of
cperl-mode, anyway...)

~~~
FooBarWidget
The point is to get patches accepted without forking. Linux and glibc aren't
exactly small. Do you honestly see a single man maintaining millions of lines
of forked code as well as handling packaging and stuff?

~~~
tamas
It's not as bad as it sounds; a great many of the <http://git.kernel.org/>
repos are actual forks of the kernel, regularly pulling changes from vanilla,
resolving the occasional conflict, and maintaining/developing their own
patches.

~~~
FooBarWidget
Actually it _is_ as bad as it sounds. Although git makes it easier to merge
with mainline, in the end it still requires an active maintainer who manually
merges and tests stuff once in a while. Maintaining a fork requires one's
constant attention and diverts one from doing other - maybe more useful -
things.

