
Revisiting How We Put Together Linux Systems - vezzy-fnord
http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html
======
throwaway000002
So the proposal is another layer of indirection, à la namespaces, à la where-
have-I-seen-this-before.

I'd appreciate it if they'd stop thinking that what they're doing is for the
benefit of everyone instead of in their particular interests.

I wish someone, anyone, would officially pull udev out of systemd for the sake
of all Linuces.

But in reality, I'm betting on hypervisor-based micro OSes as the way to go.
We don't need another "app" format, like what Docker/libcontainer and this
proposal are gravitating toward. We already have one, it's called
HTML5/REST/HTTPS. Get that into your skulls, people! A tar-ball is an app that
serves the directory tree, okay. A POST virtualizes this, like a copy-on-write
fork(). Now you have to determine how you virtualize on sessions/user
logins/etc. There is nothing in this decade that needs to be done any other
way!

~~~
asuffield
> So the proposal is another layer of indirection, à la namespaces, à la
> where-have-I-seen-this-before.

Every year or two another group of people gets it into their heads that there
is "something wrong" with the way distributions do things, and they have "the
answer". (Often the answer involves a lot of new names for things.) These
groups always make a lot of noise, launch a bunch of new projects to reinvent
a lot of wheels "the right way", and draw a bunch of media attention.

All of the users who like this sort of thing immediately desert the previous
group and leap to this new one. Last year's group somehow manages to become
even louder as their userbase dwindles.

Meanwhile, the established distros dig into what this latest group is doing,
and finally manage to cut away enough of the rhetoric to identify what real
advantages it has. This usually boils down to a couple of features buried
behind several gigabytes of blogs, mailing lists, wikis, and enthusiastic-but-
content-free rants. The established distros usually say "yeah, that could be
handy" and implement these new features, without rewriting the world.

Sound familiar?

This one looks like some folks decided docker wasn't complicated enough and
renaming everything would let them make a new thing. If you look through their
summary proposal, it's a list of things which you can already do, albeit
without there being any single distro you can install today which has all of
them working out of the box (largely because cryptographic verification and
docker are still works-in-progress - they'll get there).

~~~
marcosdumay
That's normally harmless... But this time it's systemd developers doing the
dance, and they have enough leverage to hold several distros hostage to their
new ways.

Or maybe that sort of thing is what it takes for somebody to finally replace
systemd.

------
otterley
Hidden in these sorts of proposals is an underlying assumption that being able
to install the newest version of some arbitrary program the day it's released
and have it Just Work is (a) something people actually need, and (b) a
solvable problem that doesn't involve significant tradeoffs in usability,
cognition, third-party integration (which is more important in server
environments than people realize), or security.

I've been working with Linux systems for over 20 years now. Despite the
grumblings of wet-behind-the-ears programmers to the contrary, having the
newest version of package Foo is almost never necessary and one can generally
get things done on a 1, 2, even 5 year old version of some operating
environment. In fact, I'd argue that allowing time for software to mature is a
Good Thing -- in the meantime, bugs get fixed, regressions in libraries are
reverted, semantic versioning violations are repaired, etc, which reduces the
amount of needless headache you have to deal with when you just want to Get
Work Done.

Of course there are exceptions (I would never ask anyone to write server code
under Ruby MRI pre-2.1), but those conditions are readily dealt with: you
compile what you need (perhaps statically), put it in a tarball in /opt, make
some symlinks to /usr/local (or update your PATH and other environment
variables) and move on to the next actual problem.
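
That recipe is trivially scriptable, too. A rough sketch in Python (the
version number, tarball name, and tool list are made up for illustration):

    import os, tarfile

    PREFIX = "/opt/ruby-2.1.5"           # where the self-built tree lives
    LINK_DIR = "/usr/local/bin"          # symlinks visible on everyone's PATH

    # unpack the pre-built tree under /opt
    with tarfile.open("ruby-2.1.5-static.tar.gz") as tar:
        tar.extractall("/opt")

    # expose just the entry points via symlinks
    for tool in ("ruby", "gem", "irb"):
        src = os.path.join(PREFIX, "bin", tool)
        dst = os.path.join(LINK_DIR, tool)
        if not os.path.exists(dst):
            os.symlink(src, dst)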

Yes, standards and "old packages" can be annoying if you're used to a greater
level of control on your personal equipment. But fences and constraints can be
good: they keep your eye on the problem you're trying to solve; they keep the
goalposts from moving; they minimize combinatorial compatibility problems; and
they allow people to leverage their zones of expertise at the appropriate
layers.

~~~
FrozenCow
> Of course there are exceptions {...} put it in a tarball in /opt, make some
> symlinks to /usr/local

This becomes unmanageable once you need to do that for multiple applications.
Different applications need different versions. I'm not that familiar with
Ruby, but you can imagine different versions of Ruby itself needing different
versions of system libraries. An upgrade of your OS could become incompatible
with the Ruby version you just compiled yourself.

It's good that some people are looking for solutions to this problem. It's a
worthwhile effort, even though it might not be directly applicable to
everyone. Same was true for Systemd a number of years ago.

~~~
otterley
In practice it hasn't really been much of a manageability problem at any of
the (reasonably large) companies whose servers I've maintained. There are
perhaps five or six mission-critical packages that need to be kept up, and
since we only support two target distributions, the order of complexity is
quite small.

The key is to define your constraints clearly and not to deviate from them
unless you have to.

------
tbingmann
The article completely misses the point. He's basically ranting about how they
can't test systemd on all distros, because every distro does things
differently. How to fix it? Virtualize distros, yay. That may fix your
problem, but no one else's.

The real point is that, due to distro fragmentation, Linux gives the casual
user an inconvenient shared-library linkage experience. At the same time, the
distros are really good at packaging software, which gives us the "one-click
install" experience we're so used to. But they do so at the price of
compatibility, so we have the current distro lock-in situation.

Anyway, as an upstream software developer, I see the other side of the coin,
library linkage: when my program starts, the dynamic linker resolves the big
list of symbols that my program contains and finds me matching libraries. You
can extend this paradigm to icons, locales, and other support files as well.

So who tells the dynamic linker which symbols to match? Well, the contents of
the shared libraries, and these already contain versioned symbols (e.g. this
memcpy@1.0 does this, memcpy@1.1 doesn't do that anymore). The easiest solution
to the matching problem is static linking. But that's not what we really want,
because we may actually want to replace symbols with better versions (due to
security fixes, faster implementations, super-set of features, etc).

And who versions symbols? Well, usually the upstream developers of the
important libraries do. And they already do that remarkably well, but it could
be better. If your program uses an upstream that doesn't version things, well,
bad choice; static linking is probably the best solution here.
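
You can watch this matching machinery from the outside, too. A small sketch
(Python wrapping the real readelf tool; /bin/ls is just an example binary)
that lists the versioned symbols a binary demands from its libraries:

    import re, subprocess

    # dump the dynamic symbol table of an example binary
    out = subprocess.run(["readelf", "--dyn-syms", "/bin/ls"],
                         capture_output=True, text=True).stdout

    # undefined (UND) entries are what the dynamic linker must find
    # elsewhere, each tagged with the version it requires, e.g. GLIBC_2.14
    for line in out.splitlines():
        m = re.search(r"UND\s+(\w+)@([\w.]+)", line)
        if m:
            print(f"{m.group(1)} requires version {m.group(2)}")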

So what I would propose is to move the problem down to a linkage problem, with
better consensus between binary creators and upstream authors on how to
version symbols. And if my system currently doesn't have a matching symbol
installed? Just download it from the distro's well-versioned repositories;
maybe not the whole gtk+ library, but a diff of it against another version.

In the end we would not have a package installer, but a fetcher-of-missing-
versioned-symbols-for-a-binary installer. This model is pretty far-fetched,
though, since the current method of compiling code into monolithic shared
libraries is much simpler. On the other hand, the Linux dynamic linker is
already very intelligent; maybe it's time for the large distros to cooperate
on this level.

------
kelnos
I usually try not to be blatantly negative, but this is just stupid. Most
users do not care how quickly new versions of software get to them. The
current distro packaging system is just fine. Leave it to Poettering to come
up with something complicated for no good reason.

The funniest part is that he's using systemd as an example. That's _exactly_
the kind of software that I want to change very rarely, and I want those
changes to be vetted and integrated by someone intimately familiar with the
distro I'm running. The idea that I'd run an upstream-packaged version of
something like systemd is absurd.

The proposed packaging system is just another system incompatible with all the
others out there.

Want a standardized system image? Build one from your distro of choice (guess
what: your users don't care about that either), uninstall dpkg/rpm/whatever,
make a disk image. Automate the process so it's reproducible. Done.
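
A rough sketch of that automation for a Debian-flavored distro (Python; run
as root, and the suite, paths, and crude "uninstall dpkg" step are all
illustrative):

    import shutil, subprocess

    root = "/tmp/image-root"

    # bootstrap a minimal tree from your distro of choice
    subprocess.run(["debootstrap", "--variant=minbase", "stable", root],
                   check=True)

    # crudely drop the package manager's state so nothing "manages"
    # the image after the fact
    shutil.rmtree(root + "/var/lib/dpkg")

    # pack the finished tree into a single reproducible artifact
    subprocess.run(["tar", "--numeric-owner", "-C", root,
                    "-cJf", "system-image.tar.xz", "."], check=True)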

~~~
FrozenCow
The current packaging systems have problems. You cannot install just any
version of any application on your system, but a lot of people want that.

A nice example is games. You want to install a game on your system without
working around the package manager. That was very hard to do if you weren't on
the distro that the game was built for. For instance, if the game was just
released, there was little chance it would work on Debian stable...

Steam has solved this problem by using their own package manager and their own
set of 'approved' libraries that other games must link to. Steam always ships
with the set of 'approved' libraries, just so that it can side-step the
libraries on your system.
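
Mechanically there's little magic to it: the runtime mostly amounts to
pointing the dynamic linker at Steam's vetted libraries before launching the
game. A sketch (the runtime path and game binary are made up):

    import os, subprocess

    env = dict(os.environ)
    # prepend the vetted libraries so they win over the system's copies
    env["LD_LIBRARY_PATH"] = ("/opt/steam-runtime/lib:"
                              + env.get("LD_LIBRARY_PATH", ""))
    subprocess.run(["./the-game"], env=env)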

The same is true if you want to build a piece of software that seems to be
incompatible with your system: it needs a different GCC, it might need a
different version of Gnome libraries.

Also, disk images aren't as simple as you make them seem. If you have a system
and want to upgrade one of your applications (but leave the rest as they
are!), you aren't going to like the current package management tools.

These kinds of problems happen often for a lot of people and a solution is
highly appreciated. That said, the solution in the article probably isn't the
best.

~~~
otterley
> You cannot install just any version of any application on your system, but a
> lot of people want that.

That's the price of shared libraries (which you're trading off for easy
security updates).

If you don't like this, you're still free to run statically-linked programs.

> Steam has solved this problem by using their own package manager and their
> own set of 'approved' libraries that other games must link to. Steam always
> ships with the set of 'approved' libraries, just so that it can side-step
> the libraries on your system.

So it's OK for Steam to do this, but not the OS vendor? I don't follow your
logic.

~~~
FrozenCow
This could have been solved at a different level. If the right architecture
had been in place, Steam wouldn't have needed to solve it.

You can have multiple shared libraries of the same name, but different major
versions. Applications that need the same version can use that same version.
Applications that need different versions can use different versions. The
package repository shouldn't be the conflicting factor.

~~~
otterley
Most distributions have provisions for installing multiple major versions of a
shared library (they change the package name to libfooX to accommodate this).
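
For what it's worth, here's a sketch of how that coexistence looks from a
process's point of view (libfoo is hypothetical; distros ship e.g.
libpng12.so.0 next to libpng16.so.16 in exactly this way):

    import ctypes

    # The dynamic linker keys on the full soname, so both files can
    # coexist on disk and each application binds to the one it was
    # built against:
    #   /usr/lib/libfoo.so.1 -> libfoo.so.1.0.9   (package libfoo1)
    #   /usr/lib/libfoo.so.2 -> libfoo.so.2.3.1   (package libfoo2)
    old_api = ctypes.CDLL("libfoo.so.1")   # app built against major 1
    new_api = ctypes.CDLL("libfoo.so.2")   # app built against major 2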

------
honr
I agree with _most_ [1] of the issues he is aiming to solve. But the approach
would have been more useful if he had built it on top of a superior existing
system, namely Nix & NixOS. Docker is increasingly popular while missing most
of the problems NixOS (and the new vision of systemd) solve, so I'd say it is
not wrong to assume there is definitely solid demand for this kind of app
"packaging" (virtualization).

Tying this to systemd is an "interestingly" bold move, to put it mildly.
People don't switch to btrfs just for the sake of an initrd replacement, after
all, and forcing them to is openly asking for replacements. So I would be very
careful if I wanted to see that vision materialize.

[1]: One obvious point of discussion is declaring exact versions of library
dependencies, which in most systems leads to highly undesirable results: just
compare the horrible world of maven packaging (i.e., the java world, mostly)
to the efficient (but curated) world of debian packages. But when you have
exact declarable dependencies (where any dependency dictates exact versions of
its own dependencies), _and_ reliably reproducible builds (the game changing
factor), and hopefully some level of automated testing, you will be fine: You
can quickly upgrade _all_ your package portfolio to the newest version of the
"security fixed" library, have the world get rebuilt against the newness, and
get everything verified (using the tests that you have. You have some, don't
you?). The problem of transferring only what actually changed is non-trivial
but completely solvable in various ways (Example: only send diffs, and issue
new encryption/hashes for the diffs through an automated trusted system that
does the diffing).
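
A toy sketch of that last idea (Python; a real system would use bsdiff or
similar rather than this naive prefix/suffix delta):

    import hashlib

    def make_delta(old, new):
        # keep the common prefix and suffix, ship only the changed middle
        p = 0
        while p < min(len(old), len(new)) and old[p] == new[p]:
            p += 1
        s = 0
        while s < min(len(old), len(new)) - p and old[-1 - s] == new[-1 - s]:
            s += 1
        return p, s, new[p:len(new) - s]

    def apply_delta(old, delta):
        p, s, middle = delta
        return old[:p] + middle + old[len(old) - s:]

    old = b"libfoo 1.0 ... lots of unchanged bytes ... tail"
    new = b"libfoo 1.1 ... lots of unchanged bytes ... tail"
    d = make_delta(old, new)
    assert apply_delta(old, d) == new

    # the automated trusted system then hashes/signs the delta itself
    print(hashlib.sha256(d[2]).hexdigest())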

~~~
FrozenCow
I totally agree. I was very surprised Nix wasn't mentioned once in the
article. They seem to be solving the same problem, but in a way that should
work across different setups (no btrfs requirement for instance).

Systemd has established a number of good (imo) standards for distros to adhere
to (most notably service management and logging), but the standard the article
is describing doesn't seem like the way to go.

Whatever the case, I'm very interested in seeing how things will progress with
Nix, Systemd and others.

------
ris
"We want to allow app vendors to write their programs against very specific
frameworks, under the knowledge that they will end up being executed with the
exact same set of libraries chosen."

Do we? So if lazy vendor X doesn't get around to updating their application
which "depends" on a version of zlib/openssl/libpng that turns out to be
horribly insecure, we can't swap in a patched version?

I don't see how this is any better than the library bundling problem.

~~~
akira2501
Don't we? I thought this was just called "static linking."

~~~
kjs3
It is, or as I like to call it, "keeping that old insecure library around
_forever_".

------
papaf
How is this not Embrace Extend Extinguish?

1. Systemd as an alternative to Upstart and init.

2. Systemd does everything.

3. Systemd becomes more important than the distributions themselves.

I actually like their vision - I just wish my linux boxes were not part of it.

~~~
joepvd
A very apt summary.

Can anyone recommend something from The Other Side of the fence for
desktop/laptop use? One of the BSDs? A spinoff from Solaris? Or, something
else?

I currently use archlinux, and value the rolling release approach with
precompiled binaries.

~~~
justincormack
PC-BSD is the obvious option targeted at the desktop. Plain FreeBSD, NetBSD,
or OpenBSD are also options. The Solaris spinoffs are pretty much
server-targeted, so not really recommended.

------
ealexhudson
I think this is awesome. The ability to deploy new OS versions, and have a
full trustable roll-back, is particularly great.

I'm not totally convinced about encoding a user database into the volumes,
mainly thinking about network directories, but I guess the disk users will
only be local static accounts anyway. Solving the network user database
'problem' properly would be a huge win (and I don't count LDAP as a solution,
although a well-tuned LDAP configuration does work pretty decently).

------
VLM
The largest failing of the article is the "Existing Approaches To Fix These
Problems" section, where it's acknowledged that this is attempt number one
zillion to fix some old problems, and here is a slightly different approach...
The specific problem is the lack of analysis of past failures to inform the
new proposal. We're just going to end up with POSIX / LSB again, except maybe
less successful, because it's obviously going to be immensely more
complicated. It's possible that analysis was done but not documented.

The tone of the article is very strange, like someone with no experience of
distros or sysadmin work came in with a blank slate from an unrelated field,
perhaps sorting-algorithm research. I simply can't identify with most of his
postulates, so it's no surprise I was all WTF at the proposed solution. Or
rephrased: it's an academically interesting idea in an abstract sense, but
seems to be unrelated to, and not useful for, the existing real world. (Edited
to emphasize that this is a particular instance of the general pattern that
new opinions from uninformed outsiders are often interesting to look at,
sometimes even insightful as a fresh pair of eyes, but are usually a truly
awful source of advice.)

------
uint32
> systemd cabal

That phrase isn't doing them any favours.

Software packaging sucks. It will continue to suck until there is a concerted
effort from a quorum of stakeholders to fix it. I'm not holding my breath.

------
anon4
> The classic Linux distribution scheme is frequently not what end users want,
> either.

THAT'S THE WHOLE POINT YOU GODDAMN MANIAC. It's not what they want, it's what
I want.

> Many users are used to app markets like Android, Windows or iOS/Mac have.

STILL THE FUCKING POINT. Making desktop linux more like windows is like
selling beef steak and lamenting you're having a hard time getting vegetarians
to buy it.

Fuck, this made me mad. I couldn't get past the "Users" section, he might
redeem himself farther on, but I doubt it.

~~~
RDeckard
You'll probably get downvoted, but I gave you an upvote since it made me
laugh.

------
aruggirello
I beg to differ. IMHO package management in Debian and derivatives is nothing
short of brilliant. If a security issue arises in a library, or a new feature
is implemented, you update it and all of the software using that library will
benefit.

OTOH it would be great if apt could automatically take advantage of btrfs
snapshots to better ensure atomic updates, or to allow rolling back updates
that break something. This would be much better than just choosing a
different kernel at boot.
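
Even before apt grows that integration you can approximate it by hand. A
rough sketch (assumes / is a btrfs subvolume and /snapshots exists; paths are
illustrative):

    import datetime, subprocess

    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    snap = f"/snapshots/pre-apt-{stamp}"

    # read-only snapshot of the root subvolume, then the actual upgrade
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", "/", snap],
                   check=True)
    try:
        subprocess.run(["apt-get", "-y", "upgrade"], check=True)
    except subprocess.CalledProcessError:
        print(f"upgrade failed; roll back from {snap}")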

~~~
dsr_
It would, unfortunately, tie package management to a particular filesystem.

As an optional item, I like it; letting it connect to zfs, btrfs, LVM, or
what-have-you on a plugin basis seems plausible.

------
BlueZeniX
Two words: "Use NixOS".

~~~
DanKlinton
How does this help me, from the perspective of the developer of a "desktop app
for tracking stars" or a "video game", who wants to build once, publish once,
and have non-technical users able to install it simply on (Debian, Ubuntu,
Fedora, CentOS, NixOS, ...) without depending on distro packagers?

Like I can for OSX or Windows?

~~~
ansible
Users would have to install NixOS (or GuixOS) as the base operating system;
there is no getting around that.

But, after that, it seems to solve many of the versioning and dependency
issues.

~~~
BlueZeniX
Not necessarily: nix the package manager runs fine on other distros, as well
as FreeBSD, OSX, Windows, SmartOS, ...

Nix doesn't need fancy filesystem shenanigans to do its job: build isolated
packages with complete runtime dependencies.

NixOS is built on top of it to provide atomic system-wide upgrades and
rollbacks.

------
regularfry
So if, just to post hypotheticals, I actually wanted systemd but _didn't_
want a crazily overcomplicated snapshot system forcing me into btrfs' less
well-tested areas, would that be possible in this brave new world?

------
rakoo
So, union mounts? This looks _a lot_ like what plan9 used to propose:

http://www.cs.bell-labs.com/magic/man2html?man=mount&sect=1

with the added feature that each sub-fs is independently distributable, which
is very good. I fear tying it to btrfs (or any specific fs) might be a bit
much though.

I don't know if that could be done by mounting upstream tarballs as read-only
filesystems in userspace... maybe a crazy idea.
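
Not that crazy: the FUSE tool archivemount does roughly this already. A tiny
sketch (assumes archivemount is installed; the tarball name and mountpoint
are made up):

    import subprocess

    # expose the upstream tarball as a read-only tree under /mnt/app
    subprocess.run(["archivemount", "-o", "readonly",
                    "upstream-app-1.2.tar.gz", "/mnt/app"], check=True)

    # ... use /mnt/app like any other directory, then unmount:
    subprocess.run(["fusermount", "-u", "/mnt/app"], check=True)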

------
Zardoz84
Slax did something similar with aufs + squashfs. This looks very similar, but
with a standardized way of handling dependencies and paths (something that
Slax modules did not have, at least the last time I used it). The idea is very
powerful and interesting.

------
stemc43
OMG - finally.

