
Why systemd? - jamesog
http://blog.jorgenschaefer.de/2014/07/why-systemd.html
======
Sanddancer
I think part of systemd's problem, as much as Poettering et al will try to
deny it, is that it is full of NIH. One of the things this post criticizes,
and Poettering criticizes, is the BSD-inherited daemon() function. Being
curious, I looked at the function's implementation in both FreeBSD and glibc.
FreeBSD's handles pretty much everything the daemon writer themself would
want: it sets the signal handlers and masks appropriately, double forks,
creates a session, sets PIDs unless you tell it not to, and changes to the
root directory unless you tell it not to. Glibc's misses important steps,
like the signal manipulations, tries too hard to create a typical null
device, and otherwise completely misses the point.

The biggest problem I see with systemd is that the developers don't play well
with others. Instead of working with various parties, like the glibc
maintainers, to fix deficiencies elsewhere, they expect developers everywhere
to drop what they're doing to redesign how their projects work, when they work
just fine for the many, many other unix architectures out there. Too much of
systemd is based on magical pixie dust, compatibility be damned, and not
enough on actually making things better.

~~~
cbsmith
> double forks

That's actually not something you want. It turns out that it makes process
management unnecessarily hard. That said, the glibc implementation isn't
terribly good either. The conventional wisdom is not to use either one.

> I think part of systemd's problem, as much as Poettering et al will try to
> deny it, is that it is full of NIH.

The most exasperated criticism I see of systemd is its use of dbus for a
communications infrastructure, because dbus is both a system bus and a desktop
session bus, and everyone associates it with the latter. If they'd just done
the NIH thing and rolled their own communications protocol (like various other
parties), they'd have deflected a lot of that criticism.

Honestly, the NIH syndrome seems at least as prevalent amongst systemd's
critics.

~~~
tankenmate
You need to double fork() in order to ensure that the daemon can't reacquire
a controlling tty after it calls setsid(), and that it will be re-parented to
init.

So personally as a sysadmin that occasionally runs daemons from the command
line I much prefer a double fork(); I don't want the daemon to exit when I log
out.
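For reference, the classic double-fork sequence being discussed looks roughly
like this. This is a minimal sketch of what daemon(3)-style helpers do, not
FreeBSD's actual implementation, and error handling is abbreviated:

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Minimal double-fork daemonization sketch. */
static int daemonize(void)
{
    pid_t pid = fork();            /* first fork: the parent exits, */
    if (pid < 0) return -1;        /* returning control to the shell */
    if (pid > 0) _exit(0);

    if (setsid() < 0) return -1;   /* new session: drop the controlling tty */

    pid = fork();                  /* second fork: the session leader exits, */
    if (pid < 0) return -1;        /* so we can never reacquire a tty */
    if (pid > 0) _exit(0);

    if (chdir("/") < 0) return -1; /* don't pin any mount point */
    umask(0);

    int fd = open("/dev/null", O_RDWR);  /* detach stdio */
    if (fd >= 0) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > 2) close(fd);
    }
    return 0;
}
```

The first _exit() is what makes the daemon survive the login shell exiting,
which is the behavior described above.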

~~~
dalias
Both forking and double-forking are completely wrong behavior for supervised
daemons run as part of any automated system. Forking makes sense for small
systems which lack any automated supervision and where a human admin is
responsible for all process starting and stopping, but it's a bug anywhere
else. Modern Linux has various methods for working around this bug (like the
method of assigning an alternate process for orphaned descendants to be
reparented to, which systemd uses, AFAIK), but really any production-quality
daemon should provide a non-forking way to start it.

~~~
tankenmate
You _have_ to fork; it's the only way to make a new process. Now, exactly
which process does the fork()ing is somewhat academic. Some might argue about
the band-aid approach of the parenting and process-group requirements, but
changing that would mean leaving POSIX (and hence source compatibility)
behind. Allowing any process to set its own parent, process group, or
controlling terminal without very strong bounds is dangerous; kind of like
having Erlang with mutability, it's possible, but it breaks the model (and
all the cuspyness that comes with it) and leads to bad juju.

Requiring init to spawn the final daemon process (and hence getting the pid
from fork()) is a red herring; any process can get a list of its children and
query their proc structures (modulo permissions, but most sane people run
init as root).

init should be as small as possible, doing _only_ what it is required to do;
this is simple anti-fragility at work. The less code and state, the better.
Feel free to stick your process manager, dependency calculators, and helper
daemons in other processes. init should be able to handle something as
catastrophic as a SEGFAULT by just execve()ing /sbin/init again and carrying
on (while logging locally and remotely, of course).

~~~
cbsmith
> You have to fork, it's the only way to make a new process.

I think he was presuming you already have all the processes you need, and this
is just about getting them architected right.

------
dsr_
The reason people use UNIX-like systems is because they work reliably. In
order to make a complex system work reliably, it needs to be easily fixed. In
order to fix a system, a person needs to understand it as well as be able to
make a change in it. And in order to understand a system, it helps very much
if that system is straightforward and lucidly verbose.

I hope systemd will live or die on its merits; I fear that it will take over
via politicking.

~~~
audidude
> In order to fix a system, a person needs to understand it as well as be able
> to make a change in it.

So if I follow you, it would be easier to grok the whole system if all the
code was in different places?

~~~
nisa
Opaque C code that sits in lots of spread-out binaries, plus some end-user
documentation in man pages, doesn't help you a lot if you have to debug an
issue. It's okay from a user perspective, but with systemd (or other complex
systems that rely on this principle) you need to start reading C code, start
gdb, deal with dbus... if it's a toolbox of scripts, a `grep -r <error>` is
often the first step on the way to understanding and learning something and
fixing the problem.

This is more difficult if you have a lot of abstractions and binaries lying
around. You need to start reading (often nonexistent) documentation and
abstract C code...

It likely does not matter for 95% of users and I've rarely had to do something
like this but you are losing some control as developer/sysadmin. For some it's
important as their productivity and job depends on solving such issues fast,
others will never have this problem...

I've never had a problem with systemd myself, though. But if you do have one,
it's difficult to fix on your own.

~~~
audidude
> Opaque C code that sits in 20 binaries and some man pages don't help you a
> lot if have to debug an issue. It's okay from a user perspective but with
> systemd (or other systems that rely on in this principle) you need to start
> reading c-code, start gdb, deal with dbus... if it's a toolbox of scripts a
> `grep -r <error>` is often the first step on the way to understand and learn
> something and fix the problem.

What's opaque about Free and Open Source C code?

Some might argue (this guy included) that statically typed, statically
analyzed C code will result in fewer people having to debug their system code
than the equivalent code written in a particular variant of shell code.

~~~
nisa
> What's opaque about Free and Open Source C code?

Everything, if you are a sysadmin/developer trying to fix an issue. First you
need debugging symbols to pinpoint the problem, then you need to read the
source... it all takes time. E.g. you need to learn about dbus-monitor and
dbus calls and grasp some internal concepts of systemd if something goes
wrong. It takes time and patience you usually don't have or don't want to
spend on such details. For comparison, FreeBSD's rc is only shell scripts:
[http://www.freebsd.org/cgi/man.cgi?rc(8)](http://www.freebsd.org/cgi/man.cgi?rc\(8\))

I don't want to say that one is better than the other, but the latter is for
most folks far easier to debug and modify than the former. However, as I've
said, it only really matters for a few people. But I can understand that they
are not particularly happy about this new complexity. And bugs happen.

~~~
audidude
> I don't want to say that one is better than the other but the latter is for
> most folks far easier to debug and modify than the first.

It's generally bad form to make claims on behalf of "most folks", since you
are in fact a single person. It's a totally valid argument if you say it on
your own behalf.

And yes, bugs happen. But fixing the C code is, in my experience, much easier
than tracking them down in `bash -x`. Especially when dealing with race
conditions between services/triggers/device initialization.

~~~
nisa
Yes. Shell scripts are their own unique kind of hell. I'm really speaking on
my own behalf here: I can read shell well enough to follow and debug issues
in it. However, digging into the internals of systemd and its interactions
with dbus and other binaries is opaque to me.

Maybe it's just a different perspective: as a developer, systemd likely eases
a lot of pains and makes otherwise problematic and error-prone tasks easy,
but as a sysadmin who mostly deals with servers, it sometimes feels like
forced, unnecessary complexity that can introduce difficult-to-debug issues.

~~~
gtaylor
As a sysadmin, I'll take systemd units over SysV init scripts any day. They
tend to be shorter and simpler to read, and I don't have to worry about race
conditions or services not restarting correctly due to varying daemonization
techniques.
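To illustrate how short units tend to be, a complete unit for a hypothetical
daemon (the service name and binary path here are made up) can look like
this:

```ini
# /etc/systemd/system/mydaemon.service (hypothetical service)
[Unit]
Description=Example daemon
After=network.target

[Service]
# the daemon runs in the foreground; systemd handles supervision itself
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because systemd supervises the foreground process directly, the unit needs no
PID-file handling or daemonization boilerplate at all.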

~~~
nisa
Yes. I didn't intend to argue about that. For that, systemd is perfect and I
like using it too. I mean problems such as a hanging boot in an lxc
container, where I once got only a red error message (with not a lot on
Google about it at the time) saying that something went wrong. How do you go
from there? It's certainly possible, but it's a lot of work.

I don't say that's the norm and I don't say this happens often but if you
build custom stuff and do "strange" things it's easier to know what's going on
if you grasp the complete system. This is more difficult with systemd.

I believe it's a valid criticism and I realize 95% of users never need to care
about this. However it's still a valid point if you build complex systems that
are not "off the shelf".

------
nextos
We often criticise systemd for being too bloated and for making it hard to
write a drop-in replacement. I totally agree with this line of thought.

However, in my mind it has made several awesome things possible. My boot time
got dramatically shorter when I adopted it, thanks to parallelization.
Besides, daemons now have simple and robust service definitions. SysV had
become a mess!

Lastly, lightweight containers are the real deal for small development tasks
(not for production!). Just one command, systemd-nspawn, and you're ready to
go. Docker is currently a bit more complicated to set up.

Arguably, many features, including containers, should be moved out of systemd.
Right now, more than a monolithic architecture, I think systemd is rather
shipping too many things under the same project umbrella.

~~~
bwood
> Lastly, lightweight containers are the real-deal for small development tasks
> (not for production!).

I've come across this sentiment a few times in the last month, but I haven't
yet heard an explanation other than "VMs are battle-tested and containers
might leak data to each other". Is there something more that I'm missing? Why
aren't containers a good idea to use in production?

~~~
e12e
If we (fairly or unfairly) group Linux's LXC (e.g. Docker) and *BSD's jails
together, the main contrast with "proper" hypervisors (Xen/KVM/VMware/bhyve,
that new thing in FreeBSD 10?) is (the possibility of) full resource
accounting/limitation. Go ahead, run your pi-digit-finder at "100%" CPU, pipe
/dev/zero over an ssh pipe to /dev/null on some box and pipe it to a local
file as well: no other VM or the host will notice. You only get 1 Mb/s, x
cycles of CPU and x MB of disk.

Secondly, assuming a bug in the kernel, one might assume root in a container
can lead to root on the host. BSD jails have been pretty solid for the last
few years AFAIK, but hardware support for virtualization might still get more
of both separation/safety _and_ speed. There have been some bad bugs in (as I
recall) the I/O system in Xen, leading to similar issues... but again, the
last time I saw anything on that was years ago.

YMMV. Generally, Docker doesn't have "run untrusted code, safely, as root" as
a design goal (yet, AFAIK) (not entirely sure about LXC, née VServer, the
underlying technology), so don't expect it to do that. Isolation and security
(esp. without sacrificing performance) are very hard to get right. Or so a
long series of privilege escalation exploits across many different OSes seems
to indicate.

~~~
ldng
> lxc, née vserver

Just to be a little pedantic, LXC was definitely inspired by the pre-existing
VServer and OpenVZ, but it's a different implementation.

A lot of things that are viewed as innovations from Docker really already did
exist in 2006~2007. Maybe a bit cruder but not that much. OpenVZ was very
close to that. AUFS is the only real innovation as far as I know.

Anyway, Docker guys were smart enough to ride the cloud wave and hype the
thing. I'm pretty sure Parallels missed the boat because they went the
opencore way (OpenVZ/Virtuozzo).

------
mhogomchungu
Lots of people do not seem to understand the criticism of systemd.

systemd = init system + a whole lot of other things.

When people complain about systemd, they usually do not complain about what
it does or how it does it in the init-system part. That part is pretty solid
as far as functionality is concerned.

When people complain about systemd, they usually complain about the "whole
lot of other things" part. Lots of people have different complaints, and my
biggest one is about udev.

udev is a core component in any modern Linux system. I see systemd absorbing
it as nothing but a political move and a power grab. They could have left
udev as an independent project and just created a dependency on it.

The "whole lot of other things" part will, by definition, make any other
project that is just an init system seem very much deficient in functionality
when compared to systemd.

~~~
navait
This init process debate has brought out one of the worst elements of people
in the FOSS community: treating some FOSS technology as an extension of their
identity.

Let's keep some perspective here. We are literally just talking about an init
system. There are many others you can use. systemd is not taking away your
freedom in any meaningful sense of the word "freedom". Debates should be
about the technical merits of systemd, _not_ baseless accusations that its
developers are making a power grab.

~~~
lmm
> systemd is not taking away your freedom in any meaningful sense of the word
> "freedom"

Systemd has made it impossible for me to run an up-to-date Gnome on FreeBSD.
That feels a lot like taking away my freedom.

~~~
clarry
There's legal freedom, and there's practical freedom. Sadly, the latter is
seldom talked about.

You can be stuck in a maze, in a pit, or under a tree trunk. You're legally
free, since no law or copyright license says you may not get out. Yet you're
stuck, and if you can't get out, you're not free at all.

When it comes to software, it is the size, complexity and complicated
interdependencies that make the maze. As the system grows, an individual's
practical freedom erodes. For instance, I complain about the web a lot. Even
with a browser's source code and a permissive free license, there's close to
nothing I can change about it in practice. It would be far too much work for
me to maintain millions of lines of code and remain compatible and
interoperable with a huge, fast-changing stack of technology... and the more
you diverge, the harder it becomes. It's an uphill battle, and at some point
you have to give up if you're not the giant. So the four software freedoms
are reduced to two (or fewer). In practice I don't have the freedom to do
what I want.

I think the FSF's stance of reducing freedom to a merely ethical issue is
alarming. How would Dr. Stallman have felt if he had gotten his printer driver
with a free license but so much code and complexity and dependency that it
would've been impossible for him and a small team of hackers to actually port
it and make it run on his system?

Of course, systemd alone is not approaching that level of complexity. Except
that it's not only systemd... the trend seems to be that all aspects of a
modern OS are getting larger and more complicated. A little here, a little
there, it all adds up and becomes _a lot, everywhere._ It's a sad trend.

There was a time when you could've picked up a book that pretty much
described all of your system's hardware at a low enough level that you could
start writing your own bootloader and OS from scratch, with the knowledge
that you could interface with all the logical hardware devices. And it didn't
take hundreds of thousands of lines of code. Now the amount of accumulated
cruft we depend on is so large that the idea of writing your own not-a-toy OS
is laughable...

Standing on the shoulders of a giant is necessary and helpful, but when you
have to do too much of it, it stifles innovation and encourages monocultures.

I don't have anything against systemd per se; it doesn't represent my ideals,
doesn't bring me features I want, and so I don't want to use it. If Ubuntu and
Debian want to use it, they're of course free to do so. However, I am
concerned that with the notion that "systemd has won", its proponents are
going to assume it to be everywhere and build future software with the
attitude that it is okay to depend on it -- who cares about the people who
would prefer not to use it, let them suffer for disagreeing with the king!

Before the systemd rage, sysvinit might have been "the winner" on Linux in the
sense that it was most widely used and supported. But we didn't have this sort
of polarizing "sysvinit has won, fuck everyone and everything else" notion.
Other distros with other init systems have happily coexisted all along, and
these other distros haven't had to constantly fight a growing dependency on
one specific init system.

~~~
pdkl95
> How would Dr. Stallman have felt if he had gotten his printer driver with a
> free license but so much code ... that it would've been impossible ... to
> actually port it and make it run on his system?

While I am not an expert on legal language and how it can be used by lawyers,
I believe RMS and the FSF at least attempt to address this in the GPL
version 2 (there is probably a similar requirement in the GPL version 3, but
it is more complicated, so I'm not sure which requirement corresponds to this
quote from v2):

"The source code for a work means the preferred form of the work for making
modifications to it."

Code that is so large and complex that it is not practical to understand or
port wasn't written by a human. Such code is probably a template or macro
expansion of the real source, and the "preferred form" would be the
pre-expansion source.

This may not cover all ways of obfuscating the code, but "preferred form" _is_
trying to be as inclusive as possible.

~~~
clarry
> Code that is so large complex that it is not practical to understand or port
> wasn't written by a human. Such code is probably a template or macro
> expansion of the real source

Or it was written by thousands of people over twenty years.

GPL doesn't protect you from accumulated cruft, complexity and snarly design
that makes it hard to understand let alone modify a system.

------
ultramancool
Am I the only one who's disgusted with this bloated, convoluted, dbus-
dependent pile of crap? I mean, c'mon, binary log files? I'll pass, thanks. It
replaces way more than it needed to.

I prefer the BSD-style philosophy and a nice, simple rc.conf. I used to run
Arch till it got infected with this garbage too; it slowly drifted away from
its BSD-style roots. So recently I just gave up and moved to FreeBSD. Not a
single regret so far.

~~~
drdaeman
I think I'm mostly fine with journald (as a concept). At least I can explain
the reasoning for it to myself. A switch from non-structured to structured
data provides a significant advantage, and indexes are useful. I've spent too
much time grepping multi-gigabyte log files. Sure, the relatively modern
(RFC 5424) syslog protocol has structured data too, but in my experience most
software never bothered to use it. So forcing a switch by introducing another
protocol with structured data baked in isn't too terrible an idea.

My only issue with journald's binary log files is that they're in a
homebrewed custom format that isn't accessible by any standard means. Plain
text files aren't directly readable by humans either, but we have cat, less,
and similar tools to send such data to a terminal (sometimes an iconv is
required, say, if log entries contain filenames with characters outside the
ASCII range), and those tools are available on every modern OS out there.

Personally, I think a compromise that'd satisfy me (YMMV) would be either an
industry-standard log format (like, maybe, sqlite: it's fairly simple,
universal and omnipresent nowadays) or, even better, storing data in text
files but keeping accompanying binary index and metadata files that store the
non-human-readable stuff (like the hash chain; I bet no sane human would ever
check cryptographic log integrity by hand) and provide additional information
for faster machine access.

But why journald is a tightly-coupled part of systemd instead of a separate
project is beyond me. I can't deny that systemd has some good things about it
too, but it's too terribly monolithic and unhackable compared to
mostly-scripted init systems. And such negative points easily outweigh the
positive ones.

~~~
Sanddancer
The biggest problem with dropping the syslog protocol altogether is that
there's a huge amount of other stuff that speaks syslog. Things like
networking gear use it for the same purposes as a normal *nix box, and getting
them to switch is going to be like pulling teeth. With your particular case,
I'd say you should look at rsyslog and/or syslog-ng. Both of them have
backends that talk with actual databases, so you can have all your tools
readily available, and can additionally dump to plain text and/or email
messages at the same time. As to the why for journald, it seems very much like
the rest of Poettering's MO of NIH. He doesn't seem too capable of working
with other project makers to get his goals handled, so just does everything
himself, to the detriment of the overall community.

~~~
drdaeman
Well, journald has syslog compatibility layer and can talk syslog, so
supporting any existing software is not an issue. It doesn't speak syslog by
itself, so it probably can't forward logs to another networked syslog server,
but I don't see anything that prevents implementing this, if necessary.

The point is, journald also introduces a new protocol that's oriented toward
logging structured data. This way it not only provides a feature but forces
developers to think about structuring their log output in a machine-readable
manner. That, I believe, is journald's raison d'être, and it's one I
personally accept.

Just my opinion, though.
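For context on that structured protocol: a journal entry is essentially a set
of FIELD=VALUE pairs (MESSAGE and PRIORITY plus arbitrary free-form fields),
which is what sd_journal_send(3) in libsystemd submits on the program's
behalf. A rough sketch of just the field-formatting idea follows; DEVICE is a
made-up custom field, and this is not the real wire or on-disk format:

```c
#include <stdio.h>

/* Build a journald-style structured entry as newline-separated
 * FIELD=VALUE pairs. Real submission goes through sd_journal_send(),
 * which also adds trusted fields like _PID for you. */
static int format_entry(char *buf, size_t len,
                        const char *msg, int priority, const char *device)
{
    return snprintf(buf, len,
                    "MESSAGE=%s\n"
                    "PRIORITY=%d\n"
                    "DEVICE=%s\n",   /* hypothetical custom field */
                    msg, priority, device);
}
```

The win over free-text syslog lines is that a consumer can filter on DEVICE
or PRIORITY directly instead of regex-matching the message text.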

~~~
jude-
Compatibility isn't the problem. The problem is the use of structured binary
data _itself_ for logs. Logging binary structures instead of raw text makes it
very difficult to recover from crashes or misbehavior, since the logging
facility (journald or otherwise) must ensure that the log's structure on disk
is consistent at all times.

I've seen more than my fair share of journald corrupting its own log due to
unclean shutdowns. If I'm going to be grepping the journald log file anyway to
reconstruct it (possible, but not easy, since journalctl is useless here),
then why bother using it at all? It fails at the very task it was built for.

~~~
pdkl95
> journald corrupting its own log due to unclean shutdowns

Exactly! What the binary log advocates seem to be missing is that those
unclean shutdowns (often called "crashes") are probably the very thing you
are going to want to search for in the log. In general, few people care how
(or if) the log records that the cron daemon yet again ran the hourly
maintenance without any errors. What _everybody_ who has had to search a
system log cares about is "what happened right before the crash".

The data that needs to be committed to the log _successfully and immediately_
almost by definition arrives at a time when you do not have time for the
complexity of an atomic addition to a database. Often there is barely time
for any disk write at all.

The only way to make such a system log useful would be to make adding events
synchronous. As nobody wants to deal with a syslog that is 10,000x slower (or
worse), the only sane option is what we always did: make the writes simple
and immediate, and defer any fancier features.

Have they never heard of "log parsers" before? If you want the log in a
searchable DB (which can be very useful), you build that from the original
log, either with an async daemon or deferred via cron or similar.

------
uselessdguy
Disclaimer: I develop uselessd, probably have a warped mindset from being a
Luddite who values transparency, and evil stuff like that.

The author of this piece makes the classic mistake of equating the init
system with the process manager and the process supervisor. These are, in
fact, all separate stages. The init system runs as PID 1 and, strictly
speaking, its sole responsibility is to daemonize, reap its children, set the
session and process group IDs, and optionally exec the process manager. The
process manager then
defines a basic framework for stopping, starting, restarting and checking
status for services, at a minimum. The process supervisor then applies
resource limits (or even has those as separate tools, like perp does with its
runtools), process monitoring (whether through ptrace(2), cgroups, PID files,
jails or whatnot), autorestart, inotify(7)/kqueue handlers, system load
diagnostics and so forth. The shutdown stage is another separate part, often
handled either in the initd or the process manager. Often, it just hooks to
the argv[0] of standard tools like halt, reboot, poweroff, shutdown to execute
killall routines, detach mount points, etc.
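The division of labor described above can be sketched as a minimal PID 1:
spawn the process manager, then do nothing but reap children (including
orphans reparented to init) and respawn the manager if it dies. The
/sbin/procmgr path is hypothetical, and a real init would also handle
signals, shutdown, and logging:

```c
#include <sys/wait.h>
#include <unistd.h>

/* Spawn the (hypothetical) process manager in its own session. */
static pid_t spawn_manager(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        setsid();                                    /* own session/pgroup */
        execl("/sbin/procmgr", "procmgr", (char *)NULL);
        _exit(127);                                  /* exec failed */
    }
    return pid;
}

/* PID 1's whole job: reap children and keep the manager alive. */
static void init_loop(void)
{
    pid_t mgr = spawn_manager();
    for (;;) {
        int status;
        pid_t pid = wait(&status);  /* also reaps orphans reparented to us */
        if (pid == mgr)
            mgr = spawn_manager();  /* manager died: respawn it */
        else if (pid < 0)
            pause();                /* no children: sleep until a signal */
    }
}
```

Everything else (dependency ordering, resource limits, socket handling) can
live in the exec'd manager, keeping PID 1 itself nearly crash-proof.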

To stuff everything in the init system, I'd argue, is bad design. One must
delegate, whether to auxiliary daemons, shell scripts, configuration syntax
(in turn read and processed by daemons) or what have you.

sysvinit is certainly inadequate. The inittab is cryptic and clunky, and
runlevels are a needlessly restrictive concept to express what is essentially
a named service group that can be isolated/overlayed.

Of course, to start services on socket connections, you either use (x)inetd,
or you reimplement a subset or (partial or otherwise) superset of it. There's
no way around this; it's a choice between handling more on your own and
delegating. In systemd's case, they do it themselves to support socket
families like AF_NETLINK.

As for systemd being documented, I'd say it's quite mediocre. The manpages
proved to be inconsistent and incomplete, and for anyone but an end user or a
minimally invested sysadmin they are of little use whatsoever. Quantity is
nice, but the quality department is lacking.

sysvinit's baroque and arduous shell scripts are not the fault of using shell
scripts as a service medium; they stem from sysvinit's aforementioned cruft
(inittab and runlevels) and the historical lack of any standard modules. BSD
init has the latter in the form of /etc/rc.subr, which implements essential
functions like rc_cmd and wait_for_pids. Exact functions vary from BSD to
BSD, but more often than not, BSD init services are even shorter than systemd
services: averaging 3-4 lines of code.
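As an illustration of that brevity, a typical FreeBSD rc.d service built on
rc.subr is only a handful of lines. The service name and binary path here are
made up, and this fragment depends on /etc/rc.subr, so it's a sketch rather
than something runnable standalone:

```sh
#!/bin/sh
# PROVIDE: mydaemon
# REQUIRE: NETWORKING

. /etc/rc.subr

name="mydaemon"                    # hypothetical service name
rcvar="mydaemon_enable"
command="/usr/local/sbin/${name}"  # hypothetical path

load_rc_config $name
run_rc_command "$1"
```

All the start/stop/status/restart plumbing comes from run_rc_command; the
script itself only declares what to run.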

A unified logging sink is nothing novel; it's just that systemd's is the
first of its kind to gain momentum, albeit with its own unique set of issues.
syslogd and kmsg were still passable, and the former also seamlessly
integrated with databases.

Once again, changing the execution environment is a separate stage and has
multiple ways of being done. Init-agnostic tools that wrap around syscalls are
probably my favorite, but YMMV.

As for containers, it's about time Linux caught up to Solaris and FreeBSD.

~~~
vidarh
> The init system runs as PID 1 and strictly speaking, the sole responsibility
> is to daemonize, reap its children, set the session and process group IDs,
> and optionally exec the process manager.

The process manager gets killed. How do you recover?

If you have respawn logic for it in PID 1, how do you log information about a
failure to respawn the process manager?

Perhaps you build in some basic logic for logging. Where do you _store_ the
data? What if the user level syslog the user wants you to feed data to can't
be brought up yet, because it depends on a file system that is not yet
mounted?

There may very well be alternatives to the systemd design, but I've yet to see
any that are remotely convincing, in that most of them fail to recognise
substantial aspects of why systemd was designed the way it is, and just tear
out stuff without proper consideration of the implications.

Most proposed alternative stacks to systemd fall down on the very first
question above.

I agree with you that it doesn't seem like a great idea to stuff everything in
the init system, but I don't agree that "one must delegate" unless the
delegation reduces complexity, and I've not seen any convincing demonstrations
that it does.

I'd love it if someone came up with something that provided the capabilities
and guarantees that systemd does with independent, less coupled components,
though.

But there's no way I'm giving up the capabilities systemd provides.

~~~
hueving
Isn't that a pretty narrow corner case? I can count the number of times the
process manager has been killed on one hand.

~~~
edwintorok
You also depend every day on another process that is special in much the same
sense as the process manager: Xorg. If Xorg dies, all your desktop
applications die. By your line of reasoning, Xorg should be moved into PID 1
too, which is definitely not a good idea.

I'm not saying Xorg has never crashed; it did, rarely, when running RC code
or proprietary drivers. In fact, I've probably had as many Xorg crashes as
kernel panics, which says something about how stable Xorg is. Still, I
wouldn't want to run it as PID 1, where a crash would really bring down
everything.

~~~
vidarh
I don't understand how you come to the conclusion that putting Xorg in pid 1
would be even a remotely fitting comparison.

For starters, as an example, I have 100 times as many servers as I have
desktops to deal with; for a lot of us, Xorg is not an important factor. But
the process manager is vital to all of them, server and desktop alike, if you
want to keep them running. If the process manager fails, it doesn't matter
that it wasn't Xorg that took things down.

Secondly, that X clients fail if the server fails is not a good argument for
moving Xorg into pid 1 too, because it would not solve anything. If pid 1
crashes, you're out of luck - the best case fallback is to try to trigger a
reboot.

Having (at least minimal) process management in pid 1 on the other hand serves
the specific purpose of always retaining the ability to respawn required
services - including X if needed. (Note that it is certainly not _necessary_
to have as complicated respawn capabilities in pid 1 as Systemd does).

Having Xorg in pid 1 would not serve a comparable purpose at all: if it
crashes, the process manager can respawn Xorg. If you then need to respawn X
clients, and be able to recover from an Xorg crash, there are a number of ways
to achieve that which can work fine as long as your process manager survives,
including running the clients under a process manager, and have them interface
with X via a solution like Xpra, or write an Xlib replacement to do state
tracking in the client and allow for reconnects to the X server.

Desktop recoverability is also a lot less important for most people: every
one of our desktops has a human in front of it when it needs to be usable.
Most of them are also rebooted regularly in "controlled" ways. Most
applications running on them get restarted regularly. People see my usage as
a bit weird when I keep my terminals and browsers open for a month or two at
a time.

On the other hand, our servers are in separate data centres and need to be
available 24x7, and many have not been rebooted for years; outside of Android
and various embedded systems, this is where you find most Linux installs.

While we can remote reboot or power cycle most of them, with enough machines
there is a substantial likelihood of complications if you reboot or _shudder_
power cycle (last time we lost power to a rack, we lost 8 drives when it was
restarted). Even with "just" reboots there is a substantial chance of problems
that require manual intervention to get the server functional again (disk
checks running into problems; human error the last time something was updated;
etc.).

That makes it a big deal to increase the odds of the machines being resilient
against becoming totally non-responsive.

~~~
edwintorok
I think you raised an interesting point here: 'for a lot of us Xorg is not an
important factor'. I agree. The same could be said about some of the features
systemd provides that cause a lot of flames (binary logs). It has been said
before that systemd is monolithic, and this is probably what makes switching
so hard.

It is all-or-nothing, whereas if you could gradually replace the old
sysvinit/policykit/consolekit/etc. stuff with systemd/logind, problems during
that transition could be debugged more easily. You could also choose not to
replace some components where the replacement is broken.

------
contingencies
Case in point: today, rebuilding an X11 desktop system on Gentoo, some weird
set of dependencies around GNOME beneath the window manager wanted to pull in
systemd. I finally worked out a way around it, but it wasted half an hour of
my time.

My take: Containers are not well managed by general, daemon-oriented process
supervisors with a localhost-oriented purview. However, those supervisors
would do well to use container-related features to better secure and manage
daemons as appropriate. In the future, processes will more likely be managed
across clusters by parallel-capable supervisory systems with high-availability
goals and knowledge of network infrastructure configuration, load and
topology. Fewer and fewer people will even see the init system, except perhaps
behind a logo or as it flashes past while booting their device in debug mode.

(Edit: stumbled on [http://www.gossamer-
threads.com/lists/gentoo/user/284741](http://www.gossamer-
threads.com/lists/gentoo/user/284741) which explains the scenario .. would
hate to be on BSD)

------
callesgg
I think systemd actually clears up a lot of stuff, as the article describes.

The main thing that scares me is the binary logging format. I can think of
some benefits, but mostly it just seems scary. I guess I will get to see later
whether the benefits outweigh the rest.

~~~
tracker1
I was actually pretty happy with the way upstart handles logging... it's about
as transparent, and easier to deal with.

------
fsniper
I'm new to the systemd debate, so I'm still reading and reviewing the
situation. But the more I read, the further I get from systemd.

In principle everybody agrees on the need for a new and modern init system.
But I'm not even sold on that. sysvinit is still holding its ground with extra
tools and doing its job cleanly. By introducing a fully reimplemented and
still controversial system with many dependencies, one that requires much of
our existing software to be reimplemented, we are not helping; we are muddying
the waters.

And what's the fascination with boot times?

Nowadays on desktops nobody boots; you boot once and hibernate/suspend
forever. And for servers, if you are rebooting, you are doing something wrong.
So effort pulled from building controversial init systems and put into
optimizing hibernate/suspend in the kernel would be better spent.

~~~
johnny22
Boot times are the least important reason to use systemd.

------
stephen_g
There's a lot of negativity going on here...

As far as my experience goes, I've found it actually works really well on all
the servers I've moved to CentOS 7 and on the Fedora desktop I play around
with (my main dev machines are Macs) it's significantly improved boot time...

I'm sure there are some valid concerns about design and such, but as far as my
usage in production goes, I can't say I've had a single problem with it... It
also makes things a lot easier when I need to write unit files, compared to
the messy init scripts of before.

------
dschiptsov
Out of a confused mind. :)

There is _no fundamental problem_ that it "solves" which other UNIXes
presumably still have. The problem does not exist. AIX, Solaris, *BSD and many
old-school Linux guys will tell you that.

Also, any old-school guy will tell you that a kitchen-sink, put-it-all-in
design is the wrong way.

btw, user processes supervision is a task of an OS kernel, which it handles
via a bunch of specialized syscalls, not of some "man-in-the-middle" user-
level daemons.

There is actually nothing to talk about, except some ambitions and bad
designs.

~~~
icebraining
_btw, user processes supervision is a task of an OS kernel, which it handles
via a bunch of specialized syscalls, not of some "man-in-the-middle" user-
level daemons._

I'm pretty sure /sbin/init runs in userspace even on *BSDs and Solaris, and
does process supervision.

~~~
dschiptsov
You would be surprised how few processes it supervises. Gettys (remember
those?), and what else?

Init scripts have nothing to do with /sbin/init. Surprise?

