Why systemd is winning the init wars and other things aren't (utoronto.ca)
129 points by onestone on Feb 12, 2014 | 153 comments

Interesting follow-up, this stuck out for me:

"To start with and as usual, social problems are the real problems."

This really resonates with me. Once any organization gets to a certain size, getting change to happen is more "social problem" and less "technical problem." When you don't select for people at that stage who can move an entire organization over to a new way of doing things, your organization will stagnate and die. It used to really annoy me at Google when technical evaluators would dismiss those "soft qualities" in candidates. I had a fairly famous Googler tell me point blank that anyone could change the organization by shouting loudly enough. Which I tried to explain was just as false as saying anyone can solve a problem if they write enough code. But alas, they were not ready to hear that at the time.

The author makes an excellent point that much of the root of the "disagreement" can be summed up with a strong sense of "I wouldn't have done it that way." Which is no doubt true, but it has to be combined with "I would do it like this, and I'm willing to spend the next 3 - 5 years listening to the community and addressing their concerns." I certainly see that as a much bigger commitment.

yes indeed social problems. or as one of my favorite quotes goes: "Trust no one! The minute God crapped out the third caveman, a conspiracy was hatched against one of them!" --Col. Hunter Gathers, OSI (Venture Bros parody of Hunter S. Thompson)

More likely, three conspiracies were hatched, one against each one of them.

One of the reasons why the Blue Man Group perform in groups of three is because if you have one person, you have one person. If you have two people, they either agree or disagree. Three is the minimum number of people it takes for there to be an outsider.

This article is a bit off the mark. Daemontools and runit didn't replace init because they weren't designed to replace init, or to be exclusive technical choices in general. Non-exclusivity means no displacement, just happy co-habitation. Daemontools and runit work great under sysv init, BSD init, or under systemd.

There's a little piece of insight which seems to escape many: there is no serious technical reason why systemd must run as init. Systemd could have been written to coexist in a number of various ways following previous models, running under init and doing things for init.

Systemd is winning this war because it created the war, by conflicting with sysv init.

> Systemd is winning this war because it created the war, by conflicting with sysv init

Except Upstart also has config files instead of shell scripts. And both of them support all your old sysv init scripts. And writing the config files is so much easier to get right the first time than rolling your own goddamn shell script by default.

On a sort of related note, I have never had an interaction with daemontools that wasn't miserable. I always find the fact that it gets waved around like a bloody shirt in these discussions terrible and terrifying.

The article specifically says what you're saying: "I don't think any of them have really been developed with replacing SysV init in Linux distributions or elsewhere as a goal. DJB daemontools certainly wasn't."

Systemd is winning this war because it created the war, by conflicting with sysv init.

Upstart predates systemd; systemd just has more traction.

Several of the systemd developers worked on upstart or worked on porting Fedora to upstart. Lots of the design decisions (socket-based activation vs. event-based, cgroups integration) were made because of things that were (and still are) broken in upstart.

Sure. What I think is unclear, and what I intend to highlight, is that systemd (and yes, upstart) have intentionally and unnecessarily created this technical conflict.

well, I wouldn't call the technical conflict unnecessary - if there wasn't a need for a more featureful init system, then there wouldn't have been a motive to write a new one. As for the social conflict, that's just what you get with social groups experiencing change.

Your assertion is strictly false. All features of this new process invocation system could be implemented without making a "new init system." That's my primary point in commenting here. It is a false dichotomy.

> All features of this new process invocation system could be implemented without making a "new init system."

Er, no. A major feature of systemd is that it is declarative. You have hooks for running custom commands, but a unit file is declarative and easy to parse.

If you want to acquire more intelligence about an init script, you have to resort to disgusting hacks like parsing comments in order to get a dependency system working. And socket-based activation? Sysvinit is a dead end. It's the counter to "keep it simple, stupid": when you fail to capture enough information in your "simple" model, you are going to end up with more issues than if you had created a slightly more complex system.
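To make the contrast concrete, here is a hedged sketch (the daemon name and paths are invented for illustration): a sysvinit script can only express its dependencies through LSB comment headers that tooling has to parse back out, while a systemd unit states the same information declaratively.

```
### BEGIN INIT INFO          (sysvinit: dependencies live in comments)
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Default-Start:     2 3 4 5
### END INIT INFO

# The equivalent declarative systemd unit, mydaemon.service:
[Unit]
Description=Example daemon
After=network.target syslog.target

[Service]
ExecStart=/usr/sbin/mydaemon --foreground

[Install]
WantedBy=multi-user.target
```

Either form can be read mechanically, but only one was designed for it; the LSB headers are exactly the "parsing comments" hack being criticized here.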

I think the point is that you could have the old init system launch systemd, which would then work as it pleased.

Then you have two init systems. That's even more complicated than having only systemd.

We already do. I think most people in this discussion aren't realizing that the current sysV init system is an extremely small pid 1 (/sbin/init), with most of the logic in external rc scripts. Moving rc scripts to systemd declarative syntax is fine.

The important part with respect to software architecture is less complexity in pid 1. Subprocesses of init can be safely updated and restarted. More code in init is a real problem -- that's why systemd as pid 1 is controversial.

At the same time, there is a point to be made for having things in a single process - this makes it much easier to ensure that the crucial parts of the init system are available and working.

Can you elaborate? I don't understand this perspective.

I'm not sure this was the point of parent, but anyway, why would you want something like that? It's not like sysvinit has any redeeming feature, and you would still want to have systemd .service files. I could understand an argument like "/bin/systemd does too many things and ought to be split up more", but the rationale for having bad old sysvinit launch /etc/init.d/systemd.sh escapes me. Not to mention that you definitely don't want both systemd and sysvinit to attempt to launch scripts in /etc/init.d, when they don't have a .service...

That is indeed my point. The reason: pid 1 is special on unix. Here's a pretty decent article on the subject: http://ewontfix.com/14/

Say something comes along at that point and kills systemd. Now everything below it is orphaned and has init as its new parent.

Sysv-init does not know what to do with those processes other than reap zombies.

Now you have a system full of unmonitored processes, just as without systemd, and no standardized way of restarting the services.

This is why systemd needs to run parts of its logic in pid 1 to be most compelling.

You can launch systemd without letting it be pid 1, but you lose functionality it can't provide from outside pid 1.

But if systemd was pid 1 and something killed it, you would have a kernel panic. How is that better than a system with a bunch of unmonitored processes? At least with the latter you can safely bring down the system, instead of having a hard crash.

Pid 1 is special - it can't be killed.

It can crash due to bugs it can't handle, or it can voluntarily shut down.

Try it:

  vidarh@opus:~$ sudo kill -9 1
  [sudo] password for vidarh: 
No effect.

In the case of systemd, if it runs into a non-recoverable situation, crash() in core/main.c gets called, which then tries to create a core dump and, as an absolute last resort, spawn a shell to give an admin a chance to take corrective action. That is already a step up from your typical init, assuming they manage to get the part of systemd that runs as pid 1 (by no means all of systemd runs as pid 1) as stable and bug-free as your usual init.

Of course there's an uncertainty there, and they'll have to prove they can keep that part rock solid or it'll be useless.

There are plenty of ways to kill pid 1. The major concern is simply software error, which is another reason why the current systemd design is poor. But if you'd like a concrete example, go ahead and attach gdb and use your imagination.

Systemd can run without being pid 1. In fact, systemd can run without being root, and by default systemd now starts a systemd user instance when a user session is started (there's a pam module to do this), or you can run systemd with "--user".

So if anyone wants to run systemd as a process monitor like Daemontools, separate from pid 1, they can do so.

But there are technical reasons for systemd to run as init: a key feature is to precisely track whether or not a service is running.

sysv init cannot do this. It can track whether or not an individual process started from inittab is still running, but for large multi-process servers this is not all that practical as a process-monitoring method. Hence the proliferation of process-monitoring applications.

More importantly, since there's no ordering or dependency control, I've never seen a system rely on init for process monitoring this way for all its services. In practice, a bunch of pieces get started in the init scripts, and all monitoring ends up placed externally, or you start a process monitor like Daemontools.

The problem is that all of these process monitors depend on a relatively benign environment where they are not messed with, and where they themselves are so rock-stable that they never end up orphaning the services they start.

In practice, while Daemontools for example is well written and as stable as it can be, by virtue of running outside pid 1 it is not immune to the effects of the surrounding system. It can, and does, end up orphaning monitored processes in a variety of circumstances (say the OOM killer runs amok after your system ran ludicrously low on memory).

When that happens, unless your app was exceedingly well written, and the vast majority of server processes I have to deal with on a daily basis are not, your process is now unmonitored and you have no good way of controlling the process other than killing it and restarting. Finding that pid files have been overwritten, or are empty (say the disk ran full while one was being written), with multiple instances running, is a fairly common scenario.
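The orphaning failure mode is easy to reproduce by hand. A hedged sketch (the /tmp path and timings are arbitrary): a service that backgrounds itself and lets its parent exit escapes whatever started it, and its new parent becomes pid 1 (or the nearest subreaper).

```shell
# The intermediate shell backgrounds a "daemon" and exits immediately,
# so the sleep is orphaned and reparented to pid 1 (or a subreaper).
sh -c 'sleep 5 & echo $! > /tmp/orphan.pid'
sleep 1   # give the kernel a moment to reparent the orphan
# Its parent is no longer the shell that launched it:
ps -o ppid= -p "$(cat /tmp/orphan.pid)"
```

A supervisor that relied on being the daemon's parent has now lost it; this is exactly the situation a pid file is supposed to paper over.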

By running as pid 1, systemd is protected against being killed. By then applying cgroups it can precisely track whether or not what was spawned is still running, even if it forks more stuff. By applying this to the boot process, it can provide this functionality to everything that gets started during boot.

This is functionality that init does not provide, and none of the process-monitors running outside of pid 1 can provide.

It may not solve problems you have, but I've had to deal with the fallout of process monitors running outside of pid 1 more than I care to remember.

It's funny you mention the systemd pam module. I was recently configuring a new Linux Mint install and enabled LDAP, and all of a sudden logging in via the desktop or ssh would hang for over a minute. I tracked it down to the pam_systemd module. Not sure what the problem in there was, though.

"But there are technical reasons for systemd to run as init: A key feature is to precisely track whether or not a service is running"

This is not true. Tracking a running service doesn't require being init. Any process can do it.

"sysv init cannot do this."

Nor should it. Services running under sysV init can, however.

"More importantly, since there's no ordering or dependency control"

Yes, there is -- it's implemented by the rc system. It's crude, but it's also just a bunch of shell scripts and completely pluggable. The systemd logic could trivially be inserted here either in place of /etc/rc, or by something that sits directly under init and drives the rest of the process.
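For readers who haven't looked inside it: the rc machinery really is just a loop over shell scripts. A stripped-down sketch of the idea (the directory name follows the common SysV convention; this is illustrative, not any distribution's actual /etc/rc):

```shell
# Run the start scripts for the chosen runlevel in lexical order;
# the S10/S20 numeric prefixes are how sysv rc encodes ordering.
RC_DIR=/etc/rc3.d   # assumption: runlevel 3
for script in "$RC_DIR"/S*; do
    if [ -x "$script" ]; then
        "$script" start
    fi
done
```

Swapping in something that computes a real dependency graph at this layer would leave /sbin/init itself untouched.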

"In practice, while Daemontools for example is well written and as stable as it can be, by virtue of running outside pid 1 it is not immune to the effects of the surrounding system. It can, and does, end up orphaning monitored processes in a variety of circumstances"

Any process can daemonize away from a process manager. systemd adds cgroups for tagging or containing process trees -- this is possible under runit/daemontools and it would only take minor changes to the supervise process to instantiate the cgroup and supervise accordingly.

To be clear: Any process can add cgroup support to track forked children. Solving this problem by adding cgroup support has absolutely nothing to do with becoming init.
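As a sketch of what "adding cgroup support" means in practice (cgroup v1 paths are assumed, creating groups needs root, and $DAEMON_PID is a placeholder): any supervisor, pid 1 or not, can put a service into a fresh cgroup and later enumerate every process in it, because forked children inherit their parent's cgroup.

```
mkdir /sys/fs/cgroup/cpu/mydaemon
echo "$DAEMON_PID" > /sys/fs/cgroup/cpu/mydaemon/cgroup.procs
# Later, every descendant, double-forked or not, is listed here:
cat /sys/fs/cgroup/cpu/mydaemon/cgroup.procs
```

Nothing in that sequence requires the supervisor to be init.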

"By running as pid 1, systemd is protected against being killed."

No, that is false. There is no such protection -- killing pid 1 is easy. Rather, by running as pid 1 systemd will cause a kernel panic and bring down everything with it.

"By then applying cgroups it can precisely track whether or not what was spawned is still running, even if it forks more stuff. By applying this to the boot process, it can provide this functionality to everything that gets started during boot."

sysV already exports most boot ordering into subprocesses. It is trivial to achieve these tasks with systemd as a child of init.

"This is functionality that init does not provide, and none of the process-monitors running outside of pid 1 can provide."

It is true that init doesn't provide these features, nor should it. Your second statement is false -- a common misconception as I've hopefully explained.

If you have any further technical questions about how one CAN perform all these actions as a child of init I would be happy to explain in detail.

Great comment.

The thing that strikes me as the very weirdest about all of this is that many of the systemd proponents seem to be incredibly animated and aggressive about systemd, but most of their arguments are either non-technical or completely wrong. Like, what's the motivation? Why have so many people gotten to this mentality?

runit seems to have been designed to be able to replace init: http://smarden.org/runit/replaceinit.html

It's possible, but it's not designed to. It was designed to run under init, as a daemontools clone. Explained further here: http://smarden.org/pape/djb/daemontools/noinit.html

"In contrary to Richard Gooch, I suggest not to implement service dependencies and runlevel handling in the Unix process no 1, /sbin/init, keep it small and simple, that is why I wrote the runit package"

Thank you for your excellent insight, as I would have never considered nesting inits.

"none of these alternate init systems did the hard work to actually become a replacement init system for anything much"

And yet again, the anti-init-choice people show how they still don't understand the argument many of us have against the systemd change.

This accusation presupposes that a change was necessary or desired in the first place. Sorry, no, some of these things should not be tied together. Without that tying, we already have (known, tested) tools that cover most of these features.

Yes, I understand that some people have needs that require more (or different) features in their init process. So they should use systemd or whatever else solves those needs. Requirements and uses for general purpose computers vary a lot, though - especially in an environment like UNIX that encourages customization. My needs, for example, were mostly met by OpenRC's reworking of sysvinit. Some minor customization solved that problem completely. I recognize that some people see systemd as a good fit for their needs. Why do the anti-choice people refuse to recognize that other people might have different requirements?

Again, I'm fine with systemd, as an option. It's the bundling and takeover of all the other tools that is a large part of the problem. Blaming others because we didn't implement your made up requirements is another part. Acting like a monopoly and tying other projects together to force upgrade because of vertical integration is yet another part that makes me question motive in addition to the technical issues.

You want us to support systemd? Disconnect it from other stuff like init (pid 1), the logger, and IPC (dbus). Allow all those to remain as they were previously. Let systemd be just the process launcher/manager, and let all of those parts work stand-alone with existing tools. Note: providing more features when your tools are used together is fine (and expected). This way, the software can stand on its own, and if it really is as big an improvement as the systemd supporters suggest, the migration will happen naturally over time.

If, on the other hand, using one tool continues to have the requirement of trading many other system-level tools that I already know and use, for unproven newcomers that seem to ignore the lessons of the past, well... I'm sticking with what already works.

The dispute here is philosophical. You want a distribution that follows The Unix Way. Where init does its little job and dbus does its little job and so on. Small, simple tools.

But there is another way. A way that believes that when you unite the core tools into one powerful process you can do cool things. These cool things have been proven, from basics like faster boot times to more advanced features like saving/restoring random seeds, automounting, SELinux integration, and so on.

You are arguing "this isn't Unix!!!" but that is the whole point. You can win an argument with a mountain climber by pointing out he is going downhill, but not a skier; since for a skier, going downhill is the point. These Linux distributions are not temporarily disoriented mountain climbers accidentally heading away from Mt. Unix. They are deliberately skiing away from it at a high rate of speed. What they will find at the bottom of the slope is an interesting question, but simply pointing out that their strategy is bad for mountain climbing is neither here nor there.

What I am saying is, if you want to run an OS that believes in the Unix Philosophy, the shortest path is to install Unix. You are going to have about as much luck convincing Fedora that they ought to be Unix as you would convincing Redmond.

There's also a huge amount of redundancy inside, and between:

- SysVinit

- xinetd

- atd

- crond

- supervisor

- syslogd

and the other things systemd replaces.

'Small, single purpose tools' is Unix philosophy. But DRY is engineering philosophy. Each repeated piece of logic doubles the scope for errors.

One could also argue that 'doing one thing well' is handling services.

> There's also a huge amount of redundancy inside, and between, SysVinit, atd, crond, supervisor, syslogd and the other things systemd replaces.

Such as? I'm pretty sure code shared between these systems lives in (or could easily be moved to) shared libraries, specifically to avoid this problem. But this is orthogonal to the design of the init system.

> But DRY is engineering philosophy. Each repeated piece of logic doubles the scope for errors.

While DRY is a great engineering principle within the scope of a single project, it doesn't justify excluding alternative implementations or entangling unrelated concerns. Specifically, DRY is not an excuse to forbid competing init system implementations, even though having more than one creates code redundancy. Moreover, "doubling the scope for errors" is meaningless from the user's perspective in the context of init systems: if there are X errors in systemd and Y errors in OpenRC, I will not be plagued by X+Y errors because I will only be running one of them at a time.

> One could also argue that 'doing one thing well' is handling services.

Ah, but "handling services" is not a well-defined task. To some, it means doing something like what sysvinit does--reaping orphaned processes and nothing more. To others, it means doing something like what systemd does--not only managing processes, but also managing devices, sessions, storage, power management, cgroups, system IPC, and network interfaces (and possibly more in the future).

> Specifically, DRY is not an excuse to forbid competing init system implementations

No-one is forbidding anything. You can run Debian with any init you choose. All we ("systemd proponents") want is that when upstream software decides to depend on something systemd provides, the Debian maintainers don't have to (they can, if they feel like it) bend over backwards to replicate all systemd features for non-systemd platforms. Not demanding that unpaid volunteers do extra work is not the same thing as forbidding other init systems.

> You can run Debian with any init you choose.

When layers above the init system begin to make hard dependencies on it, running systemd will be unavoidable in practice. This is the problem, since it either takes away my choice of init system, or requires unpaid volunteers to work on removing the dependency. Neither is desirable, and the grandparent's argument is that this could all have been avoided if the systemd developers had made better design choices.

Don't get me wrong, I want systemd to be an option. I like the idea of using cgroups to manage services. I do it every day. What I do not want, however, is for systemd to be a requirement for having a working system.

Systemd is controlled by Red Hat in a way in which critical system components, including the kernel, haven't been controlled before: by a single corporate entity.

That's what we know about this company from an old (2007) article:

> "When we rolled into Baghdad, we did it using open source," General Justice continued. "It may come as a surprise to many of you, but the U.S. Army is 'the' single largest install base for Red Hat Linux. I'm their largest customer." [1]

It is better to go with a grass-roots solution, even one that is technically inferior, that isn't being influenced by a single vendor or government (especially one that has a tendency to indiscriminately infect other people's systems [2]).

Also, the Interface Stability Promise [3] by the systemd team is just a promise, nothing more. Will Red Hat keep it if it decides, at some point, that it no longer serves its bottom line? I wonder if it can be considered legally binding.

[1] http://archive09.linux.com/feed/61302

[2] https://www.youtube.com/watch?v=vILAlhwUgIU

[3] http://www.freedesktop.org/wiki/Software/systemd/InterfaceSt...

Originally posted in this thread: https://news.ycombinator.com/item?id=7210064

You do understand that systemd is open source, right? If they decide to leverage their "control" into something the rest of the community doesn't like, we fork it.

> If they decide to leverage their "control" into something the rest of the community doesn't like, we fork it.

I hope so. My concern is that as systemd grows in size and more and more software depends on its interfaces and components (as in GNOME with logind), at a certain point it could take an insurmountable amount of resources to maintain a fork. At that point it might be easier to give up on Linux and build around some other kernel and userland. I hope this will never happen, though.

Also, there is a difference between steering the developments in the preferred direction and outright destructive actions. As in https://en.wikipedia.org/wiki/Boiling_frog.

Interesting. Most of my work with Systemd has been via CoreOS, which is linked to GregKH, who was actually a SuSE dude.

In userspace the Unix way is about disconnected utilities connected by pipes, but that hasn't been uniformly true at more of the "system" level (well, GNU did originally have that philosophy all the way through, even down to the kernel, but most Unixen haven't). The BSDs, for example, generally pride themselves on tighter coupling in the core OS than Linux has traditionally had. You can swap out some components from a default FreeBSD install, mostly more application-level ones (sendmail->postfix, clang->gcc), but the base system is integrated and does things one way, supporting some customization but not every kind of wholesale surgery you might want to do on it. You certainly can't swap out the init system for another one in any kind of supported way. The same is pretty much true of most other Unixen, such as Solaris (which has a pretty integrated init system, too).

I generally like the utilities+pipes model, but for system stuff it's seemed like false advertising to me for a long time anyway: the Linux kernel is monolithic, and many of its features (like cgroups) don't make much sense unless you have some userspace counterpart configuring/arbitrating them and a uniform way of making sure everything gets set up consistently. Traditionally this is handled by a tangle of shell scripts with distribution-specific conventions to make sure they don't step on each other's feet (Debian has piles of this in Debian policy). That's a unified system in practice, because you can't really arbitrarily rewrite the init/configure/reconfigure scripts or swap things out and still have a working system that is Debian-integrated. The scripts are literally separate files on disk, but in practice they're a spaghetti-code implementation of Debian Configuration, a monolithic system that handles initialization and services and package configuration and can't be modified without extreme care if you want things to keep working in the way all the other packages (and apt, dpkg-reconfigure, the diversions system, etc.) expect.

>You are arguing "this isn't Unix!!!" but that is the whole point. You can win an argument with a mountain climber by pointing out he is going downhill, but not a skier; since for a skier, going downhill is the point.

Besides, Linux is not UNIX either. And it's 2014 already; we found that some UNIX choices (like "don't dictate policy" in the X server) were bad, and that others led to underpowered tools.

It's not like we're still on text terminals, running 2-3 daemons at most, each with simple needs, and most of our work is constrained in the textual domain.

You just described both my development environment, and my production environment. Neither needs an overly complicated init system to work.

Really. You develop on the equivalent of a VT100 terminal? You don't use a web browser, or Xwindows, just a tty? Are you sure about that?

I would generally agree with this.

This philosophy (along with it being Free Software) is why I chose to migrate to Linux in the first place, so long ago: it was Unix. Ya, ya, "Unix-like", technically, due to the trademark on Unix.

That last statement is very strange:

> Fedora that they ought to be Unix

As somebody who wore an authentic Red Hat fedora for many years, this makes no sense. They've certainly called themselves a unix (in the generic, no-trademark sense), and an alternative to Unix (tm, aka Solaris, HP-UX, and others derived from Bell Labs IP): http://www.redhat.com/solutions/migration/migrate-from-unix-...

Wikipedia, on fedora: "OS family: Unix-like"


When the term "Linux" is used in general, we mean a unix. The distinction has only ever been a trademark thing.

So yes, refugees from Windows coming over to distros that have always been unix(-like) and trying to change them into some sort of Windows-like junk is a problem. Some of us left that world of bad design and little choice a long time ago, and have no intention of returning.

> They've certainly called themselves a unix (in the generic, no trademark),

At one point, they probably were a Unix. But Unix as a place and Unix as a goal are two very different things. Both skiers and mountain climbers find themselves at the tops of mountains.

> When the term "Linux" is used in general, we mean a unix.

You mean a Unix. I mean Linux. There is no "we".

> So yes, refugees from Windows coming over to distros that have always been unix(-like) and trying to change them into the some sort of Windows-like junk is a problem.

I don't believe you. Where are these vanguard Unix-style Linux distros? RHEL? Adopting systemd. Debian? Adopting systemd. SUSE? Adopting systemd. These are distributions from the days of old. They have formed a consensus that systemd-style Linux is the future.

I mean I think it is fair to say that the Linux community is divided on this issue. But characterizing the systemd/dbus/ufs folks as "Windows refugees" is totally inaccurate. Systemd has the backing of important community members of the Linux community.

>When the term "Linux" is used in general, we mean a unix. The distinction has only ever been a trademark thing.

Not really. The implementation of the kernel, userland features etc are quite different between traditional derived unices and Linux.

But it sure is a full-blown POSIX.

"Modern Linux distributions don't tend to honor the UNIX philosophy" is a common theme these days, or so it is said by a vocal group. The GNU tools tend to be more feature-rich than many of their counterparts, often with performance and usability advantages, but more is more, and more code tends to mean more bugs. Fedora is in many ways the leader of this; it and Ubuntu are symbolic to those folks.

The irony here is that they can go fork all of this stuff; it's not like none of it has been forked before. If they don't like Red Hat they can run something else. Getting pissed off and forking is also the UNIX way.

"The Unix Philosophy" is what everyone outside of Unix calls good software design. Using modular, decoupled components is standard industry practice for most developers with just a few years of experience, and these practices have evolved from half a century of engineering experience, such that we know how not to do things, even if there's no right way.

You wouldn't make a "god object" in C++ or Java for example, and nor would you tightly couple every class in your OOP system by refusing to provide abstraction boundaries. Well, you might do if you're strapped for time, but you would probably consider it a "hack" that needs amending.

That's how some people see systemd, as a hack. It's tightly coupled to the kernel it sits on, and it's a leaky abstraction that exposes much of that to userland processes that depend on the systemd api. In turn, those applications are (or will be) tightly coupled to linux.

> This accusation presupposes that a change was necessary or desired in the first place. Sorry, no, some of these things should not be tied together. Without that tying, we already have (known, tested) tools that cover most of these features.

This is the argument we've heard over and over again.

The problem with traditional unix init scripts, together with cron, acpid, inetd, etc is that you need a big brittle mess of shell scripts to tie it all together to form a system that actually works.

While the traditional approach may work alright for a static setup like a server that is booted once and stays on forever, it's not good enough for a desktop, laptop or mobile device. There may be changes in power (mains/battery/low power), network connectivity (ethernet, wifi, mobile) and connecting peripherals (including external disks) and lots of other things which need starting/stopping services or mounting/unmounting partitions. The amount of shell script glue to make these things work is unwieldy and doesn't really work that well.

And distribution maintainers have to make a choice, supporting more than one init system is going to be a burden that will hurt end users in the long run.

Finally, no one is forcing you to use a systemd based distribution. If OpenRC solves your problems, do go ahead and use Gentoo or another OpenRC based system.

"While the traditional approach may work alright for a static setup like a server that is booted once and stays on forever, it's not good enough for a desktop, laptop or mobile device."

Sorry for sounding like a troll, but what you're saying is that the traditional approach works alright for setups where unix is actually good, and doesn't for setups where unix is actually crap.

Let's exclude mobile devices from this picture (because those don't use general-purpose Linux distributions and arguably don't use unix at all, but a unix kernel with specialized - non-unix - userspace) and concentrate on servers and desktops/laptops...

This systemd rage shows exactly why it's 2014 and, while Linux is extensively used and trusted on the server side, it continues to limp along on people's desktops: it constantly reinvents the wheel (badly) with monolithic pieces of bloatware that throw away tried-and-true solutions in favor of unproven experiments. It throws away the old way of doing one thing and doing it well in favor of components that do too much and become so complex that troubleshooting is a nightmare - exactly the kind of complaints we have about non-unix systems...

Need an example? Where are the people that used to criticize the Windows registry now? I just recently almost(1) had to throw away my entire home dir on Ubuntu just to try and reset Kontact, because in this day and age the configuration structure of this stuff is far worse, and it's almost impossible to separate one component's configuration files and folders from everything else. And I'm not even going to start ranting about how in the name of $DEITY a mail/calendar application needs MySQL running in the background, or about how there is not a single decent mail client for Linux right now. Mail... a decades-old problem!

I don't object to systemd itself, but I do object to systemd invading my servers that work just fine without it.

(1) "Almost" because I just learned to live with the occasional crashes and misbehavior caused by some pieces of old configuration still lingering somewhere.

As someone responsible for dozens of servers and 100+ vm's: The "tried-and-true" init solution is utter shit. It's duct-tape and band aid.

The current init solution does too much. Not in initd, but in the massive mess of scripts piled on top of it to get a working system.

You might want to take a look at FreeBSD's init and process management stuff. It's very clean, tidy, well engineered and doesn't involve sysvinit.

Or I just use systemd. My interest in getting rid of sysv init is less work, not having to spend time dealing with an init system not supported by the Linux distros we depend on.

Doesn't matter if FreeBSD's solution is better or cleaner or both, I'm afraid.

I think he was suggesting you try out FreeBSD, which I also recommend.

It's a solid server environment.

I have, but I have no reasons whatsoever to consider a switch - the potential upside is way too small to be worthwhile.


Because the potential upside is tiny. There's nothing in FreeBSD that I can't get in Linux that I have any particular need for. I'm sure there are features that are important and valuable to other people, but that doesn't help me.

That leaves near 0 benefit, and some unknown non-zero cost and risk associated with reduced experience with it, potential application incompatibilities and other unknowns, as well as a time cost of re-imaging servers and re-deploying vm's that would put tens of thousands on the cost side.

That's not exactly a cost-benefit situation that justifies spending time considering it.

If the FreeBSD guys develop something so amazingly much better than Linux that we could save lots of money by switching, that could change. I don't see that as very likely, though.

As it is, FreeBSD vs Linux is a bit like Coke vs Pepsi: if you have a preference, and it's available, pick it, but it makes very little sense to expend lots of effort to replace one with the other.

In FreeBSD land, nobody wastes eons of time hacking out bad code experiments like systemd and then arguing between themselves on whether to distribute them.

It's a stable system you can rely on, year by year. That's not a small upside for a server platform.

This is precisely why I recently trashed my last few Linux machines and moved them over to FreeBSD. Well to be honest it was because it's absolutely tiresome not having a cohesive well documented system to rely on without the politics and collection of dickhead advocates and pseudo-religious leaders around my ass like mosquitos. Oh and ZFS, MAC, audit, decent ACL support, binary upgrades etc.

However I agree that there is little motivation to move if what you have works. I did mine during a hardware refresh.

> I don't object to systemd itself, but I do object to systemd invading my servers that work just fine without it.

I think Lennart and the Red Hat engineering team have more important things to do than covertly install systemd on your server. Please quit with the hyperbole.

And there you exactly pinpoint the problem. The FLOSS movement used to be about caring about other people and projects. RedHat* and its employees only give a damn about their use-cases and the rest of the "community" can go to hell.

Which, of course is a logical stance to take for any corporation, however how easily everyone goes along with it is just appalling.

*this, of course, goes for every corporate entity, though some work with the community better than others.

How have I pinpointed the problem? I'm sorry - are you arguing that the RH guys should be covertly installing systemd in the parent posts servers?

> The FLOSS movement used to be about caring about other people and projects.

Was it? I thought the primary driver was "I had a problem, this is the fix for my problem, maybe it helps you. I share this solution, you share yours." Which basically holds true nowadays: systemd solves Redhat's problems. If it solves enough problems for enough people then it will prevail, and other people might enhance it to fix their problems as well if they deem it a viable foundation. If it doesn't, then systemd will remain a niche solution that fixes Redhat's problems - which would totally be fine with me.

Given that we have the discussion on the debian list, I'm inclined to believe that at least enough people think systemd is it to give it a push. If you don't like it, stick with what you like or roll your own, but don't bitch and moan about Redhat building software that suits their use case primarily and giving it away for free.

It would be nice if that held true. The problem isn't that RedHat developed a solution for their own problem, it's that their crowd are incredibly vocal about other people adopting it, even to the point of verbal attacks and attempted character assassinations against people who disagree with their stated aims. And of course, they will never admit publicly that systemd is developed to solve RedHat's problems, but will try to convince people that they have more noble aims, to solve the problems of the wider community. It's rhetoric. If they just dumped the software and let its technical qualities speak for themselves, there wouldn't be so much drama.

Everyone "goes along with it" because they've evaluated it and decided it fits their use cases too.

Redhat does not have the power to force anyone to adopt systemd.

And yet they do. They pay enough developers working on enough big projects to have the power to force adoption (not to mention the sponsoring of various other things, like infrastructure) - for example by making it a hard dependency of Gnome (which in itself has turned into a third-party-developer-hostile project, but that is another discussion entirely; if you care, go read IgnorantGuru's blog).

So they're using end-user pressure (no systemd = no Gnome) to force adoption.

You're forgetting the traction that OpenRC has gained in the past year: it has already implemented cgroup support and experimental process tracking, and it's moving beyond simple shell scripts.

It doesn't have feature parity with systemd, but it has portability and it's more permissively licensed, and factoring the speed of development I anticipate that it has a bright future ahead of it.

Not to devalue systemd, but I suspect that a lot of Unix problems could be solved if Unix offered a better general-purpose solution for gluing things together other than "big brittle masses of shell scripts". In particular, shell itself is an ugly mess of weird syntax, gotchas, and corner cases.

Granted, init-related scripts have become complex along the years, but we may argue that happened because of the same type of reasoning that's now justifying the move to systemd: the need to do everything in one place and automagically.

You know, even if I would agree with you that "shell itself is an ugly mess of weird syntax, gotchas, and corner cases" that doesn't preclude init scripts from being redone in something other than shell.

"People who don't understand unix are bound to reinvent it, badly."

This has become the tag line for Linux, and unfortunately is starting to reach the previously sacred grounds of basic (server) infrastructure.

The problem is not that they're shell scripts. The problem is that they are not constrained in a way that makes it possible to parse them easily and know what the state of the system is expected to be before and after processing.

Instead you're forced to blindly execute them, and hope the script handles all the appropriate corner cases. In 20 years of maintaining Linux servers, it rarely goes more than days between situations where I come across init scripts that fail to account for some situation in ways that cause me aggravation.

It's not uncommon to come across init scripts that regularly fail to stop a service because they have no proper way of determining which pid the process has, for example.
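To illustrate, here is a minimal sketch of the usual pidfile dance such scripts rely on (the daemon name and path are hypothetical, not from any real package):

```shell
#!/bin/sh
# Sketch of the classic pidfile-based stop logic found in many sysv
# init scripts. PIDFILE and "mydaemon" are hypothetical names.
PIDFILE=${PIDFILE:-/var/run/mydaemon.pid}

stop() {
    if [ -f "$PIDFILE" ]; then
        # If the daemon forked, re-exec'd, or died and its pid was
        # recycled, this signals the wrong process (or none at all)
        # and still reports success.
        kill "$(cat "$PIDFILE")" 2>/dev/null
        rm -f "$PIDFILE"
        echo "Stopped."
    else
        # Stale state: the daemon may well still be running.
        echo "Not running?"
    fi
}
```

The pidfile is the only link between the script and the process, so any drift between the two goes undetected.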

Just the other day I had to deal with a server where the init scripts happily let 3 instances of a server process try to operate on the same dataset; thankfully the app locked the files in question and all three were just screaming bloody murder in their logs, but they were also competing for various resources. The init system of course had absolutely no way of telling that something had gone wrong.
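For what it's worth, that particular corner case can be papered over per-script with util-linux's flock(1) - which is exactly the kind of fix that has to be rediscovered in every init script. A sketch, with a hypothetical lock path and fallback when flock isn't installed:

```shell
#!/bin/sh
# Guard against concurrent instances using flock(1) from util-linux.
# The lock path passed by the caller is hypothetical.
run_single() {
    lock="$1"; shift
    if command -v flock >/dev/null 2>&1; then
        # -n: give up immediately if another instance holds the lock
        flock -n "$lock" "$@"
    else
        "$@"   # flock unavailable; run unguarded (illustrative only)
    fi
}
```

The init system itself still has no idea whether the guard worked, which is the point being made above.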

The only solution to have a sane server setup with SysV-init is to not use it for anything but the basic tasks and to start a proper process monitor. Then I'd rather just get rid of init entirely, since you're pulling in a bunch of other code that can do almost all of init's tasks anyway.

I argue that shell scripts became more complicated because of quite the opposite of what you're claiming. The original Bourne shell already supported piping, control structures, etc. So people thought: hey, why design a new init system that does everything properly when we can just push all this logic to shell scripts and have them act as glue?

It's the perfect example of "worse is better". They chose to keep the core init system simple, but as a result everything else became a complicated mess that doesn't handle corner cases properly. vidarh's comment is spot on.

Towards the idea that you can write them in another language, we in the Perl community ended up developing something of that nature[1]. It was originally intended to make writing daemons in Perl easier by pulling out common functionality (double fork and such), and giving a consistent clean interface. One of the bigger requested things was the ability to use it as basically the entire init script ([2] for an example).

[1] https://metacpan.org/pod/Daemon::Control [2] http://hashbang.ca/2012/04/16/wherein-i-realize-the-bliss-of...

> In particular, shell itself is an ugly mess of weird syntax, gotchas, and corner cases.

Just like C! (syntax is mostly OK though).

This is not coincidence.

If your modus operandi is "works 95% of the time" then you end up with C, shell and sysvinit collectively known as Unix.

Years passed and we discovered a bunch of things. Everyone agrees that Bash syntax could be better, and C is not perfect either (that's why new languages like Rust are being developed). We didn't reach perfection a couple of decades ago, so if someone wants to try and replace the good old tools, I say "go on!": it would be exciting to see something better.

> While the traditional approach may work alright for a static setup like a server that is booted once and stays on forever,

Actually, it's not. Unless the server also has no dynamic resources and nothing in userland ever changes.

I am not seeing any justification for why these tools HAVE to be a single, monolithic tool.

> script glue

That's what shell scripts[1] are FOR - they are the minimal glue that binds the large applications together. Besides, "big brittle mess" is a matter of opinion. Are you trying to tell me that this OpenRC script is a "mess" that needs to be replaced? It's taken from /etc/init.d/cupsd on my current desktop:

    depend() {
        use net
        need dbus
        before nfs
        after logger
    }

    start() {
        ebegin "Starting cupsd"
        checkpath -q -d -m 0755 -o root:lp /run/cups
        checkpath -q -d -m 0511 -o lp:lpadmin /run/cups/certs
        start-stop-daemon --start --quiet --exec /usr/sbin/cupsd
        eend $?
    }

    stop() {
        ebegin "Stopping cupsd"
        start-stop-daemon --stop --quiet --exec /usr/sbin/cupsd
        eend $?
    }
Yes, some scripts get much longer, because they have more complicated needs. Even in an alternate environment such as systemd, those will always be complicated.

Things like changes in power or network addresses have been handled by OpenRC just fine (note: that is opinion, and systemd is better in some cases). More importantly, those requirements do not suggest a need to bind tools together into one package, though; instead, they suggest a well-defined API (or ABI) is needed. In OpenRC, that is simply another runlevel that you trigger on power-plug changes, etc.
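As a sketch of that runlevel-switching approach (the hook, the event strings, and the "battery" runlevel name are all assumptions, not taken from any distro), an acpid hook could map AC-adapter events to runlevels:

```shell
#!/bin/sh
# Hypothetical acpid hook mapping AC-adapter events to OpenRC
# runlevels. The event format shown and the "battery" runlevel
# are assumptions for illustration.
pick_runlevel() {
    case "$1" in
        ac_adapter*" 00000001") echo default ;;  # plugged back in
        ac_adapter*" 00000000") echo battery ;;  # running on battery
        *)                      echo default ;;
    esac
}

# acpid would then invoke something like:
#   rc "$(pick_runlevel "$event")"
```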

Your argument seems to be that using many tools is too complex (even though each tool in isolation is much simpler, reduced in scope, and likely to have fewer bugs), while tying that functionality together is faster to develop (despite making the problems into a complex, interdependent mess). Despite being labeled an "april fools joke", RFC 1925 [2] has important wisdom. In particular, there seem to be a lot of people ignoring Truth #5:

    It is always possible to agglutinate multiple separate problems
    into a single complex interdependent solution. In most cases
    this is a bad idea.
This is really a restatement of the UNIX design philosophy. Or maybe you'd prefer this version, written by some old hackers you may have heard of[3]:

    Even though the UNIX system introduces a number of innovative programs and
    techniques, no single program or idea makes it work well. Instead, what makes
    it effective is the approach to programming, a philosophy of using the computer.
    Although that philosophy can't be written down in a single sentence, at its heart
    is the idea that the power of a system comes more from the relationships among
    programs than from the programs themselves. Many UNIX programs do quite trivial
    things in isolation, but, combined with other programs, become general and useful tools.
This idea about keeping problems separated was true when those guys wrote UNIX in the first place, it was true when Dijkstra suggested structured programming instead of "goto", it was true when Alan Kay suggested sending messages between objects for yet more encapsulation, and in countless other examples throughout the history of software. It is still true now.

If you're having a hard time seeing this, I suggest you re-examine the problem and ask if an API could tie the pieces together, because it probably already exists in an unspecified form.

> Finally, no one is forcing you to use a systemd based distribution.

Of course not, but the monopolistic tactics being used to vertically integrate systemd ARE forcing the issue - on purpose - via coupling with various other tools such as udev and Gnome.

I'm obviously going to stick to Gentoo for desktop use, but even there, the disruption caused by systemd has caused a significant mess by breaking previously-working software on purpose.


1: If this is merely a hatred of BourneShell/BASH, I would actually understand that, as it certainly has its layers of cruft and gotcha/wtf behavior. A new Little Language[4] with special support for the standard "init script" tasks, making such scripts trivially small in the general case, could be really nice.

2: http://www.ietf.org/rfc/rfc1925.txt

3: Brian W. Kernighan, Rob Pike, "The UNIX Programming Environment"

4: http://c2.com/cgi/wiki?LittleLanguage

So my cups.service is:

  [Unit]
  Description=CUPS Printing Service

  [Service]
  ExecStart=/usr/sbin/cupsd -f

  [Install]
  Also=cups.socket cups.path
It requires no extra things - your cups script depends on checkpath and start-stop-daemon. More executables, more dependencies - so yes, I'd argue it suffers coupling and is a mess. You say that systemd makes things a "complex, interdependent mess", but it's better than the brittle dependent mess of the status quo. The usage of start-stop-daemon doesn't even ensure that the app was killed - it acts just as a killall <servicename>. It doesn't catch a process in a disk-wait state, or one that has trapped SIGTERM - it's not just useless, it's deceptive as to its functionality.

People continually spew the line about the UNIX philosophy. If you want to use Unix, use Unix.

Linux is [becoming] better than Unix.

The other super awesome thing about systemd is: all errors and output from this service is captured by journald with 'cupsd' as the source.

Compare this to SysV init, where you have to manually wrap stuff in syslog calls (and the facility names are 'uucp' and 'news' rather than anything meaningful). <3

To your OpenRC example: how do you statically determine what state the system is meant to be in at this moment? Without doing that, you are unable to determine whether the system is in that state, and so unable to take corrective action.

Furthermore, how do you even determine whether or not a service is still running? Init can not sanely do that for anything but the simplest cases, and a non-pid-1 process can not do that without the risk of losing the capability when that process dies (it's when, not if, when you run more than a handful of servers or vms - when you have aggregate uptimes across a system measured in centuries and individual uptimes on most servers measured in years, it does not matter how robustly a component has been written - you will see them fail, not necessarily of their own doing).

And a "new little language" would not solve the issue unless it is purely declarative. A large part of the point is to be able to reason about system state, in part by being able to tell what the state should be, in part by being able to verify what the state actually is and operate on it (start/stop etc.). The former requires you to be able to statically determine the current intended state; the latter requires you to be able to accurately track which processes form part of a service, even if the parent daemon dies and orphans a bunch of stuff (and in that case it is not helpful if the process trying to do this can die and orphan everything and lose the state information).

I'm pretty sure openrc uses (or has the option to use) cgroups for dealing with just this. Containerizing services isn't a new idea--Solaris and FreeBSD can do this already, for example.

Regardless of the init system, we've had the ability to control daemons with cgroups already for some time (cgexec and friends). Even in humble sysvinit, I can edit an initscript and ensure that the service it starts runs in a cgroup. Hypothetically, I could even do this inside of start-stop-daemon.
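That hypothetical start-stop-daemon-adjacent wrapper might look like this sketch (group name and subsystems are illustrative), falling back to a plain exec where libcgroup's cgexec isn't installed:

```shell
#!/bin/sh
# Run a command inside a cgroup when libcgroup's cgexec is available;
# otherwise just run it. The group name and the cpu,memory subsystem
# pair are illustrative assumptions.
run_in_cgroup() {
    group="$1"; shift
    if command -v cgexec >/dev/null 2>&1; then
        cgexec -g cpu,memory:"$group" "$@"
    else
        "$@"   # no libcgroup on this box; run unconfined
    fi
}
```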

> Why do the anti-choice people refuse to recognize that other people might have different requirements.

Go build your own LFS systems if you want ultimate choice. You don't get to force upstream authors (who like systemd) and distribution developers (who like systemd) to do what you want.

> I'm sticking with what already works.

sysvinit works only if you combine incredibly low expectations with a bad case of Stockholm syndrome.

But you get to force upstream authors and distribution developers who would like to be able to use something else? Sure.

That is not a real argument on the merits of permitting this choice. It is just saying "nyah nyah I have the power and you do not."

Where did he indicate he wanted to force anyone?

Upstream authors are free to use whatever they like. It may affect which distributions want to include their applications - the more repackaging a distribution has to do to include something, the less likely it is to bother, as a resource-availability issue - but that's it.

Distribution developers already often substantially rework or write init scripts from scratch anyway, and that is exactly one of the reasons systemd is attractive:

It drastically cuts down the scripting boilerplate that otherwise has to be done or adapted to their distro for huge numbers of packages. It also contains most of the distro specific stuff outside of init.d, opening the door for lots more reuse.

But if you're an upstream developer and want to ship sys-v init scripts, nobody has the power to stop you. Downstream will either simply write a systemd unit file, or you might choose to add one if you like, or if you put LSB comments in your sys-v init file downstream might opt to use that with systemd if they want to.
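Those LSB comments are just a declarative header block at the top of the script; systemd's sysv compatibility can read them to infer ordering and dependencies. A sketch (every name here is hypothetical):

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Hypothetical daemon with LSB metadata
### END INIT INFO

# Minimal dispatch so the script is runnable; real scripts do the
# daemon management here.
handle() {
    case "$1" in
        start) echo "starting mydaemon" ;;
        stop)  echo "stopping mydaemon" ;;
        *)     echo "usage: {start|stop}" ;;
    esac
}
```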

How exactly is upstream being forced to do anything?

How exactly are distribution developers being forced? If they are against their distributions decision, then they have the choice to package something else anyway and either ship it separately or fork the distro.

> But you get to force upstream authors and distribution developers who would like to be able to use something else? Sure.

How is any upstream developer forced to rely on systemd?

How is any distribution developer forced to rely on systemd?

The only people who are forced to use it are people (like me) who aren't prepared to put in the work of doing something else.

systemd runs as pid 1 so that it can relaunch processes when they die. If it were not pid 1, something would have to be managing it, which just leads to a "turtles all the way down" scenario. I'm not sure what you mean by "disconnecting from dbus," but I do know that some services require the d-bus service to be started before they run. A good init system must handle that. Similarly, a good init system needs to handle logging. It's frustrating to try to start a service and be unable to, and not know why, because a message dumped to stderr went to /dev/null. I've been in this boat before and it's not fun.

Smart people at Red Hat spent a long time thinking about the problems faced by a modern init system. They looked at what had already been done, including the Mac init system, upstart, and the Solaris init system, and came up with something cool for Linux. I think it's sad that so many people are attacking this.

Anyway, nobody is forcing you to use anything. You can use Slackware or even one of the BSDs if you don't want systemd.

There is no "turtles" issue here. Pid 1 is special - if it dies, the system panics. It is good design to keep pid 1 as simple as possible and put the complex job logic in a sub-process. There are only two things on unix which init must do: reap any children it inherits, and start something else to do the heavy lifting.

Even sysV init works this way. The current sysV init is very simple -- the invocation complexity lives in /etc/rc and the related rc.d scripts which operate as subprocesses.
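Those two duties fit in a few lines; here is a toy model in shell (illustrative only - a real init is long-running, handles signals, and respawns its rc stage):

```shell
#!/bin/sh
# Toy model of pid 1's two jobs: start something else to do the heavy
# lifting, and reap children. Bounded so the sketch terminates.
init_sketch() {
    true &    # stand-in for spawning /etc/rc as a subprocess
    true &    # a second child, as if inherited from an orphaned daemon
    wait      # reap: with no arguments, collects every child
    echo "all children reaped"
}
out=$(init_sketch)
```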

"Smart people" is a silly thing to say. We're all smart people and sometimes smart people make poor technical decisions. Let the matter lay upon its technical merit, not upon empty platitudes.

> There is no "turtles" issue here. Pid 1 is special - if it dies, the system panics. It is good design to keep pid 1 as simple as possible and put the complex job logic in a sub-process. There are only two things on unix which init must do: reap any children it inherits, and start something else to do the heavy lifting.

If the kernel fucks up the system dies and the kernel is much larger than systemd. Either way most of systemd's functionality is in other subprocesses.

You're really good at copying-and-pasting canned responses from previous comments... throwawayhugerandomnumber

I take issue with the "nobody is forcing you" part. If you want to use Linux, and there are good reasons for it, it will likely be increasingly difficult to avoid systemd. You can argue whether that's "forcing" or just "applying pressure", but it's real.

That said, your other comments resonated with me. I used to fear systemd, and worried that losing the ability to hack on init-script guts meant losing the power to make my system work for me. I've since discovered that with systemd I don't feel the need to hack on init-script guts like I used to, because it actually manages services intelligently on its own.

> I'm not sure what you mean by "disconnecting from dbus,"

dbus is a dependency of systemd itself, not just of the services that require it.

With kdbus (dbus in the kernel) on its way it is largely moot anyway.

"Disconnecting from dbus" at that point will not be very meaningful.

Moot? Putting dbus in the kernel is demented. Creating a hard dependency between it and a monolithic set of interdependent systemd code is even more demented.

It's like the Linux folks looked at Mac OS X and learned nothing.

Hint: Mac OS X handles more than systemd does, and still does it in a way that maintains separation of concerns between service management, configuration management, logging, and IPC.

If a process dies I would rather it stay dead. Can systemd monitoring and auto-restart be globally disabled?

It's disabled by default, you need to enable it on a per-service basis.

Since there are different needs (I absolutely want my bind, httpd, crond and others to restart when they die - I have no desire to log in and do that manually), you just opt in to the auto-restarting in the .service file - systemd does not restart anything by default.
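Concretely, the opt-in is a `Restart=` line in the unit (the unit name and threshold below are made up for illustration); leave it out and a dead service stays dead:

```ini
# Hypothetical /etc/systemd/system/mydaemon.service
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/mydaemon
# Opt in to restarting, but only after an unclean exit:
Restart=on-failure
RestartSec=5
```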

The main issue is like it ever was with the UNIX wars, now Linux wars.

Cloning UNIX was not enough, now each distribution wants to "improve" UNIX on its own way.

Not sure how you see progress and innovation otherwise. Much of what is good in Linux comes from experimentation and people/distros "doing their own thing", which sometimes improved the ecosystem and sometimes resulted in abandoned projects. But things have not stagnated.

As for UNIX, perhaps you're familiar with Plan 9? Some of the principal UNIX designers were unhappy with the result, so they went and worked on improving it. Nothing is good enough the first time around.

I lived through the UNIX wars, where POSIX doesn't mean a thing because each UNIX has its own version of standard and most complex applications still required OS specific APIs anyway.

If anything, systemd may help rein some of that in by making service files a bit more common across the distros than the various special snowflakisms of each lot of init scripts. Or so we may hope.

None of the current UNIX descendants are all that similar to the original UNIX in the first place. Every single one has tried to improve on UNIX.

Hence the UNIX wars.

If you have a system made up of small and simple tools, it can only do small and simple things. Imagine designing an airplane this way. It would never get off the ground.

Petty politics aside, it seems to me that the comparisons between Systemd and Upstart are analogous to those between launchd and Solaris' SMF.

The crux of the comparison is whether init should explicitly follow a dependency tree or resolve dependencies dynamically.

The other complaints fall out of that concern: in following the launchd model of dynamic resolution, systemd is forced to bundle in complex features. On OSX launchd serves as a hub from day one, so coordinating IPC and mount-watching is not so foreign. Similarly, this hub functionality restricts modularity, although launchd doesn't subsume autofs fully.

The reality here is that we're looking at a philosophical difference. While religious wars start over which is the "right" view, I'm unconvinced there is one. Much as SMF and launchd are comparably functional, so will upstart and systemd be.

I read his article and didn't see anything besides "it is better because, well... it is better", without much technical explanation of why. Then I read over it again and saw a link to his earlier article.


which does help defend his position even while I don't share his opinion.

This seems like a solution in search of a problem. Yes, restarting broken/dead processes is handy, but this is not the job of init.

I can't wait to run regedit.py to fix my broken box...

Have you actually looked at what systemd does? Here's a handy chart courtesy of wikipedia: http://en.wikipedia.org/wiki/File:Systemd_components.svg

That just seems insane to me, what do I know... (admittedly not much about init systems)

Still, systemd tries to do everything under the sun, it just doesn't seem like an init system, it seems systemd basically wants to be an operating system and assume all responsibilities as such.

> That just seems insane to me, what do I know... (admittedly not much about init systems)

> Still, systemd tries to do everything under the sun, it just doesn't seem like an init system, it seems systemd basically wants to be an operating system and assume all responsibilities as such.

Traditionally the things that systemd does were either done by a huge mess of shell scripts or were not done at all. There's no shortage of people who insist on doing things the way they were done before but to me, systemd makes a lot of sense.

It's true that it combines things that were previously done by cron, acpid, getty, pam, etc to deal with power management, network connectivity and login, etc. But I still find it a better alternative than having a lot of distribution specific shell scripts gluing all those individual services together. Besides booting faster, there are advantages like better fault tolerance and consistency.

All those components that systemd manages - that's easily I-don't-know-how-many millions of lines of code, and it all runs as pid 1, which, when it crashes, takes the whole system with it.

Just doesn't seem terribly smart.

Do you have any idea how much code is in the kernel? If any line crashes it takes the whole system with it. Doesn't seem terribly smart, does it? And yet the whole world uses Linux instead of Minix.

Systemd is split out into multiple processes.

> Do you have any idea how much code is in the kernel?

Yes I do, and that's kind of my point. It's a Kernel. I.e. it's the proverbial operating system. It's written as an operating system. Systemd wants to be the operating system, obviously, which is fine. Just call it the Systemd OS and provide the kernel and user shell as well, and you won't need linux anymore.

I think what the parent is getting at is that, even though there are many daemons and tools that make up systemd (and even though they are each scoped to a particular concern it addresses), most of them are tightly coupled to one another. To run logind, for example, I need systemd to run as PID 1.

This monolithic architecture is unsettling in the long term--it raises the barrier to entry for independent innovation in the same space. At least before, the various daemons systemd replaces (atd, crond, xinetd, acpid, udev, dbusd, etc.) could be independently modified, disabled, or replaced without breaking each other, and without much hassle. Instead, improvements to systemd's daemons have to conform to a moving-target API in the same layer (other systemd daemons) and the layer beneath them (systemd PID 1), as well as the layer above them (i.e. the UI).

You could say the same thing about the Linux kernel, yet it hasn't stopped improving.

First, the boundary between Linux and the rest of the system is very well-defined. I can run my userspace on top of multiple different kernels, since my userspace is loosely coupled to Linux (yes, there are exceptions, but not so many that it prevents GNU/kfreebsd or GNU/solaris from working, for example). This is not the case with systemd, which over the past several years has been subsuming larger and larger swaths of userspace, introducing all sorts of tight couplings and inter-dependencies that weren't there before.

Second, the pace of innovation in Linux is definitely slower than in userspace, and getting your patches adopted is more difficult relative to most userspace programs (this is the high barrier to entry I spoke of). Your patches either have to get approved by Linus et al., or you have to host them out-of-tree and hope your users know how to apply them and compile the kernel themselves (and if you want to do this for them, you have to do it for every kernel you need to support). If you're lucky, your patches can be isolated into a kernel module, in which case you "only" need to keep it up-to-date with the kernel API (which is a moving target).

Contrast this to working on a program like, say, xinetd. While the option for submitting patches still reduces to "go through the maintainer" or "host them yourself", the "host them yourself" option is much more tenable, since the codebase is smaller, and the "don't break userspace" policy Linus enforces ensures you aren't coupled to a moving-target API. The lower barrier to entry brought on by loose coupling explains why Gentoo can get away with forking udev, but not large swaths of the Linux kernel.

Systemd has the effect of making innovation in the plumbing layer (xinetd, syslog, udev, cron, etc.) a lot like innovation in the Linux kernel. Can you imagine the absurdity of having to maintain a version of xinetd for each separate systemd API change?

Yeah, except systemd is more a collection of daemons and tools under the same umbrella (and project repository) than a huge monolithic PID 1 daemon. It's even described in the chart you linked.

It's actually quite small, even including all those components. Its components are quite granular, as well.

Compared to the reams of code in existing init systems + all the shell scripts out there related to init and/or management, systemd has a tiny amount of code.

Also, there are useful features that can't be achieved without being pid 1.

Perhaps you should look into how Mac OS X handles these things while still maintaining separation of concerns. Systemd is a joke of a software design by comparison.

Systemd is winning because it turns a Linux system into a modern one.

The days when you compiled your own Linux kernel with the drivers you needed and got a static system are gone. The Linux kernel now works in a plug-and-play way. Block devices, for instance, can pop up at any time, not only after you wait an arbitrary amount of time at boot for all devices.

The idea that the traditional unix process separation is enough dates from the same era as the gets function. The Linux kernel here offers cgroups for better separation.

fallacy #23: argumentum ad novitatem

I'm not claiming that hot-plug is better, but that hot-plug is the status quo of the kernel. Modern here means that it integrates well with a modern linux kernel.

Also, grouping processes together and isolating them is not really new but a proven technology (FreeBSD jails, virtualization). It's also pretty hard to argue that knowledge about the security of unix systems hasn't increased.

The irony about the fallacies is that just throwing them around even without writing a sentence goes against the very nature they were conceived in. The corresponding counter-question would be: "Why is modernity here something good?", but that question is already answered in the post to begin with.

You are right.

I am actually in a phase of archeology where I read a lot of Rob Pike, Ken Thompson, Ted Nelson, Linus Torvalds, ESR ... (http://harmful.cat-v.org/) and I begin to question a lot of things. Even the status quo.

At work I deal with a lot of dependency hell (system and distributed software requirements, confinements (VMs and chroot or jails)) and I begin to doubt some of "the wisdom and progress" I have been adopting.

I search for answers now because I think some old "conceptual bugs" are biting us very hard (like the way HTTP URLs are built, threading, shared libraries, the abuse of concurrency) and I don't know anymore what progress is.

I just kind of feel Status Quo is a very old hard rock band that should be forgotten :)

What about openrc?


What's up with the words words words

According to one of the links in the original article, OpenRC and sticking with SysVinit were also evaluated as options early in the discussions.

I'm sorry could you post a link?

I just really don't like how this stuff is couched in words from either an apt/yum dude's blog that focuses on boot time.

No, I can't post a link. That Phoronix article only links to other Phoronix articles, and I can't find a way to an actual discussion thread or meeting minutes from people who were actually involved, and I don't have time to search for one.

It's cool, everyone digs monolithic stuff anyway.

I believe the main reason it wasn't really considered in the debian discussions was that they couldn't find any real documentation of it.

Hopefully what happened with pulseaudio does not happen with init

It can't really - the problem with PulseAudio was that it was adopted by a major Linux distribution before it was ready for primetime. It worked well after a while.

Systemd has been the main init system of several distributions (like Fedora) for some time now, so I wouldn't worry.

Press "e" in GRUB, then add init=/bin/busybox to the kernel command line. Then apt-get purge systemd (mounting may or may not be necessary).

That way you can still claim "the first thing I did was to remove it". But contrary to pulse, it will also be the last ;)
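Spelled out, the recipe above looks something like this (a sketch only; it assumes a Debian-style system with busybox installed, and following it will leave you with a barely usable machine):

```shell
# At the GRUB menu, press "e" and append to the kernel command line:
#   init=/bin/busybox
# After booting into the busybox shell, remount root read-write if needed:
mount -o remount,rw /
# Then remove systemd:
apt-get purge systemd
```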

This article lacks any real substance, and perhaps that embodies my dislike of systemd in general. To demonstrate what I mean, I've modified the parent article, replacing "systemd" with "Windows", "sysv" with "Unix", and "init system" with "operating system", plus a bunch of other title changes to make it coherent, without really modifying the impression of the post: http://pastebin.com/sLsDVBG8 (Don't take it too seriously.)

Obviously this is just a bit of humor, but it sounds like something that could've been written in 1990, and the popular choice at the time has made at least a generation of developers and sysadmins suffer, and which some are still suffering from. I think the lesson we can learn is that "ideas" are not unimportant, and should be properly looked at before jumping to binding decisions, because what "solves your problem" now, may cause significantly more in the future.

The biggest fear I have after reading all of the latest articles popping up is that it's too monolithic, just like a lot of other people. However, from reading their FAQ it seems they've split it out into several different processes, and if the processes themselves are well scoped I can't really see the problem?

I tried googling but I can't find any good documentation about which different processes systemd uses, how they interact, and their roles in systemd. Anyone have any tips?

Wow. This is the sanest commentary about systemd and the state of things I've read to-date.

There are _numerous_ problems with the argumentation being used here. Starting from the top, the first one is actually kind of silly.

"none of these alternate init systems did the hard work to actually become a replacement init system for anything much."

This conveniently overlooks whether or not there was any reason to do so. The scope of "classic" init is very simple. Its scope is well defined. It amounts to "start and stop the services when they're supposed to be started and stopped" and that is _all_. Those other init systems mentioned all expanded the scope of the problem of starting and stopping services a bit further, but only a tiny bit--along the lines of deciding whether a county border should be on one side of a twisty creek or the other; in practice, very few people even care. Systemd expands its scope by _miles_ and strides boldly forth as a proud example of _scope creep_.

The bulk of the rest of the argument amounts to "Well, systemd did all this hard work!". This isn't KINDERGARTEN. It's insane to argue that people should be giving systemd a free pass simply because the developers _worked really hard on it_. All those edge and corner and simply misbehaving daemon cases aren't things systemd should be more than peripherally concerned with--they're things to push back on the original developers to fix from the start, and in the meantime those workarounds should be held in their proper regard... as _workarounds_, not major features, because again... _scope creep_.

Lastly, while they're busy giving systemd points for trying real hard and solving a bunch of problems that it shouldn't be bothering with, they ignore the fact that systemd was busily creating _new_ problems... but I'm sure they'll also credit systemd for solving those as well. Specifically Poettering himself recently posted about a "bug" with apparent filesystem corruption caused by systemd failing to properly unmount a filesystem when it's upgraded... during which at some point they appear to have decided that PID 1 is some mystical holy number (which it is _not_) and that going back to the initrd the system booted with is a completely fine and reasonable thing to do, and that it is not, in fact, a sign that one has seriously painted themselves into a corner. They're essentially arguing that they didn't paint themselves into a corner because they're standing in the middle of a long hallway... surrounded by paint... and that the real problem is the lack of roof access.

It's a _huge_ workaround for what is simply not a problem for a small, simple initd that _only_ handles a small, well-defined problem and handles it correctly.

...but you pro-systemd people enjoy having to explain why half the core system functionality has to stop whenever you have to upgrade any of it. The rest of us will continue to use systems that do what they're supposed to even while being upgraded.

The same way any other crap (such as Windows, Java, PHP, NodeJS, Docker - you name it) wins - its winning is due to a "catchy meme" about it, which triggers an automatic, ignorant snap-judgement of an unsophisticated consumer.

Look, with Java you don't have to think about hardware and OS - the greatest meme ever (and you have to pay for tons of hardware because Java = waste). With NodeJS you can code server-side apps the very same way you write stupid web pages (without any deep knowledge) - great meme. With Docker no sysadmins are required, and the underlying operating system is just viewed as an abstract "container" for an app (then you will pay "experts" to spend weeks analyzing why your crap is so slow and unpredictable in production). With systemd you don't have to understand the subtleties, and it runs with Docker. Yay!

The list could be too long.

I forgot "the web-scale database" with stop-the-world write locks and syncing-buffers-is-not-my-problem and other innovations.

At least windows has a decent and stable init system though!

I think it is debatable whether any relation between the words "windows" and "stable" makes sense.

I disagree. The 400-odd Windows machines under my command are incredibly stable, reliable and well performing.

It's not Windows 98 any more.

Do you apply updates? If so, have you ever seen how they break functionality of legacy code? Never seen problems with third party tools like Delphi or Java? Never seen a Trojan or a virus?

Yes we apply updates after testing them. MS11-100 was the only breaking change we've had in 10 years and we picked that up in test no problems at all and knew about it as we read about it before applying the patch in the KB article that accompanied it. Did you know Ubuntu LTS releases shipped buggy and broken MySQL versions multiple times? We caught them in testing. Canonical haven't fixed MySQL in 10.04LTS yet.

No problems with Java, ever in 10 years. We don't use Delphi. Did you know that Debian broke the entire SSH key generation infrastructure for a bit leading to insecure key generation. We regenerate our own keys up front anyway.

Never seen a Trojan or virus because we have proper mitigation at the edge of our network and on critical machines and everything is locked down properly. Did you know about SSH worms -- you know the things that hammer the crap out of every node with SSH on it for the last 5-6 years? We mitigate at the edge (authenticating firewall).

If your opinion of windows is based on such things, you're approaching the problems wrong.

I have never seen any statistics about botnets hosted on rooted Debians. I also remember that in the Metasploit framework the number of exploits targeting Windows is incomparably larger than the number of Linux-related ones. The last time I built a firewall it was OpenBSD/sparc64 based, and I think it is still in production. I never used Debian or Ubuntu as a server. There are CentOS or RHEL for that, and I am capable of recompiling and repackaging any tool I need.

Sorry, I cannot understand most of the mentioned difficulties.

Likewise I don't understand your POV -- in fact I think you're spouting rubbish much as most advocates seem to. I've been dealing with Windows, Linux and BSD systems for 20 years and there's really not much in it between them.

The bad rep Windows gets is from idiot users clicking Ok. If we had user friendly OpenBSD desktops with idiots pushing the buttons we'd have the same statistic with a different OS.

The brand doesn't matter - the fundamental problems are the same.

Let's say that I am very sceptical about any proprietary bloatware pushed by big money and developed by outsourcers on a payroll. I also do not believe in the myth that these corporations employ more talent than is behind some selected open source projects and startups. In Russia, where IT is traditionally heavily dominated by Windows and now Java and SAP, I have seen enough, and that is what biased my POV.

Oh I agree entirely with you there.

Forget the words bloatware, outsourcers and money for a minute...

Surprisingly it's rarely the core products of the companies that are problematic. It's usually the army of consultants and cash backhanders (the enterprise lot) who come wading in to pillage everyone. That's where the bad rep comes from. They build half-arsed products on top of these platforms which everyone comes to hate.

Regarding open-source talent: much as anywhere else, it's hard to come by. In fact a lot of the open source software out there is considerably worse than the closed source stuff I've seen over the years. There are a few gems here and there, for example the FreeBSD operating system, LLVM, valgrind, postfix, postgresql, etc., but the vast majority is total half-baked shit. I include a big chunk of GNU in that as well, which is shameful.

The vast majority of commercial software is total shit too.

Regardless of that, the core of Windows and even Java (but not SAP - I've had my fair share of integration there) are good products. Just don't go and grab piles of enterprise integration junk on top and it's fine.

That, IMHO, is where all the pain comes from and why people have such hatred.

Hatred is infectious as well. It takes experience to distinguish justified hatred and rumors.

That was a good one! ;)

"init wars" what a joke. the fact that systemd has "declared war" on other init implementations is all the information I need.

See, you're missing all the fun drama: http://lwn.net/Articles/583182/

No thanks.

Because nobody wants Canonical to control a critical part of their distro.

Upstart is "controlled" by Canonical in the same capacity that systemd is "controlled" by RedHat. Both have a team of full-time employees working on them (plus a surrounding community), and both control the APIs as well as the reference implementations.

Better to have Red Hat in control of Debian, then?

In preference to Canonical, yes.

systemd is more open, has a greater count of contributors, and is already more widely adopted with more distros planning their move to it. I'm not saying it's without problems, but anything (including keeping sysvinit) is better than Upstart.

GNOME is practically a Red Hat project these days, but people aren't advocating its removal (although yes, GNOME is awful since 3.x)

As it is, Red Hat already maintain or otherwise have a hand in a lot of things that are in Debian - seeing them as somehow adversarial isn't true, while Canonical have a long tradition of keeping their own work close to their chest and not sending improvements upstream.
