[dupe] Broken by design: systemd (ewontfix.com)
172 points by uggedal on Feb 10, 2014 | 201 comments



Nice article, but at this point it's game, set, match -- systemd has won. The Linux community overwhelmingly supports it, and its advantages clearly outweigh its perceived disadvantages.

"It's not Unix" is no longer a valid critique. The old Unix design philosophy is dead. People want systems of components that function as an integrated whole. They do NOT want pieces that are generally unaware of each other (this is touted as the virtue of "loose coupling").

Yes, pid 1 does a lot. That's by necessity. It's not your grandpap's Unix anymore. Do you want to control all the system services, including the "badly behaved" ones? That functionality needs to be in pid 1. Do you want to bring the system up safely and sanely, accounting for things like hotplugged disks in /etc/fstab? That functionality needs to be in pid 1. Systemd is the correct design for modern Linux. It also brings the advantage of better integration and greater commonality across distros.


Even on systemd that stuff isn't all handled in pid 1.

There are 6 systemd service processes running on my machine handling different components.

Snipped out the irrelevant processes.

    % systemd-cgls
    ├─user.slice
    │ └─user-1000.slice
    │   └─user@1000.service
    │     ├─6612 /usr/lib/systemd/systemd --user
    │     └─6616 (sd-pam)
    └─system.slice
      ├─1 /sbin/init
      ├─systemd-udevd.service
      │ └─151 /usr/lib/systemd/systemd-udevd
      ├─systemd-logind.service
      │ └─345 /usr/lib/systemd/systemd-logind
      └─systemd-journald.service
        └─125 /usr/lib/systemd/systemd-journald


People want systems of components that function as an integrated whole. They do NOT want pieces that are generally unaware of each other (this is touted as the virtue of "loose coupling").

This may be true of "mainstream" users' applications, but is it really true of system software? And should we really attribute the cultural shift toward monolithic systems to shifting preferences rather than leap-first "design"?


Why do those things need to be on pid 1? Just asserting it is not really convincing.


Any init system needs to be pid 1 [1]

This blog post does not identify the subtle argument that the upstart maintainers made against systemd at PID 1: it also brings along all the other services (like logind) that frameworks like Gnome depend on.

So, if Gnome wants to depend on logind and logind depends on systemd (post systemd rev-205), this means that Gnome is irrevocably coupled to systemd which must run as PID-1 [2]. This means it becomes hard for upstart to be used with Gnome.

There has been no doubt whatsoever that an init system needs to run as PID-1.

[1] http://en.wikipedia.org/wiki/Process_identifier#Unix-like [2] I must add here that Gnome does include a fallback path to not depend on systemd components.


A part of any init system needs to be pid 1.

The question is how much of it, and whether or not Systemd piles too much stuff into pid 1 that could instead be farmed out to separate processes, handled by separate binaries, that could be more easily upgraded or replaced.

(I don't know; I've not looked at Systemd enough to have an opinion)


They are separate binaries - however, all of them depend on systemd (post rev 205) and cannot be run independently.

This is because all of systemd's services rely heavily on cgroups for resource control, and the upstream kernel wants a single reader/writer for the cgroups API (the current filesystem model is going away). systemd is just moving in lockstep with what the kernel wants and is becoming that API.

There has been a lot of FUD around the "PID 1" debate, but invariably every init system will have to have a single-point cgroup writer. Systemd is just ahead of the curve.


http://lists.freedesktop.org/archives/systemd-devel/2013-Jun...

Does anyone have more information about this from the kernel perspective? Will systemd be the only API? Sounds strange.


This is a reasonable evaluation of the status - https://lwn.net/Articles/575672/

Basically, if you use systemd as the init system, then yes - it is the only API [1]. If you don't want to use systemd then there are alternate implementations of the cgroup API (e.g. cgmanager - http://cgmanager.linuxcontainers.org/ ). IMHO, the two are incompatible with each other.

Lennart has a reasonable justification of why systemd is implementing its own API [2]

[1] https://lwn.net/Articles/557111/ [2] https://lwn.net/Articles/557140/


PID1 also comes with powers over capabilities and (in some cases) cgroups. You can do much of that with FSCaps, etc - but to control the bounding set, you have to be pid1. So - if you want init to be able to limit what processes can do, you will be pid1.


Linux has always supported userspace diversity, so nothing wins completely.

We are at a point where containers are going to change what people want in a Linux distro, and I'm not sure that monolithic will win.


Containerization will make this entire kerfuffle pointless. Running single image big machines with lots of services is a 90s and early 2000s architecture aesthetic that will fade into the background as individual services migrate to a PaaS model for enhanced modularity and mobility.

I haven't looked deeply into systemd, but from how the conversation around it is going, it would seem to be solving complexity with more complexity.

I'd like to see a mashup of CoreOS plus NixOS integrated with a cluster container management system.


"is a 90s and early 2000s architecture aesthetic"

I hope you realize "containers" are remarkably similar (practically identical, compared to "single image big machines") to how mainframes worked in the '70s.

Just saying: nothing is new, cycles are the norm, and don't be so high and superior, because your new hotness is gonna be the "old, broken shit" in a couple of decades.


And mobile is the new desktop because there is enough computational power there. Soon, we will see containerization on mobile because of too much complexity and not enough security.

Containerization has always been a good idea. http://en.wikipedia.org/wiki/FreeBSD_jail has been around for almost 14 years. I am aware that full system virtualization dates back to the 1960s; in no way do I think that old == bad.


CoreOS is of course systemd based...


I know, but CoreOS won't be the only lightweight OS designed for cloud deployments. I'd be pretty jazzed for a Lua or Gambit Scheme based system. It will probably be node.js though, always bet on javascript.


Been working on Lua based stuff that can bring up enough of a system for containers (networking and so on, still some bits missing) https://github.com/justincormack/ljsyscall



I wish l33tstart was a real thing. Maybe have it talk over nanomsg using capnproto? But the messages should be cryptographically signed. Would it handle logging as well? Could those log messages get processed through summingbird or storm? Ok, Clojurescript -> Gambit Scheme -> Native exe


daemontools doesn't run as pid 1 and achieves all of this.

http://cr.yp.to/daemontools.html


It does not. Not even close. https://news.ycombinator.com/item?id=7210308


That comment is insane. Why would log files need journaling? They should be serialized.

How is adding a line to a run script "duplicating logic badly"?

Coupling is clearly bad, as evidenced by... 30+ years of software design, but is sometimes a necessary evil. I don't see any reason why it's necessary in such a low-level system component. "This is not rocket science": you don't impose dependencies on something you want to be reliable.

Daemontools is clearly the most mature solution, from a perspective of stability. It's probably why nobody wants to use it, there's nothing to play with or exploit for profit because it already works.


That comment is insane. Why would log files need journaling? They should be serialized.

systemd-journald provides tampering-attestation features that plain-text logs do not, and cannot, provide. It's fundamentally more secure.


Fallacy: having owned the user that writes logs, I cannot fuck with your logs if journaled.

Reality: if I owned the user writing logs, I can do anything I want with the logs, no matter how they're stored.


No, you cannot do anything you want with the logs.

Here is something you cannot do: you cannot alter the logs without anyone noticing, since log entries have a rolling hash.

This prevents an attacker from being able to modify logs without detection, and was implemented in systemd after the postmortem on the kernel.org break-in, i.e., it was developed to counter an actual threat.

I would suggest you read up on what log attestation actually does.
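The chaining idea is easy to sketch (a toy hash chain in Python; journald's actual sealing scheme is more elaborate, this just shows why editing one entry is detectable):

```python
import hashlib

def chain(entries, seed=b"boot"):
    """Return (entry, hash) pairs where each hash covers the entry
    and the previous hash, so editing any entry breaks the chain."""
    h, out = hashlib.sha256(seed).digest(), []
    for e in entries:
        h = hashlib.sha256(h + e.encode()).digest()
        out.append((e, h))
    return out

def verify(sealed, seed=b"boot"):
    """Recompute the chain and compare against the stored hashes."""
    h = hashlib.sha256(seed).digest()
    for e, tag in sealed:
        h = hashlib.sha256(h + e.encode()).digest()
        if h != tag:
            return False
    return True

log = chain(["sshd started", "login root", "sshd stopped"])
assert verify(log)
log[1] = ("login alice", log[1][1])   # attacker rewrites one entry...
assert not verify(log)                # ...and the chain no longer verifies
```

Of course, an attacker who fully controls the machine could recompute every subsequent hash too; that is what the key-deletion ("sealing") part is for, which a plain hash chain alone does not give you.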


Logs can be modified in real time by being intercepted during writes to the buffer before the new hash is applied. Because of this, only the logs up to the intrusion can be preserved, and even then if the old message/hash/key/etc is still in memory, the old messages can be modified.

The only truly secure way to record messages up to the point of break-in is a one-way remote log. Any messages after the break-in are subject to falsification.


> The only truly secure way to record messages up to the point of break-in is a one-way remote log.

And if you have this then you don't need a fancy cryptographic syslogd to detect tampering.


The fancy cryptographic syslogd is fallible, as I mentioned previously, so why depend on it? And if you really want to cover your tracks you can just cause random disk corruption over the whole disk (and just happen to damage the journal in the process). Things will start failing rapidly, fsck will later show itself fixing the corruption, and it will be assumed to be a hardware error and the incident ignored.

Security half-measures do not mean you are secure. If it can be hacked, it will be hacked, and then what was the point of sacrificing your whole system's init system?

(Also you could just replace syslogd with a journaled syslogd, rather than replacing your entire init system... but I guess that's off-topic?)


I'm agreeing with you.


>rolling hash

FAIL. You just wrote outdated software. Under this dumbass idea of journaling, which by definition is there to allow editing, the best thing you can do is build a rolling function, which basically means copying the data, which basically means increasing your attack surface.


The journal sealing provides forward security. The sealing keys are rotated regularly, and the old keys are securely deleted. A verification key, stored on another device or offline, can be used to calculate the sealing key for any given moment. While an attacker can change the logs, this will always be visible in an audit.

http://lwn.net/Articles/512895/

https://eprint.iacr.org/2013/397.pdf
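The key-evolution idea can be sketched in a few lines (a drastic simplification of the scheme in the paper; the names and structure here are illustrative, not journald's actual implementation):

```python
import hashlib, hmac

def evolve(key):
    # One-way derivation of the next epoch key. After sealing an epoch,
    # the logging host deletes the old key, so a later intruder cannot
    # re-seal entries from epochs that are already closed.
    return hashlib.sha256(b"evolve:" + key).digest()

def seal(key, entries):
    """Seal one epoch of log entries with the current epoch key."""
    return hmac.new(key, "\n".join(entries).encode(), hashlib.sha256).hexdigest()

# The verification key stays offline; the host only ever holds the
# current epoch key.
k0 = b"verification-key-kept-offline"
epoch1 = ["boot", "sshd started"]
tag1 = seal(k0, epoch1)
k1 = evolve(k0)          # k0 is now deleted on the host

# The offline verifier re-derives each epoch key from the verification
# key and checks the seals:
assert seal(k0, epoch1) == tag1
# An attacker holding only k1 cannot produce a valid seal for epoch 1:
assert seal(k1, ["boot", "evil"]) != tag1
```

Note the limitation this thread circles around: this only protects epochs whose keys have already been deleted; an attacker who extracts the current in-memory key can cleanly seal anything from that epoch onward.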


the old keys are deleted safely

Assuming nothing has managed to redirect those unlinks for example...


I guess you're right. The mechanism is forward-secure, which means that for an attacker, tampering with the logs is the first thing to do, not the last ;-)

As soon as the attacker manages to extract the state of the key generator from memory at one point in time, the log entries from that and the following sealing periods can be modified and cleanly re-sealed.


From what I understand, systemd can optionally store a copy of the logs on a remote server. The key used to seal the logs on the remote server is changed in such a way that even if a system is compromised, the log copy on the remote system cannot be altered undetected.

These remote copies of the logs could actually be used to detect log-tampering and 0day exploits.


>From what I understand systemd can optionally store a copy of the logs on a remote server.

Anything can have any feature; what matters is how it's built.


rsyslog allows you to sign your logs if you want to: http://www.rsyslog.com/how-to-sign-log-messages-through-sign...


daemontools can't reliably monitor daemons that reparent themselves to init or kill their pgroup.

I have written hacks to work around this (I have a Gentoo VM that uses supervise as an OpenRC replacement, and have several hacks to work around this issue). It's a pain in the ass, and one reason why systemd runs as pid 1.

I prefer daemontools, but it's not perfect; it made a different set of tradeoffs.


How does svscan handle service dependencies?


It doesn't. That's the point. Nothing in daemontools does. Dependencies are bad.


I'm not sure if you're making a point more subtle than I'm reading, but when most people in the systemd debate say "dependencies are bad", they mean that having loads of things depend on the init system is bad.

The comment you're responding to is talking about dependencies in boot services, e.g., sshd depends on having the network interface up. Systemd does a great job of handling this sort of thing, allowing services to start in parallel, but ensuring that things only start after all the pieces they rely on have started. That sort of dependency isn't "bad" -- it just is. You can't remove a dependency like this, it reflects reality.
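That kind of ordering is essentially a topological sort of the dependency graph; here's a toy sketch in Python (hypothetical service names; this is not systemd's actual algorithm, just the principle of starting in parallel "waves"):

```python
def start_order(deps):
    """Group services into 'waves' that can start in parallel.

    deps maps a service to the set of services it must wait for
    (e.g. sshd waits for the network). Kahn's algorithm, batched.
    """
    deps = {svc: set(d) for svc, d in deps.items()}
    # Make sure every mentioned dependency has an entry of its own.
    for d in [x for ds in deps.values() for x in ds]:
        deps.setdefault(d, set())
    waves = []
    while deps:
        ready = sorted(s for s, d in deps.items() if not d)
        if not ready:
            raise ValueError("dependency cycle")
        waves.append(ready)
        for s in ready:
            del deps[s]
        for d in deps.values():
            d.difference_update(ready)
    return waves

# Hypothetical boot graph: sshd and httpd both wait for the network.
graph = {"sshd": {"network"}, "httpd": {"network"}, "network": set()}
print(start_order(graph))  # [['network'], ['httpd', 'sshd']]
```

Everything in one wave can start concurrently; a cycle (a genuinely unsatisfiable dependency) is the only failure case.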

That said, I really don't care for systemd. I think the complexity it adds more than counteracts the benefits it provides, but I'm old and didn't mind SysV or BSD init.


I'm saying that the dependency on the network being up is bad for services. An example of a bad case: if the network interface is hotplugged or down and you're running Redis, do you want it to restart and have to drag the entire AOF file off disk just because you yanked the interface card because it had failed or was failing? Your 20-second hotplug problem becomes a 15-minute block of IO coming off disk because you had to restart the process.

So many problems come from such a simple dependency.

Designing all services to handle and expect failures and just let init restart regardless of the state is the RIGHT solution.


Linux was never UNIX. Influenced, yes, but not UNIX. That said, in the FOSS world it is never 'game, set, match'; it is 'game, set, fork' :-). For what it's worth, these sorts of debates have happened forever: Motif vs CDE, STREAMS vs sockets, signals vs events, TCP vs OSI, etc.


I'm not so sure this is entirely true, especially with pid namespaces becoming a real thing. The true pid1 could be greatly simplified for the sake of system stability while providing sub-pid1s that handle actual service, device, and application management.


Everything in a pidns is destroyed when the pidns init dies.


I'm not sure what you're getting at here. Yes, that is obviously true. As is the case with the true pid1 init. Care to expand?


I mean that if you're protecting yourself from a panic by moving stuff from the real PID 1 into a pidns root, you're doing it wrong. The pidns root will segfault and the kernel will SIGKILL everything inside it, which is equivalent to a good old panic.


> systemd has won

Not in the Gentoo ecosystem where we have OpenRC: https://en.wikipedia.org/wiki/OpenRC


We'll likely switch to systemd, over at Gentoo, when we need to. As we discussed at FOSDEM, it's a matter of 'when' rather than 'if'.


If that switch happens and systemd is forced on us I'll be dumping Gentoo after 12 years.

systemd gives me nothing that openrc doesn't already provide and it messes with a lot of things that I do care about (eg. logging, cron).

If Red Hat is so intent on forcing their junk down everyone's throat, it's time to move back to an OS where technical insight reigns instead of corporate money and marketing-speak.


To be honest, if Gentoo moves to systemd it'll probably be the final straw that pushes me to switch back to Windows after a decade or so of Linux use. Things have just been getting more and more buggy and unreliable; I'm already on the verge of switching just so I don't have to deal with the state of audio on Linux these days. Every PulseAudio version introduces new and fun breakages and they don't do stable releases anymore, so at best it'll get fixed in the next release, which will have a bunch more poorly-tested code that breaks something else.


I have been able to keep PulseAudio out of all my Gentoo systems so far. I seriously doubt anybody can impose systemd on me.


Gentoo is still popular as it's the basis for ChromeOS and CoreOS, but CoreOS uses systemd and I'm not sure re: ChromeOS.


I was under the impression ChromeOS used upstart.


ChromeOS uses upstart.


Also, Debian is still on sysvinit, and there is a vote being held to decide which init system to use as the default for jessie.


That vote has almost total agreement that systemd will be the new init system. At this point, they're stuck in a quagmire of other bureaucratic nonsense and debates over related issues like whether packages will be allowed to require a particular init system.


Or maybe they need to figure out how to keep the non-Linux ports working since systemd is highly tied to that particular kernel.


Neither the Hurd nor the BSD port currently uses Debian Linux's sysvinit setup.


If this were true, why would you bother saying it?


Because the holdouts in the init wars debate are whiners who are trying to obstruct useful progress being made on the Linux platform, e.g., Ian Jackson.


It is broken by design because it contradicts some general principles, such as "single responsibility" and "share nothing", which lead to "divide et impera" and other "nice things".

Unix philosophy, according to which a system is a set of lose-coupled tools - each of which, at least in theory, is supposed to do just one thing, but do it well, and to follow certain conventions, such as using text streams as input and output, which allows them to be hooked together into pipelines - is a really great one. To sacrifice it for the very contradictory arguments of people who perhaps do not fully understand it, or its consequences for a complex system's design, is at best a shortsighted decision. (Look what a wonder Plan9 is, at least from the system design perspective and the "less is more" principle.)

As for so-called "problems", BSD people would tell you that most of these "problems" are mere creations of distracted minds.)

Introducing various daemons and ctls merely adds unnecessary complexity compared with the uniformity of "just shell scripts" and a very simple set of conventions. At least everything works fine with FreeBSD and its huge ports collection.


We're dealing with irreducible complexity here. Unix philosophy keeps the complexity out of the components - fair enough. But that merely means that the complexity reappears in the glue code (sysvinit and friends). And glue code is harder to secure, being scattered, disparate, and executing in complicated VMs such as the shell-plus-kernel, and therefore hard to reason about.


FreeBSD, OpenBSD, Plan9, and AIX (Solaris' decision to use a Java-based tool was plainly idiotic) for some reason would think otherwise.


Which Java-based tools is Solaris using in this area? I don't think SMF is Java-based.

SMF is pleasant to use and Linux should have copied it entirely, including syntax, but they didn't (NIH syndrome?).

http://en.wikipedia.org/wiki/Service_Management_Facility

systemd is really a pain to use. I'm still amazed at how quick everybody was at adopting it. Perhaps people were really desperate to ditch the sysvinit stuff.


Systemd is close to a blatant launchd (OS X) clone.


I'm no expert in init systems genealogy but a quick search shows launchd was released in March '05 while Solaris 10 with SMF was released in January '05. Surely they were not working in secret over such a basic OS infrastructure component, so it's possible both exchanged ideas.

But more specifically, how do you think they are so alike that one could be called a clone of the other? I've had a Macbook for a few years but never bothered with launchd much. Care to elaborate?


I must admit, breaking the Unix philosophy for ZFS to unify FS/volume/RAID management is a dream come true.


But which one is it that the broad industry finds more useful?


The "Unix philosophy" of one tool, one job doesn't really apply here. Linux isn't just a server OS anymore, which is where the Unix philosophy makes the most sense. The BSDs do this quite well since their command line utilities are developed together as a system that fits together. On Linux we have a mishmash of tools developed with different philosophies and conventions. They fortunately work well enough for server/programming tasks, but this design approach does not hold up well for complicated desktop systems.

Not that systemd is the answer. I think we need an init system that ignores servers completely and is designed with laptops, mobile devices, and graphical management tools in mind.


NB: "loosely-coupled", not "lose-coupled". Which might be something entirely other than you'd intended.


Yes, "loosely", of course. Thanks.)


What are these "general principles" of "unix philosophy"? Where can I read about them?

edit: thanks, those both look like great books so far.


Kernighan & Pike, The UNIX Programming Environment, 1984

http://www.powells.com/biblio/62-9780139376818-0



Maybe "The Art of Unix Programming":

http://www.faqs.org/docs/artu/


Everything is a file. Ted Nelson has a presentation on how that is quite broken by design: http://www.youtube.com/watch?v=Qfai5reVrck


Thanks


> PID 1 brings down the whole system when it crashes. This matters because systemd is complex.

The Linux kernel is much more complex and also brings down the whole system when it crashes.

> Attack Surface

Sysvinit runs shell scripts and has known race conditions. A system like that is not secure by design. It is as secure as it is because exploits were fixed one by one over the years.

> Reboot to Upgrade

You must also reboot to upgrade sysvinit.


> The Linux kernel is much more complex and also brings down the whole system when it crashes

Yeah. And there is nothing good about it.

> Sysvinit runs shell scripts and has known race conditions

Bugs can be everywhere. But the smaller the critical part, the better.

> You must also reboot to upgrade sysvinit

You don't


>> The Linux kernel is much more complex and also brings down the whole system when it crashes

> Yeah. And there is nothing good about it.

I beg to differ, and I think a lot of people will disagree with you. Regardless, your snarky remark fails to state any problems with the Linux kernel, or identify what you consider a "good" kernel.


I think he meant there is nothing good about complexity and bringing down the whole system on a crash. For the Linux kernel this is probably an unfortunate necessity, for an init system it is not.


He's probably thinking about some kind of micro kernel (which are pretty cool), or redundant CPUs (which probably wouldn't help much)?


> You must also reboot to upgrade sysvinit.

The whole point of the article is that you don't need to upgrade sysvinit or another minimal /sbin/init like those found in the BSDs.


The argument is kind of weak given how many kernel updates there are and how often they come (and a kernel upgrade _requires_ a reboot regardless of PID 1).


Not necessarily. Ksplice[1] for example allows the kernel to be upgraded without rebooting.

[1] https://www.ksplice.com/


You have completely missed the point of the article.

So if not systemd, what? Debian's discussion of whether to adopt systemd or not basically devolved into a false dichotomy between systemd and upstart. And except among grumpy old luddites, keeping legacy sysvinit is not an attractive option. So despite all its flaws, is systemd still the best option?

No.

After which he proceeds to explain how to solve points 1, 2, and 3 by moving most tasks out of PID 1.


It is as secure as it is because exploits were fixed one by one over the years.

That's largely how things get secure. Though pre-emptive security through code audits and library rewriting (the OpenBSD approach) also helps.

Init scripts have had those years. Systemd has not.


Yes, but we didn't decide to not use sysvinit on those grounds.

It's quite a nifty case to make, that we can't use a new init system because we haven't used the new init system for a long time already to find the bugs.


It's quite a nifty case to make, that we can't use a new init system

That's not the case I was making. Rather it was oofabz's apparent criticism of sysvinit's security profile based on the fact that bugs were found and removed individually. That's not a particularly valid criticism.

There are more valid ways of establishing software quality, based on defect estimation (defects found vs. estimated undiscovered defects remaining). The rate at which init scripts are changed matters too (and the fact that they can be modified, possibly erroneously, by individual systems admins is another consideration). A system that's been specifically designed to be resistant to coding and usage errors (again: OpenBSD) is better. Some init script systems are better than others; I've found the Debian scripts to be simpler, cleaner, and more robust than Red Hat's (which use multiple levels of includes and indirection).

In the case of systemd, I've found the touted benefits ("faster booting" -- ORLY? I prefer not having to boot/reboot systems), complexity, novelty, and track record of the primary developer to be quite troubling. Technology is a complexity management system. Much of the problem in the GNOME project comes from failing to manage complexity appropriately (oversimplifying a complex space, creating complex tools with more levels of indirection: gconf and related utilities). It's a train wreck I saw going off the rails over a decade ago. I've got similar fears for systemd.

Yes, booting is more complex; some systems (especially cloud deployments) may benefit from expedited startups, and dynamic management of devices and services can be a benefit. But all of that in PID 1? And don't even get me started on the logging aspects.

I'll be sitting this one out as long as I can.


> (defects found, estimated undiscovered defects remaining)

how does one estimate the latter? (Actually curious if this is a known thing)


It's a known thing. Look up "software defect estimation".

The methods aren't too dissimilar to how population ecologists estimate the size of a wild animal population (you tag individuals and note the rate at which you're repeatedly re-tagging the same ones); the estimated total falls out via statistics.
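The tag/re-tag analogy maps directly onto the two-sample Lincoln-Petersen estimator; a sketch (treating two independent review passes as the two 'tagging' rounds is an illustrative assumption, and the numbers are hypothetical):

```python
def lincoln_petersen(found_a, found_b, found_both):
    """Estimate total defects from two independent detection passes.

    found_a, found_b: defects each pass found ('tagged');
    found_both: defects found by both passes (the 're-tag' rate).
    """
    if found_both == 0:
        raise ValueError("no overlap between passes; estimate is unbounded")
    return found_a * found_b / found_both

# Hypothetical numbers: pass A finds 30 bugs, pass B finds 20, 12 overlap.
total = lincoln_petersen(30, 20, 12)
distinct_found = 30 + 20 - 12
print(total)                   # 50.0 estimated defects in total
print(total - distinct_found)  # 12.0 estimated still undiscovered
```

The intuition: the more often the second pass rediscovers bugs the first pass already found, the smaller the hidden population is likely to be.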

It's not a method that's applied frequently even within organizations (I've worked in and with QA numerous times), and my copy of Cem Kaner's Testing Computer Software doesn't seem to address the matter. Boehm's Software Engineering Economics discusses software reliability modelling at page 181 (Chapter 10: Performance Models and Cost-Effectiveness Models).

IEEE has "An efficient defect estimation method for software defect curves" which looks like it should cover the area, membership or purchase required:

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=124539...

"Applying Software Defect Estimations: Using a Risk Matrix for Tuning Test Effort", James Cusick, Wolters Kluwer

http://arxiv.org/pdf/0711.1669.pdf

BUG COUNT ESTIMATION: http://www.testdesigners.com/bugnets/bugcountestimation.html

References Capers-Jones, leading authority in the field.

Many methods start with the assumption that software coding is essentially a process of inserting bugs into code at a known rate (and there are studies which have established such rates with fairly high levels of confidence). Debugging and QA are then a bug-removal function. Knowing what your bug-insertion rate was allows you to estimate how many of those bugs you've removed.

That's the theory.


This just makes me want to make spec review, coverage/unit testing, and smoke testing more stringent requirements for new code :\


That actually strikes me as a more reasonable objection than many I've heard. On the other hand, systemd is actually in use in various places, so bugs are being looked for and found; this process doesn't begin with Debian's adoption, and you're back to balancing probabilities the same as you would with any change.


> Debian's adoption

In this case, adoption means moving systemd from being optional to being the default. Debian has had systemd in use for quite a while now, with current statistics at 7.5% of all installations (according to popcon: http://qa.debian.org/popcon.php?package=systemd).


Package names are a bit confusing here. The systemd package includes all the binaries - one of which, logind, is required by GNOME - but doesn't install systemd as /sbin/init and doesn't boot with it by default. The package to do that is called systemd-sysv, for reasons having to do with the special treatment of sysvinit as a "required" package in dpkg.

So 7.5% of Debian systems have it installed, but only about 0.3% (plus however many of that 7.5% that are setting init=systemd in GRUB) are using it.


Setting init=/sbin/systemd on the kernel cmdline is the recommended way, so I guess the 7.5% do count as using systemd.


Systemd is controlled by Red Hat in a way in which critical system components, including the kernel, haven't been controlled before - not by a single corporate entity.

That's what we know about this company from an old (2007) article:

> “When we rolled into Baghdad, we did it using open source,” General Justice continued. “It may come as a surprise to many of you, but the U.S. Army is “the” single largest install base for Red Hat Linux. I'm their largest customer.” [1]

It is better to go with a grass-roots solution, even the one technically inferior, that isn't being influenced by one single vendor or government.

[1] http://archive09.linux.com/feed/61302


> It is better to go with a grass-roots solution, even the one technically inferior, that isn't being influenced by one single vendor or government.

If politics was a problem the most practical way would be to just fork systemd because it has friendly license and is feature rich.


So we don't trust Red Hat because its customer is the US Army?

I would also say that all critical system components are controlled by different entities. But if you don't like systemd, you can use any of a dozen options.


I'd be more concerned about RH and its own incentives (look at the clusterfuck that's GNOME) than the US Army specifically.


Sure. I have a gut feeling that GNOME reworking was done solely to make trouble for Canonical.

The Interface Stability Promise [1] by systemd team is just a promise, nothing more. I wonder if Red Hat will keep it if it decides that it no longer serves their bottom line.

[1] http://www.freedesktop.org/wiki/Software/systemd/InterfaceSt...


Linux kernel is maintained by sane people, unlike systemd/udev: http://article.gmane.org/gmane.linux.kernel/1369384


> Reboot to Upgrade

This is not needed anymore. Check systemd's manual for 'systemctl daemon-reexec'.


The article specifically addresses this feature, but I'm not knowledgeable enough about the topic to judge the quality of the argument.


So, I used to vaguely help develop another proposed init replacement a few years ago, and it's entirely correct. The only way to upgrade init is to have it dump state and re-exec() itself, and there are theoretical failure modes that will cause init to crash before it's even possible to set up a crash handler to catch the problem, causing a kernel panic.

What's more, supporting upgrades this way means you need to support every previous state file format, and the more state and features there are, the harder this is to do. (Seriously, it was a pain in the butt to keep track of.) systemd is developed primarily by and for Red Hat, which avoids this problem by not supporting upgrades to a new distro release from within the running system - but most other major distros do, and will likely be bitten hard by the issue.

Without support for this you can't even reboot cleanly after upgrading init or any of the libraries it uses, because init will keep the old deleted executable open and make it impossible to remount root as read-only (an obscure quirk of Unix filesystems that basically only init developers need to know about). You don't just need to reboot to upgrade init, you need to boot into a special upgrade environment that doesn't use the system's init, which is what Fedora does. That's also why Fedora needs daemon-reexec - otherwise it wouldn't even be able to apply security updates to glibc without rebooting first. Compare this with kernel upgrades, which can be installed immediately and simply don't take effect until the next reboot.
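The dump-state-and-re-exec dance described above can be sketched roughly like this (an illustration only; the state format and paths are hypothetical, and a real init would also have to keep reading every state format it ever wrote):

```python
import json
import os

def save_state(services, path):
    """Serialize everything the new init binary must take over:
    which services run under which PIDs, restart policy, etc.
    (Hypothetical format -- every past version must stay readable.)"""
    with open(path, "w") as f:
        json.dump(services, f)

def load_state(path):
    with open(path) as f:
        return json.load(f)

def reexec(state_path):
    # Replace our own process image with the (possibly upgraded) binary.
    # Between execv() and the new binary finishing load_state() there is
    # a window where any crash panics the kernel: PID 1 has no supervisor.
    os.execv("/sbin/init", ["/sbin/init", "--restore", state_path])
```

The round trip is the easy part; the hard part the parent describes is that `load_state` must cope with state written by every older version of the binary.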


> Without support for this you can't even reboot cleanly after upgrading init or any of the libraries it uses, because init will keep old deleted executable open and make it impossible to remount root as read-only

Lennart Poettering on how this is done in systemd:

https://plus.google.com/+LennartPoetteringTheOneAndOnly/post...


That's quite a clever solution - except that upon reading what Lennart says in the comments, systemd-shutdown is apparently closely coupled with systemd and receives enough state from it that it cannot be used with any other init system. So I'm not sure it's safe to rely on that for any major upgrade. (Not entirely sure why it needs all that state either.)


See what happens when you kill PID 1 on a system using sysvinit.


On Linux, nothing:

The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers. This is done to assure the system is not brought down accidentally. (From kill(2))

In practice, you will find that:

    kill(1, SIGKILL)                        = 0
results in exactly...no effect.
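You can see which signals PID 1 has actually installed handlers for by decoding the SigCgt bitmask in /proc/1/status (a sketch; the sample mask below is made up):

```python
import signal

def caught_signals(sigcgt_hex):
    """Decode a SigCgt mask from /proc/<pid>/status: if bit N-1 is
    set, the process has a handler installed for signal number N."""
    mask = int(sigcgt_hex, 16)
    names = []
    for n in range(1, 65):
        if mask & (1 << (n - 1)):
            try:
                names.append(signal.Signals(n).name)
            except ValueError:
                names.append("SIG%d" % n)  # e.g. real-time signals
    return names

# On a real Linux box:
#   with open("/proc/1/status") as f:
#       mask = next(l for l in f if l.startswith("SigCgt:")).split()[1]
#   print(caught_signals(mask))

# Made-up mask with handlers for HUP, INT and TERM:
print(caught_signals("0000000000004003"))  # ['SIGHUP', 'SIGINT', 'SIGTERM']
```

SIGKILL (9) can never appear in that list, since it cannot be caught; that is why the kill(2) rule quoted above makes `kill -9 1` a no-op.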


It's also because Unix doesn't generally believe in being forgiving (or in UX in general):

    someapp &

    [job 1 backgrounded]

    kill %1

    (job 1 is killed - yep, you can use kill for jobs)
Miss a percent there? You've just sent a TERM to init.


Which won't matter if you're a normal user. If you're root - why are you using the job system to kill your jobs?


> If you're root - why are you using the job system to kill your jobs?

Probably because you know the features of your shell and enjoy productivity.


Because that's a job for kill.


I wonder what would happen if you used the OOM killer on process ID 1.


PID 1 is meant to be immune to the OOM killer. Last time I did development work in that area, it was also impossible to attach a debugger to PID 1 without patching the kernel, and although it worked I'm not sure how safe it was to do so.


    zx2c4@thinkpad ~ $ sudo strace -p 1
    Process 1 attached
    select(11, [10], NULL, NULL, {1, 122329}) = 0 (Timeout)
    stat("/dev/initctl", {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    fstat(10, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    stat("/dev/initctl", {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    select(11, [10], NULL, NULL, {5, 0})    = 0 (Timeout)
    stat("/dev/initctl", {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    fstat(10, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    stat("/dev/initctl", {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
    select(11, [10], NULL, NULL, {5, 0}^CProcess 1 detached
    <detached ...>
Ptracing seems to work for me.


My biggest concern with systemd has to do with its creator, Lennart Poettering. He created PulseAudio and Avahi. Neither of those are particularly reliable, stable pieces of software. Also, Lennart's opposition to making systemd portable is worrying. The other unixes have it hard enough already.

I don't want to single him out and hate on him. Lennart's probably a nice guy, and he's contributed a lot more to open source than I have. But having the people responsible for PulseAudio involved in the next-generation init sounds like a recipe for disaster.


> He created PulseAudio and Avahi. Neither of those are reliable, stable pieces of software

They weren't when they were first created, but since they became mainstream and other people joined their teams, I've found them to be just as stable* and far more featureful than the things they replaced.

* not perfect, just not worse than what came before


They're still not. For instance, the latest PulseAudio release includes a patch that contained incomplete changes that broke various resamplers but was included anyway. It'll be reverted in the next major release which will include other half-baked changes. (It turns out that wasn't even the cause of the crashes I was experiencing; even before that patch it was broken in a way that meant just having the official volume control application open caused random crashes. I think anyway - unfortunately, all the variable names are misleading and the comments non-existent, so it's impossible to be sure what the code's meant to do.)


A minor disagreement: that software is perfectly reliable with decent hardware and well-behaved third-party software; it just does not degrade gracefully (to say the least) when things it interoperates with misbehave. If you're not doing networking or audio, it doesn't matter if buggy hardware makes PA go bonkers or weird service definitions make Avahi behave weirdly. A lack of error handling and error recovery.

It's a violation of the old Unix principle of "be liberal in what you accept".

The most likely outcome of that philosophy instilled in everyone's init system is going to be a heck of a lot of systemd crashes, although they'll all technically have a root cause in some other software or perhaps dodgy hardware. None of those crashes would occur with an init system that handles errors better, so systemd is going to catch a lot of blame.


Actually, the flat volumes feature that's on by default in recent PulseAudio versions (and cannot be disabled without editing a config file as root) relies on entirely unspecified behaviour of the audio hardware, namely the timing relationship between volume changes and sample data. It has a hack that avoids deafening people unexpectedly on most hardware, but at the expense of very audible volume glitches every time it decides to fiddle with the global volume.

(Also, thanks to inconvenient analog issues like component tolerances, the most reliable way for hardware to deal with PulseAudio's flat volumes is to provide no volume controls except for a single digital one that simply undoes all of PulseAudio's weird fiddling. This volume control would provide no functionality that couldn't just be implemented in software, but PulseAudio handles devices without one really badly.)


Many open source projects are not particularly reliable. PulseAudio and Avahi are, if anything, well above average stability-wise. They also have many more developers than Lennart, in case you are fighting some sort of personal war against him.


I went to Lennart's systemd talk at Linux Plumbers conf a few years back. He seems mostly technically competent, and he's definitely made big contributions, but his attitude ("my way or the highway") leaves much to be desired. That being said, it may be an (over)reaction to all the hate he's gotten. And I very much agree, his attitude on portability is going to come back to bite him (and many others).


Oh, good! Based on my experience with attempting to run Linux as a first-class citizen, rather than in a VM hosted by an OS which can deal with actual hardware, I feel sure we can all look forward to another decade of "You can't get it working? Oh, you must just be an idiot, then." followed by "Why does no one use Linux on the desktop?"


Because you blame Apple when OSX doesn't run on your Windows laptop?

It is not the fault of the kernel maintainers, or any of the peripheral developers, that you can't put Linux on your Windows computer. If you want a computer that runs Linux, buy a laptop or desktop meant to run it from ThinkPenguin, System76, Dell, or HP, or build your own and do the research on which vendors provide driver support.


I won't ever use a distribution that uses systemd as pid 1. There is only one reason for this.

I want to be in control of how my system works. To do that I need to be sure of what it is doing and how. As the complexity of a system grows, the number of things it does under the hood grows. At some point I can't keep track of what's going on, and I no longer have confidence in my knowledge of the system, without setting aside time for a code review.

With pid 1 being incredibly simple, and the executables it runs being incredibly simple, I know what's going on, I know what the possible problems are and I know how to fix them. The more complex it is, the more difficult it is for me to know what's going on. That's all it comes down to for me.

As an aside: A lot of systemd is API-driven, meaning if you want a feature you have to build it into the system as new code. This means you have to be a C developer to leverage some new feature, instead of writing an interpreted script and interacting with it via pipes/plugins/etc. If your current system is dependent on some feature that doesn't yet exist in systemd, you now have to build it in, instead of using what worked with sysvinit before.

Not only is systemd broken by design, it's still in its infancy, and does not support the same features that have been built into other systems for years. This additional load of development for many orgs serves no purpose, and is essentially ignored by the leadership that decide to change init systems by fiat.


> I want to be in control of how my system works. To do that I need to be sure of what it is doing and how. As the complexity of a system grows, the number of things it does under the hood grows. At some point I can't keep track of what's going on, and I no longer have confidence in my knowledge of the system, without setting aside time for a code review.

> With pid 1 being incredibly simple, and the executables it runs being incredibly simple, I know what's going on, I know what the possible problems are and I know how to fix them. The more complex it is, the more difficult it is for me to know what's going on. That's all it comes down to for me.

How is, say, sysv init better in this regard? Instead of having declarative service files that say what should be done, you have a collection of disparate shell scripts, all with their own implementations of basically the same things. The complexity isn't reduced, it's spread around (and IMO it's more complicated).

If you want to have control over your system, wouldn't you want to use an init system that can actually keep track of all of the child processes of a service? Brittle hacks like pidfiles cannot ensure management over all child processes of a service -- this is only possible with cgroups.
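To make the pidfile problem concrete, here is a small sketch of the classic double fork: the daemonized grandchild gets reparented away from whoever spawned it, so a supervisor that only remembers the PID it started loses the trail (hypothetical demo, Unix-only):

```python
import os
import time

def daemonize_and_report():
    """Classic double fork. Returns (pid_we_spawned, daemon's new parent).
    After the middle process exits, the grandchild is reparented to init
    (or the nearest subreaper), so a supervisor holding only the spawned
    PID in a pidfile has lost track of the real daemon."""
    r, w = os.pipe()
    first_child = os.fork()
    if first_child == 0:
        middle_pid = os.getpid()
        if os.fork() == 0:                  # grandchild: the "daemon"
            while os.getppid() == middle_pid:
                time.sleep(0.01)            # wait to be reparented
            os.write(w, b"%d" % os.getppid())
            os._exit(0)
        os._exit(0)                         # middle process exits at once
    os.waitpid(first_child, 0)              # reap the middle process
    os.close(w)
    new_parent = int(os.read(r, 32))
    os.close(r)
    return first_child, new_parent

if __name__ == "__main__":
    spawned, adopted_by = daemonize_and_report()
    print(spawned != adopted_by)            # prints True: the PID trail is broken
```

Cgroup membership, by contrast, survives forking and reparenting, so a cgroup-based manager can always enumerate and signal every descendant of a service.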

> As an aside: A lot of systemd is API-driven, meaning if you want a feature you have to build it into the system as new code. This means you have to be a C developer to leverage some new feature, instead of writing an interpreted script and interacting with it via pipes/plugins/etc. If your current system is dependent on some feature that doesn't yet exist in systemd, you now have to build it in, instead of using what worked with sysvinit before.

No, you just use D-Bus, from whatever your favorite language is, using that language's D-Bus bindings. Systemd has a stable D-Bus API for this purpose that's guaranteed to work in the future.

> Not only is systemd broken by design, it's still in its infancy, and does not support the same features that have been built into other systems for years.

Out of curiosity, what features are you referring to specifically?

> This additional load of development for many orgs serves no purpose, and is essentially ignored by the leadership that decide to change init systems by fiat.

This would be more convincing if it were just Lennart & co. making the decisions, but in the absence of evidence to the contrary, I'm going to go out on a limb and assume that the technical leadership of Fedora, RHEL, OpenSUSE, Arch, and now Debian do not all make decisions by fiat for no reason.


First, it's less complex because the technology involved (shell scripts) is simpler to learn, design, implement, customize, debug, etc. Second, it's less complex because it's based on known technology and there's no learning curve. Third, it's less complex because the components it depends on are independent and small/simple, versus monolithic and interdependent. Fourth, troubleshooting sysvinit is incredibly simple, and you don't have to know anything about the tools involved to debug them. Fifth, it's less complex because there's less specialized behavior to account for.

Keeping track of child processes does not mean that the system is under control, nor that I understand it better, which is necessary for me to have control. And cgroups are not necessary at all to manage child processes. They simply make it more compartmentalized, and help deal with zombie parents, which also can be dealt with without cgroups.

So your language has to have D-Bus bindings, which hopefully are maintained for your language. And you have to learn how to use those bindings. More complex, more things going on under the hood, steeper learning curve.

The features I refer to are site-specific. I've worked for many companies that heavily modify their services and system init, logging, etc to fit their needs. I've worked on embedded applications, big server farms, and custom tailored products. They all customize various parts of the operating system, and have written custom features for them. Now it all has to be rewritten for systemd.

I can't comment on the exact decision-making process behind adopting systemd, but I can say almost all of what I have read about it is based on a mob mentality that tries to impose its will on the organization rather than work out compromises.


FLAWLESS VICTORY

(Yeah yeah, Reddit, I know.)


The scariest part no one is talking about is the feature creep. All software eventually dies; systemd will be no different. Some folks hope that happens immediately, some hope we're "stuck" with it for a couple of generations, maybe 30 years. Regardless of exactly when, it's going to be replaced someday, and massive feature creep seems intentionally designed to make the eventual changeover as painful as possible.

Someday in the future, either 2016 or 2066 someone is going to want to rearchitect the "thing that the kernel starts first" and is going to freak out at the insane feature creep... so you're telling me that I not only have to create something to start services which is no big deal, but also an entire logging system, and a hotplug handler, and a cgroup server, and ... WTF does this blasted thing have an MP3 player, a web server, and an IRC client in it too?

It's an awful design but probably survivable, at least temporarily, and once the flamewars die down a sensible replacement with a good design can be rolled out to replace it. But mid-flamewar, with all manner of business / marketing money flying everywhere on all sides, is not the ideal blue-sky redevelopment time. As much as it pains me to say it, rolling out an init with almost the worst inherent design possible does have some value to the ecosystem as a counterexample: here's what not to do with 2.0. Obviously its replacement will be better, I just hope it doesn't swing too far in the opposite direction as a reaction. The point of this paragraph is that the most important thing to do at this time is to lay the seeds for the replacement, while not over-reacting to the disaster of systemd.

So, the replacement for systemd, to be rolled out sometime between 2015 and 2065, should ... (insert feature list here).


Great, now what will the unlucky soul who'll need to replace Linux think?

The problem we are seeing is the framework vs. libraries dichotomy. Cut the features into too many packages, and you won't be able to use them anymore. Integrate them too much, and you won't be able to maintain them anymore.

Anyway, just like with the kernel, some day we must be able to say "launching the basic processes is a solved problem", and actually solve it in a way that does not need to be replaced. We are not there, but we are at a point where the feature set is quite stable, so I guess we can improve usability a bit, at the expense of maintainability.


I get it now. The distro ecosystem has been moving towards the other half of Sutherland's wheel of reincarnation - in this case, from modular systems towards monolithic ones. First it was stuff in the desktop space - now it's creeping into other areas.

You can state the particular point of inflection as a terrible thing, or a good thing, but seen as a technology learning cycle, it becomes _necessary_ even if the implementation chosen is shit and you hate everything about it. At the other end of it, it still gets broken up again into a different set of modular components whose design wouldn't have been apparent at the beginning.

Since the initial discussion stage is over, I would say, if you really think this is the wrong solution (and I don't consider myself expert enough on inits to cast much judgment here), your work is best spent on the code environment immediately surrounding systemd - not direct competitors, but stuff that it has to deal with at some point. That will put pressure on the design to either break entirely or evolve to support whatever you've come up with - and when that happens you get to walk in with a patch to show and kick off another firestorm.


The argument re: larger surface for updates is true. Systemd is smaller than the things it replaces, and consistent in itself, but all the logic is in PID 1.

I fail to see a better alternative though.

Old, disparate init systems repeat logic for:

- starting services (the same init script copied 90 times, often badly)

- journalling (unmaintained syslogd facilities mean apps all report themselves as local0, and five different fixes for that)

- a billion different 'sentinel / watch' type apps

- various hacks on hacks for dependencies, parallel init, etc.

If you rip out all the redundancy and replace them with something unified, that unified functionality can (and should) exist at low level.

Maybe the argument is that a better way of handling systemd updates is needed?


These problems are all solved without destroying the Unix philosophy completely:

starting services - daemontools.

journaling - daemontools

billion different sentinel/watch type apps - daemontools.

dependencies - just don't do it. This is the old age of power sequencers again. It's coupling.

Daemontools is a set of tiny programs that talk to each other, are fully privilege separated and guarantee reliability. Why they can't do this for systemd, I don't know. If you remove the DJB path conventions from it (the major objections), it's the right tool for the job.
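For readers who haven't used it: a daemontools service is just a directory containing a run script that supervise executes and restarts whenever it exits (a hypothetical example; setuidgid comes with daemontools):

```sh
#!/bin/sh
# /service/myapp/run -- supervise runs this and restarts it on exit.
# Hypothetical service; paths and names are illustrative.
exec 2>&1                                   # route stderr to the logger too
exec setuidgid myapp /usr/local/bin/myapp --foreground
```

A sibling log/run script (typically `exec multilog t /var/log/myapp`) handles timestamped, rotated logging as a separate, privilege-separated process.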


> starting services - daemontools

Daemontools is great, but please don't pretend it's a proper general-purpose init system. How do you start a certain service before another one? I suppose you can hack that inside the run script but then you're yet again duplicating logic, badly.

> journaling - daemontools

What? Daemontools provides a nice utility for logging a process stdout/stderr to a file. That's it. That doesn't come close to the problem that journald tries to solve.

> billion different sentinel/watch type apps - daemontools.

This is about the only thing that daemontools solves.

> dependencies - just don't do it. This is the old age of power sequencers again. It's coupling.

And here comes the mythical "coupling is bad" statement while completely ignoring the problem. No, I don't want my NFS daemon to be started before my network connection is up, thank you very much.
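For reference, this ordering is exactly what systemd unit dependencies express; a hypothetical unit fragment (not any shipping service file) looks like:

```ini
# my-nfs-ish.service -- illustrative only
[Unit]
Description=Service that must not start before the network is up
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
```

`After=` controls only ordering; `Wants=` additionally pulls the target in, which is why the two are usually declared together.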

Your post shows a complete lack of understanding of the problem.


> Daemontools is great, but please don't pretend it's a proper general-purpose init system. How do you start a certain service before another one? I suppose you can hack that inside the run script but then you're yet again duplicating logic, badly.

I'm not saying it's a replacement. The model is suitable for a replacement, i.e. inspiration should be taken from it.

> What? Daemontools provides a nice utility for logging a process stdout/stderr to a file. That's it. That doesn't come close to the problem that journald tries to solve.

It guarantees that the logs are stamped externally, are committed to disk and that the logs haven't been tampered with by isolating them from other processes and user accounts that can modify or write to them. Don't need journaling on top, just separation of concerns.

Indexing - don't need it. No one does. That's an external problem solved by syslog collectors like Splunk. As someone who deals with up to 500GB of logs a day, I know my shit. Systemd doesn't solve a thing here. In fact it adds overhead to a solved problem.

> And here comes the mythical "coupling is bad" statement while completely ignoring the problem. No, I don't want my NFS daemon to be started before my network connection is up, thank you very much.

Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone falls over the cable? There is so little of the problem solved by systemd it's unbelievable. The RIGHT solution is to make the processes resilient to failure conditions like this.


> It guarantees that the logs are stamped externally, are committed to disk and that the logs haven't been tampered with by isolating them from other processes and user accounts that can modify or write to them. Don't need journaling on top, just separation of concerns.

What if you want to consolidate all your logs in a single place instead of scattered over many files? What if you want the logs to be shipped to a remote host without storing them locally?

If your answer is syslog: welcome back to the problem.

> Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone falls over the cable? There is so little of the problem solved by systemd it's unbelievable. The RIGHT solution is to make the processes resilient to failure conditions like this.

That has got nothing to do with the init system. Just because the services themselves can recover from dependencies that temporarily go down, doesn't mean that I want to see tons of useless error messages during startup. I want my NFS daemon to start after the network is up, so that it doesn't bother me with useless "network is down" messages. Recoverability is a completely independent (though desirable) property.


It's not different, as both can and must be solved by network daemons listening for network change events. So solving reliability solves the startup issue.


> Indexing - don't need it. No one does. That's an external problem solved by syslog collectors like Splunk. As someone who deals with up to 500Gb of logs a day, I know my shit. Systemd doesn't solve a thing here. In fact it adds overhead to a solved problem.

Does systemd's journal still use a binary format? That is a no-go for me. Inevitably it will get corrupted, and then you rely on the journal reader being able to recover your logs, or fix bugs in it until it does.

I'd be more willing to consider systemd as an alternative if it kept using a human-readable log format, that can be read with cat/tail in a worst-case scenario. If it wants to index it, it can keep an index on the side, no big deal if that gets corrupted/out of date, as it can always be rebuilt.
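The "index on the side" idea is easy to sketch: keep the log as plain append-only text and maintain a separate byte-offset index that can be thrown away and rebuilt at any time (hypothetical format):

```python
import os

def append_log(log_path, index_path, line):
    """Append one text log line; record its byte offset in a side index."""
    offset = os.path.getsize(log_path) if os.path.exists(log_path) else 0
    with open(log_path, "a") as log:
        log.write(line + "\n")
    with open(index_path, "a") as idx:
        idx.write("%d\n" % offset)

def rebuild_index(log_path, index_path):
    """The index is disposable: if it is corrupted or stale, regenerate
    it from the (still cat/tail-able) text log."""
    with open(index_path, "w") as idx, open(log_path, "rb") as log:
        offset = 0
        for line in log:
            idx.write("%d\n" % offset)
            offset += len(line)

def read_entry(log_path, index_path, n):
    """Random access to entry n via the index."""
    with open(index_path) as idx:
        offset = int(idx.readlines()[n])
    with open(log_path, "rb") as log:
        log.seek(offset)
        return log.readline().decode().rstrip("\n")
```

This gives roughly the property the commenter wants: fast lookups without making the authoritative log unreadable in a worst-case scenario.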


Yep. It still uses a binary format. An ugly fucker of one too. You can have it pipe to a real syslogd, but at that point, I'd rather just have it send to one and get that ugly monstrosity that is systemd out of the way.


I don't buy this argument. SQLite and PostgreSQL's databases are binary too, but that's ok?


Databases can afford to fsync to ensure consistency, and they've been heavily tested to ensure it works properly. And even then it's still possible that the file becomes corrupt, if the OS or the disk lies about fsync: https://www.sqlite.org/howtocorrupt.html

Has journald been tested on how well it copes with sudden reboots, kernel panics, powerloss, etc.?

When something goes wrong you usually want to be able to still read your logs to figure out what happened, and you may not even be able to boot the system properly.


> Has journald been tested on how well it copes with sudden reboots, kernel panics, powerloss, etc.?

Are you assuming it hasn't been tested in these situations just because you don't know?


Databases have been around a lot longer than systemd, so I assume they are better tested than journald in this regard. I don't use systemd - because Debian doesn't use it (yet) - so it was rather a question for those who do use systemd.

I'm not saying that I'd be happy to have a database as journald backend, I'd be just less concerned.


OK, I see. Thanks for the clarification.

systemd's test suite[1] doesn't seem to cover those cases anyway.

[1]: http://cgit.freedesktop.org/systemd/systemd/tree/test


I think it has not been tested in these situations enough. It's not in production anywhere major yet.

The assertion is valid IMHO.


I wouldn't put my system logs into either of those.


> Well what the hell does your NFS daemon do when your network goes down, your adapter gets hotplugged after a failure or someone falls over the cable?

Let's say it crashes or hangs. In that situation, systemd will notice it, clean up, notice that the network is not up, wait until it is, and restart it.

Half of what makes systemd so great is that it allows the daemons of the system to be much less robust than it is, and do the right thing so that the entire system will remain robust.
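That clean-up-and-restart policy is declared per service; a hypothetical unit fragment:

```ini
# Illustrative fragment of a service unit
[Service]
ExecStart=/usr/sbin/nfs-ish-daemon
Restart=on-failure
RestartSec=5
```

With `Restart=on-failure`, the manager restarts the service after crashes and non-zero exits, waiting `RestartSec` between attempts.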


Dependencies between services exist, and for good reason (e.g. various services depending on a database). How do you propose to deal with them?


It's pretty basic software design. A few solutions:

1. Web servers return error pages when databases are unavailable. They don't go down.

2. Postfix doesn't die if your virtual maps database server is not available. It waits patiently.

This isn't rocket science.
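The "waits patiently" behaviour is a standard retry loop with backoff; a minimal generic sketch (not Postfix's actual code):

```python
import time

def wait_for_dependency(connect, base_delay=0.5, max_delay=30.0, attempts=None):
    """Retry a flaky dependency with exponential backoff instead of
    dying at startup. `connect` is any callable that raises OSError
    while the dependency is down; its result is returned on success."""
    delay = base_delay
    tries = 0
    while True:
        try:
            return connect()
        except OSError:
            tries += 1
            if attempts is not None and tries >= attempts:
                raise  # give up after a bounded number of attempts
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # back off, capped
```

A daemon built this way tolerates dependencies coming and going; explicit ordering in the init system then mainly buys quieter logs and faster time-to-ready rather than correctness.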


I'd rather Postfix fail early, than wait until $TIMEOUT to discover that a maps source isn't available.


I think what parent is trying to say in so many words is "I have no use case for tracking service dependencies, therefore it is pointless and should be cut out". An unfortunately common attitude among technical people.


No. I'm saying that relying on startup sequencing is a bad idea full stop.


That's why systemd does dependency tracking and socket activation instead of sequencing.


It doesn't need to do either of those, either.


You're certainly entitled to your opinion about dependency tracking being an unnecessary luxury, and I'm sure you will find a number of Linux distributions and/or BSDs within your spartan requirements. Not to mention that you can at least for the time being stay with sysvinit (though this won't solve your issue on Debian, since it reads dependencies via a horrible hack in the comments of the init scripts).


I'm a FreeBSD and Windows person so yes you are right.

sysvinit dependency management is horrible as well so I cannot complain about your point there.


But you're not making a good point. If the network goes down, anything which depends on it will also go down. That's a much stronger guarantee than "let's hope this application handles network failure graciously and will kindly start working again as soon as the network is restored".


So the system will be busy starting and stopping services in case of faulty network, instead of having them sit there patiently while the line continues to flap? Great idea - not sure when Cisco will copycat that one, though...


It doesn't really matter either way, because the symptom will be that you can't send mail; either that's how you find out or you have a proper monitoring solution, in which case it doesn't matter whether Postfix actually died or is patiently waiting for the resource to become available. To sum up, I don't see why dependencies need to exist, it seems better to write daemons in such a way that they can deal with temporary absence of resources.


It does matter. Why are you starting postfix if you have no network? You have plenty of use cases where boot speed is actually important.


On a mail server, it is no great improvement to rapidly boot to a state where mail is not working.


Obviously, this was just an example. If you're making an appliance, boot speed matters. If you're aiming for a fast desktop experience, boot speed matters.


This probably hinges on how you define the word "great," but you can diagnose a nonfunctioning mail server a lot easier if it's actually booted up.


Linux is not a service oriented architecture. That is really more of an economic model than a viable model for engineering fault tolerant designs.


This is a little off topic, but: DJB is obviously incredibly talented, yet frequently out of step (rightly or wrongly) with the general Linux community.

I think it would be excellent if DJB had his own OS - maybe with a Linux kernel, but with a completely different userspace like Android. Specifically:

- the init

- the packaging system and file hierarchy

- the daemons

I'd love to see and use it.


Please suggest something a bit better. Daemontools doesn't even support limits. As a result, I had to rip it out of a ~2k node cluster in favor of monit.


I'm not suggesting daemontools. I'm suggesting "not systemd" and "something that includes the valuable lessons that daemontools taught us".

The first of which is not to fuck up the Unix philosophy by producing a monolithic pile of junk.


I'm not sure which limits you are talking about. These, or others?

> http://cr.yp.to/daemontools/softlimit.html


You can also use 'runit', which has quite similar design, but comes pre-packaged for a lot of systems, and can run as init, too (only if you want to).


> This is the old age of power sequencers again. It's coupling.

Is this referring to power transmission or something?


No. Back in the bad old days of big Unix machines and mainframes we used power sequencers to bring physical bits of hardware up in order. This entire problem has now moved into the software space. With hardware, dependencies were painful, expensive and relied on lots of voodoo and dependency resolution. The same with software. Eventually the hardware manufacturers and the OS people got together and threw this out of the window.

Now Linux is about to fail to learn from those mistakes like it failed to learn about audio subsystems, firewalls, network configuration, desktop environments, graphics architecture etc.


Maybe this is naive, but can't that repeated logic be put into a shared library (or a shared helper script)? A common, shared service for starting services reliably, logging them coherently, managing service dependencies, and restarting failed services seems like a great idea. But PID 1 doesn't seem like the right place for them.


You're right. But there are several concerns here. The whole thing is a bunch of concerns crudely bundled into one concern inside a single package in systemd.

This is already done in another operating system where it is successfully broken down into components which are exposed through a system-wide common RPC and component architecture so notifications and instructions can be moved around seamlessly. The whole thing is covered by ACLs, privilege separation, has immutable logging and a configuration database to drive it all. Oh and it's about 20 years old now.

And it works really well.

I'll let you guess the vendor (and no it's not Apple).


They can, and init systems already do that. Init scripts import a common library with functions for starting and stopping daemons etc. However the result is still a big mess. Libraries are not frameworks, so you still end up with tons of boilerplate code everywhere. Take a look at /etc/init.d/* and see whether you can make sense of things at a glance.

Furthermore, SysV init is not designed for service supervision, so SysV init scripts cannot implement that functionality.
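
To make the boilerplate concrete, here is a minimal Debian-style init script sketch (the daemon name "mydaemon" and its paths are hypothetical); nearly all of this scaffolding is repeated, with small variations, in every script under /etc/init.d/:

    #!/bin/sh
    # Hypothetical /etc/init.d/mydaemon (Debian-style; helper names vary by distro).
    # Every init script repeats this same start/stop scaffolding by hand.
    . /lib/lsb/init-functions

    DAEMON=/usr/sbin/mydaemon
    PIDFILE=/var/run/mydaemon.pid

    case "$1" in
      start)
        log_daemon_msg "Starting mydaemon"
        start-stop-daemon --start --quiet --pidfile "$PIDFILE" --exec "$DAEMON"
        log_end_msg $?
        ;;
      stop)
        log_daemon_msg "Stopping mydaemon"
        start-stop-daemon --stop --quiet --pidfile "$PIDFILE"
        log_end_msg $?
        ;;
      restart)
        "$0" stop
        "$0" start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
    esac

Note what's missing: nothing here supervises the daemon after it forks, which is exactly the gap described above.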


The major problem is the requirements and the system dependencies it introduces on a core component. http://harmful.cat-v.org/software/dynamic-linking/

systemd introduces a major PITA for the OS packager by putting a lot of dependencies on a lot of parts with "moving components", making it both more vulnerable and more likely to fail (it is not complex, it is complicated).

Systemd also tends to become a hard requirement for a lot of packages, making it impossible to opt out of.

It makes dependency resolution a logical mess, without justifying that cost with any real improvement.

It is introducing the DLL hell for the most crucial part of the system (PID 1).


Systemd makes my life difficult. Case in point: I want to stop cups, so I run "sudo service cups stop". This appears to work, but then it prints out a warning along the lines of:

  cups was stopped, but it can still be activated by:
  - cups.socket
  - cups.path
What are these things and why are they there at all? How do I shut them off? It doesn't say, so stopping a service has become pointless. If I tell it to stop, make it stop!
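
For the record, I eventually found that the activating units can themselves be stopped; assuming the stock cups packaging, where cups.socket provides socket activation and cups.path provides path activation, something like this makes it actually stay down:

    systemctl list-units 'cups*'        # see which cups-related units exist
    sudo systemctl stop cups.service cups.socket cups.path
    # and to keep them off across boots:
    sudo systemctl disable cups.service cups.socket cups.path

But having to know all that just to stop a printer daemon is exactly the problem.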


> The desktop is quickly becoming irrelevant. The future platform is going to be mobile and is going to be dealing with the reality of running untrusted applications

This is off topic (though it's from the article :p), but is it really becoming irrelevant? What will replace things like typing lots of text, photoshopping/gimping, programming in an IDE, browsing with multiple browser windows with dozens of tabs each, etc.?

Would Linux users abandon the desktop, if so, for what?

P.S. Unless I can plug my phone into something with a keyboard, mouse and big monitors (or a more productive equivalent), and it is as fast as my current desktop and has desktop-oriented software in that mode, I don't see myself abandoning the desktop.


The desktop isn't hip, and it's not a growing source of sales; but most people I know still spend at least 6 hours a day in front of a desktop computer, with no indication that that will change.

So my own answer is a firm "no".


Desktops are not really going away, they're just taking a roundabout. Instead of desktops people are now buying laptops, and attaching external monitors, keyboards, mice.

The next step could be people buying tablets, and attaching the external devices for power use.

The difference is subtle, and it seems as if nothing is materially changing, but the current tablet/mobile ecosystem is dominated by application-container operating systems like Android and iOS, and it seems unlikely that that is going to change.

This is the movement that is obsoleting traditional Linux and Windows, at least on consumer hardware.

So to answer your question: "typing lots of text, photoshopping/gimping, programming IDE, browsing with multiple browser windows" will be replaced by functionality in containerized applications or 'apps'. The apps are bought/downloaded from app repositories that are loosely maintained by commercial entities, and they could very well be untrustworthy.

As to your P.S.: yes, exactly, so you already know you are going to abandon the desktop.


because I really don't need actual Photoshop or development tools that have been refined for many decades, I just need shitty little Android apps written in the last few years.

No, that is wrong.


What makes you think that Photoshop or development tools can't be either apps or web applications?


> As to your P.S.: yes, exactly, so you already know you are going to abandoning the desktop.

Oh, hmm, I guess it would also really depend on what OS and software is available for it and how open the whole ecosystem is; I mean, something unix-like that you can modify, not something single-closed-source-app-at-a-time-like :)

And if a desktop machine will allow you to stuff many more powerful CPUs and GPUs in it I may keep using one anyway :D


Discussion on reddit: https://pay.reddit.com/r/programming/comments/1xggg8/broken_...

On that note, nothing screams FUD like a screenshot of Windows behaviour on an article like that. Is there a Linux-Windows equivalent to Godwin's law?


The picture is very apt when paired with this sentence:

> Unfortunately, by moving large amounts of functionality that's likely to need to be upgraded into PID 1, systemd makes it impossible to upgrade without rebooting. This leads to "Linux" becoming the laughing stock of Windows fans, as happened with Ubuntu a long time ago.


This reeks of the same kind of argument that was used to push for microkernels. Arbitrarily cutting up processes into smaller parts is not the same as decoupled design.


It's not arbitrary. There are logical security, reliability and convention boundaries that systemd doesn't consider which are raised in this article.

Microkernels are a different subject altogether.

To be honest, the only service management solution I've seen that I actually like for Unix platforms is BSD's simple rc and daemontools on top of it for process management.


I don't think that's the point of the argument - after all, systemd is already arbitrarily cut up into smaller processes, it's just that they're all tightly coupled and nonfunctional on their own.


Stuff like this is why I am increasingly getting back into BSD and, recently, Minix3. I think this is just part of a larger issue with the *nix ecosystem, where lots of philosophical and design assertions are proving to no longer work, while some are falsely said to no longer work but still do. With the Linux kernel now at over 10 million lines of code, peer review of the source seems too big a job for random unpaid volunteers. I think we need to move back to simplicity and decentralization as much as possible, because the problem with centralization is that it makes for a weak system.


This was posted earlier today, under the exact same title, but with a URL that differed solely by a trailing slash. It hit the front page, too! Way to go, HN duplicate detector.

https://news.ycombinator.com/item?id=7207655


Lots of FUD there on ewontfix.com.

A better source of information:

https://wiki.debian.org/Debate/initsystem/systemd


I guess this post has some relevance in this thread: http://0pointer.de/blog/projects/the-biggest-myths.html


Some of those jokes are very funny. I really liked the one about how systemd is modular, just in the same way that the Linux kernel is modular, because you can compile it with all different options.


I do not really understand what you're saying.

1) This is actually modular. I mean... those things are even called "modules".

2) Even if it wasn't modular, how would it matter? You're comparing to the Linux kernel. If that one isn't modular then according to you, systemd also isn't modular. Alright, so be it. Did you have any problems with missing modularity in the Linux kernel lately?


The main systemd binary (version 208 for x86_64 as shipped by Arch Linux) is 1020 KB. I get that the systemd package is really several binaries, but still, does the binary for PID 1 need to have that much in it?


That's about 1/8000th of the RAM in my laptop. Is 1020 KB really such a big deal?


The problem is that software complexity grows faster than linearly with code size (maybe not exponentially, but certainly superlinearly). And the article established that it's important to keep PID 1 simple.


systemd reminds me of 90's era Microsoft's "release a new service/api/whatever that is almost the same, but different enough to create incompatibility and lockin, include small addition or improvement to encourage adoption, make everything require it to force adoption by dissenters."

I'm not joking.


This feels like the Tanenbaum–Torvalds debate all over again.


Why not copy SMF?


systemd shares much of its design with launchd and SMF, the obvious exception being that it doesn't use SMF-style XML for service config. The same arguments against it would apply.
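
For comparison, a minimal service definition in each style; both fragments are hypothetical sketches for a placeholder daemon, "mydaemon". SMF describes services in XML manifests, while systemd uses ini-style unit files:

    <!-- SMF: fragment of an XML manifest for the hypothetical service -->
    <service name='site/mydaemon' type='service' version='1'>
      <exec_method type='method' name='start'
                   exec='/usr/sbin/mydaemon' timeout_seconds='60'/>
    </service>

    ; systemd: /etc/systemd/system/mydaemon.service
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

The syntax differs, but the model (declarative service descriptions, dependency ordering, restart-on-failure supervision) is largely the same.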

