At the risk of sounding heretical, I kind of find myself in the middle ground regarding systemd - I was initially highly skeptical of it, and I still think its Linux-centrism is a real problem that will make it harder to maintain software that runs both on Linux/systemd and on *BSD. The way it was pushed on distros was problematic, in my opinion.
But having used a couple of Linux systems running systemd - Raspbian Jessie and openSUSE - I have to admit it's not that bad. In practice - on laptops and desktop systems (assuming one counts the Pi as a desktop system; for my use case, I do) - I had no problems with it. Enabling and disabling services is a lot easier. I do not think it is as great as its proponents claim, but it's not as bad as some people think, either. Personally, I have come to appreciate journald, even though I still agree that binary logs are a bad idea. At least there is still the option of installing a syslog daemon.
I am also in the middle ground, but I have moved from the opposite direction. I used to like systemd a lot, especially for its simple unit files (compared to ugly System V shell scripts) and the fact that it could properly track processes and restart them on failure.
I have become a bit more skeptical, because most of the problems that I recently had seemed to be related to systemd. Including some networking problems, long boot delays because systemd decides to wait 90 seconds by default on some conditions that it considers to be errors, and problems such as having to restart systemd-logind manually because of some d-bus update [1]. Before the update, logging in via SSH would block for a long time.
The most annoying part is that some of the problems take quite a bit of work to debug due to the opaque nature of modern systemd/d-bus/...-based systems.
I think systemd would benefit from a slower adoption.
The network and boot issues that practically everyone has encountered are mostly due to incorrect default configurations. This shows that the maintainers are not ready to use systemd, as they don't yet fully understand it.
Like PulseAudio in its early years, systemd bothers a lot of people because it was pushed out before it was quite ready, and therefore breaks things that used to work. But also like PulseAudio, it solves a whole lot of problems for which the solutions were becoming increasingly hacky and unstable. I don't believe a slower roll-out would have helped in either case, because many of the issues were undetectable without a broad user base.
Anti-Lennart partisans would say here that both pieces of software are broken by design and leave in a huff. I sympathize with their aversion to complexity, but I'll take a complex init system and simple configuration over a simple init system with baroque configuration.
From a user's point of view there is little difference between software that crashes because it's garbage and software that crashes because it is in an early stage of development but is otherwise a good solution.
Lennart should write more robust software, and maintainers should do more QA when making this kind of system-wide change.
I agree very much with the points that you are making.
It would also help if small parts of a system would be replaced at a time. Replacing daemon/process supervision, login handling, logging, network configuration, etc. all at the same time in distributions that are used by millions of users is quite risky.
> The most annoying part is that some of the problems take quite a bit of work to debug due to the opaque nature of modern systemd/d-bus/...-based systems.
This is a reasonable criticism of systemd. The other points seem mostly to be a criticism of the various distro implementations that use systemd - the sort of thing that gets tidied up over time anyway.
The bigger question is whether an init should be a continuously developed project with rapidly increasing scope as now various distributions are on differing versions of systemd with widely varying compatibility and feature sets.
Generally agreed. I know that my priorities for what should be done in systemd are not going to match someone else's but there are also some clearly problematic things that just seem to go unaddressed no matter the scale (e.g. rebooting nspawn containers [1], problems with DNS/resolved [2]).
systemd can do a lot of really useful things but when I can't reliably reboot machines or struggle with resolving things using DNS...it's hard to be optimistic.
The whole systemd-for-containers thing seems like a massive case of "because we can". It also allowed them a better pitch towards devops and RHEL than "faster boot"...
Plenty of the "separate" components share core code at compile time. If they were truly separate they could be downloaded piecemeal and compiled independently.
It's kind of annoying reading people lecturing projects they clearly don't know the internals of on how much better their code would be if they did X.
This applies not just to your comment here but to a ton of comments that pop up on HN all the time. Why not in Go? Why not in Rust? Why not in React and node and electron and why don't you use my library that's still in alpha? It's not OOP. You're not using tabs. MIT is too permissive. GPL is too restrictive. And obviously, this should be a set of tiny separate libraries, how dare you work on free software with different ideas than mine.
I once found a project called Razor-qt. It was a desktop environment that included a bunch of interdependent binaries all under the same repo. I didn't like that very much. I joined it, worked on it, ended up leading the project and merging it with LXDE into LXQt. When the time came for the reorganization, I did push to create tiny components that "could be downloaded piecemeal and compiled independently". And we did end up going that route.
You know what I didn't do? I didn't go on HN and complain about a project I didn't know the internals of at the time.
Ah, but did you force others to use your project? People find that sort of thing annoying at best, infuriating at worst, and they will tend to act out in various ways (to include finding fault with the thing they never asked for that's being shoved down their throat).
All my life I've been forced to use things such as Windows, Flash, Skype and Photoshop. Forced because of lock-in strategies in a field with difficult alternatives.
"Forced" is not a term that applies to systemd. People say "forced" because their distro adopted it - guess why their distro adopted it? Because they researched it and found it was good. That is the common theme.
Nobody forced you. If you're an arch linux user, for example, one of the distros that switched the earliest, you'd have been more than welcome to discuss counter-points to systemd on the mailing list.
Of course, most of those that attempted doing so were ridiculed out because in free software, or at least on the Arch ML, there is very little tolerance for bullshit. Most (MOST, not all) of the arguments against systemd are in fact bullshit. Hell, even on HN I've seen people crap on systemd because "it's lennartware".
PS: To be clear, there is plenty wrong with systemd. It's far from perfect, it's still very young and I'm not particularly fond of its "all in one" tendencies myself. But most people in this thread have clearly zero ground to comment on its internals, yet many do.
1) You write "Most (MOST, not all) of the arguments against systemd are in fact bullshit".
2) And yet you (wrongly) write "there is plenty wrong with systemd"
To be in line with your 1) statement, you should have written 'there are only small things wrong with systemd', because everything - MOST = small, not 'plenty'.
Yet you don't; you write 'plenty'. Why? Because you know that the moment you write 'there is only a small amount wrong with systemd' you will be easily challenged.
So, from a logic perspective, you're wrong and you contradict yourself.
If you're going to try to use "the logic perspective" and be hostile for the sake of being hostile, at least have the decency of running your own post through that filter. "Everything - most = small" is a mathematically nonsensical statement.
Of course, you could also tone the hostility down before reading my post. Maybe that'd have helped you catch the fact that a flaw in systemd does not necessarily equal an argument against systemd, and vice versa.
6 years on Hacker News and this is what you've learned?
The first phrase of the first comment you offered on this thread was about how annoying in their ignorance you thought the comments of the people you were replying to were, so you should probably not lecture people on civility at length.
I was talking about hostility, not civility. Not that it matters, my original comment was neither hostile nor uncivil. Harsh I'll give you, though if you're going to go around a systemd thread spreading thinly-veiled attacks I expect you to have a thick enough skin to be called out on it.
Let's summarize your post:
1. ad hominem (6 yrs) - completely irrelevant to the discussion
2. you don't see the difference between disagreement and hostility - whether it's English or ego, I don't know
3. you accuse an interlocutor who disagrees with you (me) of hostility - disagreement is not hostility. In fact, anybody who thinks it is cannot openly and politely discuss topics.
4. you decided to skip the main argument (while I assume you understand what I meant), and 80% of your post is about attacking me as a person (or my math) instead of discussing the essence.
Try to discuss the essence, and don't wrongly accuse people who disagree with you of 'hostility' on top of your ad hominem. Still, take a rest and have a great 2017; I value your contributions to open source.
> you don't see the difference between disagreement and hostility
Yeah I do though. Consider that maybe you don't see why your post was extremely hostile.
I have very little patience for comments that refuse to do any charitable interpretation and instead end up being "but your post is wrong if taken literally!".
I have even less patience when such a poorly chosen tactic is applied incorrectly.
I've given you the benefit of the doubt and did reply to you (you've again disregarded it). I'll repeat it for clarity: a flaw in x does not necessarily equal an argument against x, and vice versa. Furthermore, "Everything - most = small" is a mathematically nonsensical statement.
As for my "6 years" comment, that was not an ad hominem, it was an observation. If you were a fresh account, I would have dismissed your comment as a troll. If you were a fairly new account, I would have thought that maybe you're not used to discussing things on an online social platform. But you have been here, communicating with others, for six years; so when you write with such a confrontational tone, you get a confrontational tone back.
OTOH, sharing code between different components developed under an umbrella project is not bad per se - if they require the functionality, re-implementing it from scratch for each component would not be a good idea, either. Duplicating and manually syncing the code also has its share of problems.
Is there any chance -- any chance at all -- that we cannot do the same dance of ineptly-phrased-objection followed by rebuttal-that-misses-the-point when it comes to this particular facet of the discussion?
Both sides of this are wrong. The people complaining about "monolithic" are groping in the dark for ideas such as low coupling. The people saying "but count the binaries!" are not addressing the questions of fully documented interfaces between said binaries, composability, and interoperability.
The "uselessd guy" explained this quite well, as have many other people. Can you please advance to not dancing this same old dance every time?
It's not only about whether you do it, it's also about HOW you do it. I once saw a slide from FOSDEM which listed lots of CLI tools that systemd replaced. Some of them seemed quite trivial and if they all depend on low-level systemd, I believe it really is bad design.
I recently wanted to use the "systemd-journal-gatewayd" component in Ubuntu 16.04, which ships with systemd v229. Yet, the feature I needed was only available in v231.
Although I'm only interested in a newer version of "systemd-journal-gatewayd" there is no way to upgrade just this one component, it seems.
I don't get this. How do you know that the v231 for that component will work with older versions of everything else? If you do, why not just compile it yourself? If you don't like that, why not upgrade everything to v231?
That is the point - you often can't just recompile the journald part, because it is tied to systemd interfaces, and you might not be able to easily upgrade systemd, because it is a production system you can't simply reboot like your home server. The only way to do that is backporting specific patches to the older version.
If it were a separate component, it would have a bunch of ifdefs covering several versions of systemd, possibly with some features disabled if older version doesn't support it.
However, that somewhat increases code complexity for developers and systemd devs refuse to do that.
That is the point of the 'monolithic' criticism - despite it being many binaries, you can't easily just build a single one and make it work.
I'm a bit more on the positive side. I actually really like systemd. When it was first introduced I spent 10-15 minutes learning how it operated and after that it has been so much better than the old init systems.
And the best part is that it's on all the major distros so I don't have to keep relearning the init system if I try something new.
My guess has been that the majority of users enjoy it and a few very vocal opponents have such hate for anything new that they will latch on to every small bug as if it's the world's end.
(Anecdotally, this is the first time I have written a comment about systemd as other discussions have been non-productive flame fests without substance).
If your effort to learn how systemd functions took you ten minutes it's unlikely you're involved enough in the internals of it to understand the debate.
> The way it was pushed on distros was problematic, in my opinion.
The technical term for this is "precommitment", and it comes from Tom Schelling. It's actually quite advantageous: once you're precommitted to a particular direction, you're emotionally invested in getting the most out of it. By precommitting you to systemd, the distros hope to get the community to leverage systemd's benefits.
I'd probably respect SystemD a lot more if it measured itself against a modern init system like Gentoo's OpenRC instead of pretending it's invented dependency handling.
I'm on Gentoo and remember the switch to OpenRC. I was nervous. But all the old stuff worked! And my custom init scripts? Changed only one or two lines to make them work and a few more edits to make them fully OpenRC like. Backwards compatible FTW!
And, now with OpenRC it still "just works". Fast, predictable boot with logs in normal places that I can easily debug, manage and modify.
Systemd isn't an init system, it's a service (a.k.a daemon) management daemon. Its primary purpose is to restart and diagnose failing daemons cleanly.
Systemd won for one simple reason: it's the only tool that accomplishes this task without bugs. We've been running daemontools for almost a decade in production, and it's a nightmare of bugs. Very glad to be finally switching to systemd.
> Its primary purpose is to restart and diagnose failing daemons cleanly.
If this is true, and speaking as a systemd user for close to five years now, it universally sucks at its primary purpose.
Specifically, whenever a service fails, I've lost count of the number of times systemd has barked out useless errors with 200 lines that boil down to "service has entered failed state". Whenever a systemd service fails, odds are better than even I have to spend two hours debugging why by enabling internal logging in that service, running in debug mode etc.
Like when I tried to switch to networkd, and had the wrong password for a wifi I was connecting to, networkd never told me this in any way I could find. Had to go back to the old solution (after an hour of pulling my hair out) before I realised the password was wrong.
Networkd does not handle WLAN authentication. This is the job of wpa_supplicant, which is the de facto standard on Linux in every setup, until maybe iwd from Intel takes over.
Yeah, but I'm sure you agree networkd should propagate errors from wpa_supplicant such that they reach the user, instead of piping them to /dev/null (not literally, but you get my point)?
systemd-networkd doesn't know about wpa_supplicant. You would start wpa_supplicant@wlan0.service and see wpa_supplicant errors there.
networkd only springs into action when wpa_supplicant succeeds in establishing the layer 2 connection and the interface becomes UP. I like the wpa_supplicant+networkd combo precisely because of this decoupling between network layers. One day, I'll get off my lazy ass and replace NetworkManager by wpa_supplicant+networkd on my notebook.
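For what it's worth, the setup is tiny. Roughly (the interface name, file name and paths here are placeholders for whatever your system uses):

    # /etc/systemd/network/25-wireless.network - layer 3, handled by networkd
    [Match]
    Name=wlan0

    [Network]
    DHCP=yes

    # layer 2: wpa_supplicant, reading /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
    systemctl enable --now wpa_supplicant@wlan0.service systemd-networkd.service

Each layer fails in its own unit, with its own logs, which is exactly the decoupling I mean.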
How can you tell when a programmer has graduated from "completely new at this" to "has some valuable experience"? That point comes when they stop assuming success.
Check for error and do something useful with the returned value.
Write tests yourself.
Fail gracefully.
Log status, so you know what was happening just before it failed.
Set reasonable timeouts on external processes.
Systemd is written from the perspective of a laptop user who will hand over the whole thing to a support tech when things go wrong. This is antithetical to the spirit of UNIX, which is not "write programs with one purpose that chain together well".
The spirit of UNIX is this: At any time, a user on the system may decide to become a developer or a sysadmin. The tools and information they need should be available.
Well, ifconfig (net-tools) actually is not pre-installed on Fedora > 19 or 20, basically because not everybody uses it and ip addr gives simpler output for most users.
systemd works somewhat ok for me, because I decided to stick to a very limited subset of the functionality it attempts to provide.
I've also used runit (which follows the daemontools model) as a service manager and I've never had an issue with that. I may just have been lucky though.
For me systemd fails because of some bugs I have repeatedly experienced:
- systemd stops reaping zombies for some reason; the OS's PID table becomes full and it's impossible to create new processes -> need to hard reset the machine
- systemd takes over the halt/reboot commands, which is ok when it works, I guess -- but for some reason I sometimes get "operation timed out" (likely because of some bug in systemd or dbus). At this point I have to hard reset the machine to get back to a usable OS.
Imagine if #2 happens to you on a remote machine. In my case, I had to call someone and ask them to reset the machine.
#1 is unthinkable for me, because it's the second thing init is supposed to do (the first one being bringing up the various services). I've never had this happen to me on older Linux or other Unices because let's face it, it's not that hard to do. At that point I honestly thought "how can I expect systemd to provide all the features it boasts when it can't do the easy things well ?".
I've also had daemons that systemd lost track of, apparently because of a wrong setting in the .service file. Now that's not a systemd bug, but it was very difficult to debug because I couldn't "trace" the process starting. On the other hand, with daemontools/runit it's quite simple: manually execute the ./run script and see where it fails. With classical init, run the /etc/init.d/service with sh -x and you see exactly where it fails.
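For comparison, a typical runit ./run script is only a few lines (the daemon name and flags here are made up), which is exactly why tracing it by hand is trivial:

    #!/bin/sh
    # send stderr to the same log pipe and run the daemon in the foreground;
    # runsv supervises whatever this execs into
    exec 2>&1
    exec mydaemon --foreground --config /etc/mydaemon.conf

If it misbehaves, run it directly (or under sh -x) and you see every step.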
That session manager, logind, is a particular mess.
Here you have a daemon that ties into PAM that tries to second guess the kernel regarding what constitutes a session.
Effectively systemd is becoming something akin to Android. It may be using the Linux kernel, but it is not the GNU/Linux we have grown familiar with over the years.
What bugs are you seeing with daemontools? I've been running it in production for 15 years now (shortly after djb released it) and it's been rock solid the entire time.
OpenRC is still more difficult to write init scripts for than systemd; and offers less flexibility in scheduling inits. For example, I can add a 'before' clause to a custom systemd service unit, and I don't need to modify the subsequent units to depend on my custom unit. This is especially useful when you want a custom forking or oneshot service to reliably run before a standard service shipped with a package in your distro.
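As a sketch (the unit and service names here are made up), that ordering is just a couple of lines:

    # /etc/systemd/system/prepare-storage.service
    [Unit]
    Description=Set up storage before the database starts
    Before=postgresql.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/prepare-storage

    [Install]
    WantedBy=multi-user.target

postgresql.service itself never has to be edited.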
In a similar vein, systemd targets are considerably more useful than sysv/OpenRC runlevels. A typical sysadmin can read a couple paragraphs of manpage and set up a new target (analogous to runlevel) in systemd; and it can be used for debugging or recovery in the same fashion that runlevels are in other init systems.
On top of this, there are powerful system services developed in tandem with systemd, in the same repository, which offer well-integrated standard alternatives to things which were superficially different on every distro only four years ago.
OpenRC's the only init system I've actually liked. It's one of the things I really miss from when I used Gentoo, and I don't know why more distros didn't adopt it years ago.
You should check out SMF, if you have a couple of hours to play with Solaris. It handles the things that systemd does (socket activation, process management/watching, etc.) but in an orthogonal way that makes sense.
I like openrc, I also like runit (for different reasons) but SMF is the gold standard for me and I recommend more people check it out. Even if it has XML elements :(
OpenRC was definitely my favorite init system before systemd existed. It definitely beat out the defaults of distributions that were using the Debian/Red Hat-style sysvinit.
> The justification for storing logs in a binary format was speed and performance, they are more easily indexed and faster to search.
Are there any benchmarks for this? I don't know why, and I may be doing things wrong, but journalctl -lu <unit> --since yesterday in our prod env takes a couple of seconds before I see any output, while a (z)grep on a date rotated, potentially compressed log file on a non-journald system is generally instantaneous.
On a side note, I really like machine parsable, human readable logs, and I've never had any speed or performance issues with it when dealing with volumes of hundreds of millions of log entries, though that may be because I don't know any better.
My question would be "is searching worth optimizing for anyway?". Log files are typically only read when things go wrong or by automated systems where a few seconds here or there doesn't make much of a difference.
Faster to search often means slower to write too, I'd prefer faster to write and slower to search if I had to choose.
Of course nobody benchmarked. Rarely are there programmers in OSS these days that benchmark anything. (not nearly as common as those who claim "speed")
There was a bug though where the journaling was so slow it caused the whole server to crash. (sorry, can't find it right now; something about them using mmaped files and the access pattern was.. or something, interesting stuff:)
Binary logs should be faster, in theory, and should/could be as robust as text logs (if not more), in theory. edit: note that GNU grep is insanely optimized.
There is the object of this discussion, systemd, which has always been about "starting things in parallel" to make it "boot faster", but there has never been a proper benchmark (the distrowatch guy said, when asked, that he didn't notice any difference; i didn't see a proper benchmark anywhere by anyone). The kdbus proponents that were active on lkml at the time of increased drama (including the devs) also never did a proper performance evaluation. It was Linus who had to actually run perf on dbus to (easily) show the real reason why it is slow... I also remember commenting en passant once or twice about bad and/or biased performance testing. So that is my "source", the relative lack of proper benchmarks on blogs when they talk about performance of a piece of software. At least this is how it seems to me, as i did not email a bunch of programmers to feel around (innuendo not intended, but funny so i'll leave it).
On the other hand some OSS projects (the various databases, for example) do have posts/writings on performance. There are even some really in depth ones.
I'm fine with things being bloated (or just plain cpu slow) as that careless coding lets people make some nice things (a big part of KDE, for example). But i am not fine with people who publicly claim that their code is fast(er) without doing proper benchmarking. Hell if it was hard i wouldn't say anything, but it is not (for C code at least).
Objectively, most programs i use (and most people use, even without knowing) can afford to be slow and eat much more memory than they should, because most programs just sit there doing nothing for the grand majority of their run time (FF and sublime.., why do you send those msgs that use up 1-4% cpu? i just don't understand). (honestly i'm worried that they are lowering(raising?) the performance baseline. but that's just predictions of future, voodoo, spirit animals showing the flow of chi or something)
PS Sry for the dumb rant. I like the topic of performance.
At first I didn't care much for the idea of having to learn yet another init system but as I had to write ansible automation stuff for services on Centos 7 it was kind of required that I have some basic understanding of systemd. I have to say now that I'm more familiar with it it has begun to grow on me.
There is something subjectively nice to me about running systemctl status and seeing a nice, clear picture of what the current state of services are on a system, being able to control the life cycle of a service including having systemd monitor and restart it, not having to deal with improperly written init scripts, etc.
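The unit behind that is tiny - a sketch with made-up names:

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My application

    [Service]
    ExecStart=/usr/local/bin/myapp --no-daemonize
    Restart=on-failure
    RestartSec=2

    [Install]
    WantedBy=multi-user.target

systemctl status myapp then shows the state, the process tree and the last few log lines in one place.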
The idea of systemd isn't bad. What's annoying is the friction involved in using it.
It's just a shade too ornate, a little too magical, in both cases only by a small degree, but it's an important one.
Creating a workable systemd init script is actually pleasant. Getting it running is easy. Checking for errors with status is nice, but searching the logs is annoying.
pm2 (https://github.com/Unitech/pm2) has a neat feature where you can watch logs easily, something that systemd should totally steal and pack into journalctl, like "systemctl logs sshd" shows it in real-time, an alias to the obnoxiously verbose "journalctl -u sshd -f"
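Until something like that lands, a one-line shell function gets most of the way there ("logs" here is my own made-up helper, not a real systemctl verb):

    logs() { journalctl -u "$1" -f; }

    logs sshd    # follow sshd's journal in real time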
I keep forgetting why I hate systemd (and as a FreeBSD user for all things, I have never used it) and then an article like this reminds me ... binary logs.
Seriously - just reading it is painful. binary logs
Meh. You get used to journalctl awfully quickly. For someone who never uses it, it's going to have some friction. You need to remember the command and how to spell it, which I'll admit isn't trivial compared to "just dig around in /var/log" style we're all used to. But you get over that hump in an hour.
For the straightforward "?!$! something happened just grep for it in the log file" it's no harder or slower. And you start noticing things like -f and --since that have no good analogs in the text log world, and it seems like it has value.
And every once in a while you need to pull magic like "show me EVERYTHING that was happening in the 3 seconds before this event, including kernel logs", and that's where it actually makes sense.
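Something along these lines, give or take the exact timestamps:

    # everything in the journal - all units plus kernel messages - in a 3-second window
    journalctl --since "2016-12-29 13:01:21" --until "2016-12-29 13:01:24"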
I hated it too when I started using it. I don't mind so much now.
> For someone who never uses it, it's going to have some friction.
Remember some people never used the previous init and logging systems. They're new to Linux or UNIX, they may be even young (and growing up with Systemd).
Not just in this particular case but more broadly, it is important to note there are going to be opponents who are used to X, Y, and Z and don't want to relearn or adapt. It is akin to the never-ending progressive vs conservative argument. Just how large is that group, though? Is there backwards compatibility? You see, I just use 3 Raspberry Pis and I am apparently still able to use /var/log.
journalctl -u <service name> is an amazing improvement on dealing with logs. You get the immediate output of exactly what service you're interested in.
Eh, like the other commenter said, you get used to it.
Everything in life is a compromise in one way or another. Using journalctl instead of less? That took me about 5 minutes to commit to memory, and then...that's it.
I don't even think twice anymore about using one command or another to read the contents of the log.
But you still need less, so now you've added one more command to remember for minimal benefit; less and grep are standard tools for working with everything, journalctl is a one-off.
> something that systemd should totally steal and pack into journalctl, like "systemctl logs sshd" shows it in real-time, an alias to the obnoxiously verbose "journalctl -u sshd -f"
I'm not sure how you could write this with a straight face. Systemd has friction, but writing init scripts doesn't? Are you kidding me? Have you ever had to write init scripts for production servers? Writing systemd unit files is entirely more straightforward and simple than any init script hackery. And how is "journalctl -u sshd -f" not straightforward? Systemd has its issues sure, but your comment is pure FUD.
"journalctl -u sshd -f" is not straightforward because it's a _redundant_ new command when I already know grep and tail. Small tools handling text is a wonderful way to make a system easy to learn and powerful to use, and we're just throwing that out.
You still have it good. I'm already flipping forward and backward between tail/less on the legacy systems, journalctl on the current systems, and `kubectl logs` on the new systems.
Writing init scripts is as hard as you want it to be.
and 99% of sysadmins are not writing their own, they're modifying the ones supplied by distro maintainers.
If you're a distro maintainer then init scripts are something you write, something you understand because you're handling the state of your distribution from them.
Of course you can just abstract away all the code to another bash script and have your init be 2 lines (which is what OpenBSD actually does).
> and 99% of sysadmins are not writing their own, they're modifying the ones supplied by distro maintainers.
and even here, systemd makes things easier, more straightforward and comprehensible by allowing you to modify system-provided units via drop-ins. No need to copy the whole unit; I still get updates to the original unit, as I'm supposed to, in a clean and well-defined fashion.
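A sketch with a made-up override: systemctl edit nginx.service opens a drop-in that holds only the delta,

    # /etc/systemd/system/nginx.service.d/override.conf
    [Service]
    Environment=SOME_FLAG=1

while the packaged unit under /lib/systemd/system/ stays untouched and keeps getting updated.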
A) Yes I have had to write init scripts for production servers, and I've had to write whatever the hell they were before we had sysvinit. It wasn't always pleasant. There were times when you'd just edit things like /etc/rc.d/network and hack in some more commands.
B) I'm talking specifically about systemd init files, or whatever they're called. Given that they initialize things, that they replace conventional init files, the term applies.
Russ Allbery's investigation [1] into alternative init systems, in the Debian's project to select the best one, also viewed this systemd property in a similar way -- a surprising and unexpected benefit.
Systemd's native understanding of cgroups, process hierarchy, and logging makes this view possible.
* it replaces a significant part of the OS and creates new APIs. It's not really a problem of systemd itself, but once other applications start depending on those APIs you need systemd to use them. This is especially bad for non-Linux systems, like the BSDs, which need to write tools that emulate such APIs. It doesn't help that Lennart is openly against any non-Linux system, so he won't make things easier
* it's buggy; this kind of reminds me of PulseAudio (also created by Lennart). Things will eventually improve, but it sucks on early versions of RedHat/CentOS 7.
For what it's worth, I don't think Lennart is against non-Linux systems; he just doesn't care. And as an avid OpenBSD user, as well as an avid Linux user, I can't blame him.
Linux has great APIs like cgroups, inotify (in my opinion considerably more useful than the equivalent use of kqueue), and others. OpenBSD is the other unixy system which I think offers something valuable and unique, and it has its own APIs to offer that.
I'm interested to hear how you got the impression that systemd is/was buggy. I have run systemd on a variety of systems, including pre-systemd-support Debian a few years ago; I was always shocked by how reliably it would bring up the system, consistently name the devices, and provide a convenient interface to the logs.
I have been running systemd consistently on my workstations since Archlinux shipped it in 2012, and I have never run into a bug or poorly-documented feature as long as I have been using it.
I think inotify is broken. If 'broken' is too strong a word, then it's at the very least misnamed.
It stands for inode notify, except there are edge cases where it doesn't actually report inode events. Consider this scenario:
cd /tmp
mkdir test
touch test/somefile
ln test/somefile linktothefile
inotifywait -m test
Basically, you're monitoring a directory that contains a single file. There's also a hard link to the file, but it's outside of the watched directory. Since it's a hard link, they have the same inode number (you can verify this with ls -i).
If you append data to the file using the "test/somefile" reference, inotify reports the events as expected. If you do the same through the hard link, you get nothing.
IMO this is wrong since you accessed the inode. inotify is more of a "report events on a file/directory" rather than a "report events on inode" mechanism.
OT: well, that problem is outlined in some articles about inotify. Probably most people don't care, since they only want to watch a directory and its subdirectories, and not edits made through hard links.
that's not a very charitable view. He simply does not get bogged down in debates about what constitutes the true "unix approach", and prefers to focus on building the best system that can be built using the facilities Linux has to offer.
This (creating new APIs) I feel is very much at the root of many people's dislike. But it's also exactly what was most important for advancing *nix.
Historically we've had very little for managing processes. Start stop restart is about all we get for public interfaces, and they all involve shelling out via system().
The entire rest of the computing world has gotten immensely rich and powerful and grown. It's grown because there have been APIs. OpenStack, AWS, Kubernetes - once other applications start depending on the API (as you say), things start getting really good and powerful. The API makes it usable, gives the thing having an API programmability.
Process management has not had a decent API until systemd. Now we can write programs that watch for units being started and watch for them failing. Everything, the whole world, suddenly has a consistent, well-integrated API. It's appalling to me that people would disfavor this, when the past has been a bunch of hand-jammed /etc/init.d/ scripts with varying capabilities and little introspectability.
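Even without touching D-Bus directly, that state is queryable in a uniform way. A crude polling sketch (the service name is a placeholder; a real consumer would subscribe to the org.freedesktop.systemd1 D-Bus API instead of polling):

    while true; do
        state=$(systemctl is-active myapp.service)
        if [ "$state" = "failed" ]; then
            echo "myapp.service entered failed state at $(date)"
        fi
        sleep 5
    done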
I agree that the lack of non-Linux support is absurd and unfortunate. There really can't be that much special sauce that is relied on. The Umass-PLASMA group has systemgo, which is an interesting alternative, imo. https://github.com/plasma-umass/systemgo
The best thing about systemd is that it's putting people behind alternative, systemd-free Linuxen. Devuan, Alpine, Gentoo, Void (and particularly the BSDs) are coming along nicely.
I can tolerate systemd on desktops as long as I don't have to deal with it. The moment it craps out with Java-esque error traces in binary logs I'll install Slackware (or is there a modern desktop Linux without systemd I'm not aware of?).
Other than for desktops with complex run-time dependencies (WLAN, USB devices, login sessions), what are the benefits of systemd for servers that warrant a massive, polarizing, tasteless monolith which puts you up for endless patches and reboots, especially in a container world? (Re-)starting and stopping a daemon isn't rocket science; it's like 10 lines of shell code.
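To be concrete about the kind of thing I mean (daemon name and paths are placeholders):

    #!/bin/sh
    # minimal start/stop wrapper - tracks the daemon with a pidfile
    PIDFILE=/var/run/mydaemon.pid
    case "$1" in
        start)
            /usr/local/sbin/mydaemon --config /etc/mydaemon.conf &
            echo $! > "$PIDFILE"
            ;;
        stop)
            kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
            ;;
        *)
            echo "usage: $0 {start|stop}" >&2
            exit 1
            ;;
    esac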
SystemD was so polarizing for me, I was a Fedora user and RHEL user for work- but it started consuming everything and giving really bizarre issues... I tried reaching out and explaining to people that it wasn't working in the way I expected or, asking them to point me towards the docs so I can at least learn how to use journalling properly so it doesn't hide issues from me.. and was met with some hostility.
When I asked to disable binary logging entirely because it kept getting corrupted and was opaque I was met with "Unlearn your old ways old man"-style responses and "You can just log to rsyslog too".. No, I want it disabled.. I don't have a desire to log twice..
It was then I realised I was at the mercy of systemd: they can push whatever and reject whatever, and I'm completely out of control - I cannot introspect their service manager effectively; it's non-deterministic. It just reeks of hubris from the maintainers. It does mostly the right thing for most people, and they compare it to sysvinit, which admittedly needed love (and was not a process manager, only a process starter).
So, I adopted *BSD on the server. And christ is it wonderful.
I still use Arch/SystemD on my desktop at home, because it's actually pretty useful on laptops/desktops. But I swore a vow never to manage a server with SystemD on it.
I'm still running RHEL6 at work; for the next OS, I'm pushing my very large corp to adopt FreeBSD as an alternative. That's a fairly large amount of money that Red Hat will lose, and I don't particularly feel bad about it.
If you're deep into Fedora/RHEL, maybe you can make sense of the fakesystemd situation [1]. Supposedly fakesystemd helps to provide at least a systemd-free Docker container. I'd definitely hate to see CentOS/RHEL go away (which I've come to appreciate as a predictable, if conservative, Linux distro in the past).
It's 10 lines of shell code to do it badly. Poor Man's Dæmon Supervisors written as shell scripts are inevitably flawed in one way or another. It is, however, easy to do with one of the many toolsets that have been around since the 1990s for doing this.
One of the exhibits in the systemd House of Horror results from people taking multiple Poor Man's Dæmon Supervisors written badly in shell script, nesting them inside one another, nesting the result inside another service manager, and then recommending against service management in toto because this madness goes badly wrong.
You certainly have a point re bad shell scripts. But imho advocating a wholesale service manager monolith like systemd isn't the answer (fallacy of the excluded middle and all).
Unix admins must come to learn what they're doing somehow. The way they learn it is by going one step after another, beginning with simplistic shell scripts controlling isolated functionalities, then improving them, etc. Do one thing and do it well, Unix philosophy, whatever. For novice Unix admins, systemd is too much of a black box, without any kind of didactic curriculum leading to it or away from it. Folks cannot grow into becoming senior Unix admins.
I've seen it first hand just recently, where a customer of mine went all-in (on puppet in this case). The "admins" were merely clicking around and doing trial-and-error kinds of things; they had absolutely no idea what they were doing, and when things went south they weren't able to even diagnose what was going on. They hated their job and were out the door at 5pm. However, they stayed late and were very eager to learn when I gave them an ad-hoc Unix command-line survival guide.
Systemd and its ilk is not how you get responsible and competent Unix admins that take pride in their work.
Because on servers, even the huge repository isn't that much of a gain. But running "apt-get upgrade" beats rebuilding a machine by orders of magnitude. Even more for rented VPS where I must either add an instance or get downtime.
I'll put my opinion in the middle ground with systemd as well. I like a lot about it, and I dislike a lot about it. I really think the best thing they could have done would have been to make it modular. If people could just turn off the "features" that they don't want, there wouldn't be so much bitching about it.
Instead they keep trying to shove everything into one giant pile, and don't understand why people get upset.
>the best thing they could have done would have been to make it modular.
They did; systemd is very modular in nature. The only things that aren't modular that spring to mind are the journal, and the fact that systemd the init requires dbus (sort of).
systemd the project has a little bit of everything that you need in an operating system but systemd the init is stripped down to being able to parse unit files, supervise services, and generate and walk the dependency graph from where you currently are to where you're aiming to be.
All of the random features that you always hear about are all individual, self-contained services. So for example, systemd the init knows nothing about the format of /etc/fstab or SysVInit scripts. It only understands how to work with unit files; the way this works is that there are separate, self-contained generators that parse /etc/fstab and all of the old init scripts and create unit files that only exist in a tmpfs directory.
If you don't want something like systemd-networkd then you don't have to have it. You can just compile systemd without it, turn it off at runtime, or just configure systemd to not start said service. You can take systemd and make it suitable for everything from embedded use, to servers, to a multiseat desktop.
You'll find that there's pretty much nothing you can't outright disable, except for journald. journald needs to be running, but you can turn off the binary logging and redirect everything to syslog.
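Concretely - and I'm paraphrasing the journald.conf options from memory, so check the man page - that means roughly:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=volatile       # keep the journal in /run only, nothing persistent on disk
    ForwardToSyslog=yes    # hand every message to the local syslog daemon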
systemd itself is a collection of system daemons, as well as small programs to interact with those daemons, and almost all of them are disabled by default. That's my experience on Arch, and they adhere strictly to upstream defaults.
If that weren't enough you could simply leave out the daemons you don't like at compile time.
Every time that people write that they tell other people who do know systemd that they do not know it. The journal cannot be turned off. Making it be stored in files in /run/log/journal/ instead of in files in /var/log/journal/ is not turning it off. It's making it non-persistent so that it doesn't last across system restarts, delegating the job of writing persistent logs to post-processing services that (nowadays) read the journal using its systemd-specific database access facilities. Ironically, making it not be stored in any files at all would actually prohibit the post-processing services from working, as they would have nothing to read and to process into their own formats.
>they tell other people who do know systemd that they do not know it.
I appreciate the concern, thanks.
>The journal cannot be turned off.
The persistent binary logging is turned off, which is what people bitch about. Obviously, many of systemd's monitoring features are tied to the journal, and as stated, journald still needs to be running - writing to a non-persistent journal for those purposes and forwarding logs (if so configured).
>delegating the job of writing persistent logs to post-processing services that (nowadays) read the journal using its systemd-specific database access facilities.
It's true that syslog-ng pulls messages from the journal, whereas syslog implementations that are not aware are provided with them over a compatibility socket, but this is a performance optimization, to reduce system overhead. Not really an important distinction.
The context was turning persistent logs off and switching to on-disk and persistent text logs with syslog. If you really wanted to nitpick: dbus and udev are totally non-optional.
The repository and build system are set up to allow you to disable almost everything (except core services like evdev, the "systemd" init process, journald, and D-Bus).
Distributions have just found it so pointless to strip it down that they've left it largely as one package. You're entirely welcome to use your distribution's packaging scripts to produce a reduced package, if one doesn't exist yet.
As far as I'm concerned, I'm happy that it's a single repository tracking consistent, interoperable versions of each of the systems in systemd. This means that there are few dependencies to track.
People get upset because they're too lazy to run the build system themselves. People who are unwilling to run the build system and read the mailinglist are unfit to ship anything short of the whole thing anyway. In my opinion, if these people are complaining, it is meaningless. They are not the sort of people who can figure out how to fit their system into 8MB of flash on a tiny SoC, so them complaining that the systemd package is more than 10 megs is like a child complaining that their bedsheets are too barbie and not buzz lightyear enough.
> core services like evdev, the "systemd" init process, journald, and D-Bus
Oh, so just most of the things that people hate about it. No biggie then. /sarcasm
> People get upset because they're too lazy to run the build system themselves. People who are unwilling to run the build system and read the mailinglist are unfit to ship anything short of the whole thing anyway. In my opinion, if these people are complaining, it is meaningless. They are not the sort of people who can figure out how to fit their system into 8MB of flash on a tiny SoC, so them complaining that the systemd package is more than 10 megs is like a child complaining that their bedsheets are too barbie and not buzz lightyear enough.
I can't say I've ever seen anyone complain about that. I have seen, however, people complain about broken logs, unreliable processes, butchered output, hacky fixes, obscure errors, etc.
And this is the attitude that I have an issue with. Systemd has a lot of good sides, but it also has a lot of warts. Pretending that people's legitimate concerns are nonsense is a waste of everyone's time, and is what makes people who have issues want to complain at every turn.
Personally, I love Systemd for Web development as it makes the job of managing Node.js projects a breeze. I simply pack a Systemd unit file into a project's repository as part of its multi-environment configuration and construct deployment tasks to copy, enable, then start the unit file.
1. Copy <unit.service> to the server, which looks like:
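Roughly along these lines (the paths, user and service names below are placeholders):

    [Unit]
    Description=My Node.js app
    After=network.target

    [Service]
    User=myapp
    WorkingDirectory=/srv/myapp
    Environment=NODE_ENV=production
    ExecStart=/usr/bin/node /srv/myapp/server.js
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

2. systemctl enable <unit.service>
3. systemctl start <unit.service>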
Systemd's ideas I'm all for, but it only hints at Linux's lack of clean abstraction power. sysvinit was full of redundancy; apparently BSD found a way to make a thin abstraction layer that keeps init files clean.
Bash isn't "good" at hinting proper abstractions, I rarely see talks about this, maybe gurus can see through the rain.
I keep seeing a place for a tiny, lambda-friendly intermediate layer... just so you can compose properties of your services like Clojure Ring middleware.
ps: the idea is that (retry 5) is a normal language function, not an opaque systemd built-in; you can write your own thing, and the most-used ones can go upstream. Hoping to avoid the sysvinit fragility and redundancy.
Shell and bash are actually excellent at this, but people don't like writing shell scripts. This is just process chaining. Take a look at DJB's or the more modern runit for init toolkits that compose.
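In that style, composition is just chain loading: each tool adjusts the process state and execs the next one. A sketch with daemontools-family tools (the daemon name and limit value are made up):

    #!/bin/sh
    # ./run - each program execs into the next, so the daemon ends up supervised
    exec 2>&1
    exec setuidgid www \
         envdir ./env \
         softlimit -m 67108864 \
         mydaemon --foreground

Your "(retry 5)" is then just another link in the chain, and anyone can write one.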
Doesn't it just repeat the command 5 times instead of retrying?
IMO Bash, with its multitude of annoying quoting and field splitting rules, many irrelevant features focused on interactive use, and error handling as an afterthought, is just the wrong choice for writing robust systems. It's too easy to make mistakes. And it still only works in the simplest cases, until somebody evil deliberately passes you a newline-delimited string or something with patterns that expand in an unexpected place, etc. Properly handling those cases will make your script an ugly mess. Actually, I find the mental burden when writing shell scripts very akin to programming in C.
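The classic example of what I mean - nothing exotic, just word splitting:

    file="quarterly report.txt"
    rm $file      # unquoted: tries to remove "quarterly" and "report.txt"
    rm "$file"    # quoted: removes the intended file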
> Bash, with its multitude of annoying quoting and field splitting rules
So don't use the Bourne Again shell. After all and to start with, if you live in the Debian or Ubuntu worlds, your van Smoorenburg rc scripts have not been using the Bourne Again shell for about a decade.
There's no reason at all that run programs need to be written in any shell script, let alone in Bourne shell script. Laurent Bercot publishes a tool named execline that takes the ideas of the old Thompson shell ("if" being an external command and so forth) to their logical conclusions, and which is far better suited to what's being discussed here. One can also write run programs in Perl or Python, or write them as nosh scripts.
> sysvinit was full of redundancy; apparently BSD found a way to make a thin abstraction layer to make init files clean.
They are completely different systems of init scripts. Just for starters, BSD does not have run levels, so the rcn.d directories and the maze of symlinks do not exist.
BSD init scripts themselves are much simpler mostly because there is a good system of helper functions which they all use, rather than every single script inventing its own wheel in sysvinit. Of course, this is just a discipline or convention, and sysvinit could be vastly improved if anybody was similarly industrious.
BSD init scripts also have completely different dependency management (PROVIDE and REQUIRE instead of numerical priorities). And the mechanism for disabling and enabling a script is via dead simple definitions in the single file /etc/rc.conf.
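For reference, a minimal FreeBSD-style rc.d script (daemon name is a placeholder) is little more than a few declarations on top of the shared rc.subr machinery:

    #!/bin/sh
    # PROVIDE: mydaemon
    # REQUIRE: NETWORKING

    . /etc/rc.subr

    name="mydaemon"
    rcvar="mydaemon_enable"
    command="/usr/local/sbin/mydaemon"

    load_rc_config $name
    run_rc_command "$1"

Enabling it is then one line in /etc/rc.conf: mydaemon_enable="YES".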
Why would I specify those in an init script? openssh has been doing The Right Thing(TM) for about two decades.
Also, openssh has protocol-specific (and secure by default) configuration options around connection lifecycle, etc. that no general purpose init system should try to replicate and then blindly apply to unrelated services.
OpenSSH is the unusual case. Most of the services I run don't do the right thing. That's why, for instance, Debian wrote the start-stop-daemon utility years before anyone ever dreamed of systemd.
I've been greatly enjoying systemd for the ability to write services and startup jobs in languages that don't have great libraries for doing all the right things, like Python and shell, because it gives me all those Right Things as effectively a library. I don't have to manage pid file handling and daemonization and restarts on my own; systemd will do it, and will do it well and Right. (I used to do this with start-stop-daemon, and it did it poorly and only okay.) It will also get out of the way of OpenSSH, which does it well and Right, and get in the way of the dozens of services that think they're doing it well and Right but aren't.
Easy things should be easy, and hard things should be possible. systemd supports both. If that's the extent of what runit can in fact do, it only supports the former. (SysV-style init, of course, just makes easy things hard and hard things confusing, but as everyone else is saying, it's very not the standard of comparison here.)
I feel like at some point, news aggregation sites will need moratoriums on this topic. Systemd is an architectural decision with very philosophical effects, so by its very nature it is very divisive. I'm, personally, not a fan at all of systemd, but am tired of seeing these stupid arguments everywhere all the time.
There are plenty of places to find competent summaries of both the technical and political arguments for and against systemd and we don't need yet another.
I don't feel they should. These are arguments I haven't heard before either and I'm glad to see more of these weird cases laid out.
I run Gentoo (OpenRC) and Void (Runit) on my own systems, and although I do like both of them, I find the total lack of alternatives to the systemd ecosystem troubling.
As a package maintainer, I do like being able to create rpm/deb files and only needing one standardized init script, so there are a few advantages to systemd, but not many. Its complexity makes it difficult to create drop-in replacements (work has stopped on uselessd and others).
There needs to be more community support for distros like Void and real alternatives. Articles like this encourage that kind of thinking.
Where systemd is complex, it's because the use case is complex. If you know so much about these use cases, and you think systemd is unnecessarily complex, why don't you do it yourself and see where that takes you?
If a forum can't handle a discussion of UNIX software, then it should probably stick to kitten gifs. The art of moderation and community management is to allow discussion while containing hostility.
Personally I feel that the sheer controversialness of systemd and its chief maintainer is a bug in and of itself, which the maintainers should address.
Why would you run the Exec command through systemd-escape? The help text says that's for the NAME of the unit... Am I missing something?
Also, I'm failing to think of a scripting purpose that requires you to place non-trivial bash directly in a systemd unit that couldn't be solved by writing out the script somewhere and just invoking the script in the unit.
> Now agreed, if your workflow demands that you embed a Bash while loop in a unit, you’re already in a bad place, but there are times where this is required for templating purposes.
It's right there in the post. I have indeed had to do something like this in the wild to run a Docker container with special needs.
I don't see how "templating purposes" answers the question. I deploy services on VMs that need to parameterized and I can parameterize via templating a written-to-disk shell script that is then just simply executed in the unit... or I can have more dynamic parameterization and use environment variables and Environment= lines in the unit. The latter solution means the substitution is effectively happening in the same place as if they were inlined parameters, so I can't imagine a scenario when it wouldn't be workable.
I agree, Upstart was great for most things and was simple to understand, though I ran into some difficult fork/exec issues for certain processes. I sometimes had to run a wrapper to use Upstart, but those seemed to be edge cases.
I'd prefer if we stuck with Upstart and improved on it, though it already seems like a distant dream.
The problem with systemd is not that it is "bad". There's a lot worse software out there. Systemd is relatively competently programmed. And it's even useful!
The problem with systemd has always been that it forces you to do everything the systemd way, and usually to use its tools. It goes against everything that has helped GNU/Linux become a great system: that it wasn't really one system. It was bits and pieces, and you could add them together however you wanted to get something you liked.
Systemd is the opposite. It is inflexible and clunky, monolithic and proprietary, binary and difficult. Everything has to be made to work with it, not vice versa. It doesn't follow any of the old conventions that made it simple to combine one tool with another. I can compare it to Windows, but I feel like that would be an insult to Windows' usability.
If this one single fact was different, everyone would love systemd, because it has plenty of useful features. But because it is designed specifically to please a single quirky user, people hate it.
It is indeed proprietary because it does indeed have an owner. But I use the word in the context of its design and implementation, which is basically that of the not so benevolent dictator, or the possessive or territorial.
> It is indeed proprietary because it does indeed have an owner.
You are using the term proprietary incorrectly, probably out of simple ignorance. The software does not have an "owner" who has power over the users; the LGPL means all users have complete control over the software.
Please read this carefully before spreading more misinformation:
> It is inflexible and clunky, monolithic and proprietary, binary and difficult. Everything has to be made to work with it, not vice versa. It doesn't follow any of the old conventions that made it simple to combine one tool with another.
Could you share a few specific examples? That hasn't been the case at all for me. I have worked with systemd and have had the opposite experience. I simply write textual config files describing how to launch the service, which can be done in a variety of ways. Could you share more about how you tried to use systemd?
I'd encourage you to read Russ Allbery's analysis of systemd [2], which includes a description of his experience converting one of his packages to use systemd. He remarks on the well-done integration and its compatibility:
* Integrated daemon status. This one caught me by surprise, since the
systemd journal was functionality that I expected to dislike. But I was
surprised at how well-implemented it is, and systemctl status blew me
away. I think any systems administrator who has tried to debug a
running service will be immediately struck by the differences between
upstart:
lbcd start/running, process 32294
and systemd:
lbcd.service - responder for load balancing
Loaded: loaded (/lib/systemd/system/lbcd.service; enabled)
Active: active (running) since Sun 2013-12-29 13:01:24 PST; 1h 11min ago
Docs: man:lbcd(8)
http://www.eyrie.org/~eagle/software/lbcd/
Main PID: 25290 (lbcd)
CGroup: name=systemd:/system/lbcd.service
└─25290 /usr/sbin/lbcd -f -l
Dec 29 13:01:24 wanderer systemd[1]: Starting responder for load balancing...
Dec 29 13:01:24 wanderer systemd[1]: Started responder for load balancing.
Dec 29 13:01:24 wanderer lbcd[25290]: ready to accept requests
Dec 29 13:01:43 wanderer lbcd[25290]: request from ::1 (version 3)
Both are clearly superior to sysvinit, which bails on the problem
entirely and forces reimplementation in every init script, but the
systemd approach takes this to another level. And this is not an easy
change for upstart. While some more data could be added, like the
command line taken from ps, the most useful addition in systemd is the
log summary. And that relies on the journal, which is a fundamental
design decision of systemd.
And yes, all of those log messages are also in the syslog files where
one would expect to find them. And systemd can also capture standard
output and standard error from daemons and drop that in the journal and
from there into syslog, which makes it much easier to uncover daemon
startup problems that resulted in complaints to standard error instead
of syslog. This cannot even be easily replaced with something that
might parse the syslog files, even given output forwarding to syslog
(something upstart currently doesn't have), since the journal will
continue to work properly even if all syslog messages are forwarded off
the host, stored in some other format, or stored in some other file.
systemd is agnostic to the underlying syslog implementation.
I think Russ's passage exhibits the opposite sentiment from the one you wrote. Russ says you can get the benefits of systemctl and journalctl if you want, but you can also use syslog in the old-fashioned way if you want. This has been my experience with systemd as well. I still have a habit of writing "service postfix restart", which still works but is not the systemd-idiomatic way. I don't even have the "correct" way memorized because the old way that I'm used to using works.
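For the record, both spellings work on a systemd box (using postfix only because it is the example above):

    service postfix restart              # legacy wrapper, still works
    systemctl restart postfix.service    # the systemd-idiomatic equivalent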
So what I'm asking is: have you tried to achieve something with systemd and found that you were forced to do it differently? I'd be curious to learn more about what those cases were. I'm not affiliated with the systemd project, but I have a general interest in seeing it improve.
(Please note that, barring conventions of some kind that span multiple tools, every tool naturally requires you to do things that tool's own way. If you are holding systemd to the standard that its job should be able to be performed by two different tools, each with the same interface, while supporting advanced features, then you should have other examples of this as well.)
As are most Bash init files under SysV init... At some point we should accept that the init system as a whole (including the configuration scripts for each daemon) _is_ complicated and often confusing. In this regard, writing something like SystemD to elegantly handle most of the use cases is actually a good idea.
Many comments seem to only compare systemd with SysV init. But there is a whole world of process supervisors out there, particularly in the daemontools[1] family.
systemd is a process supervisor plus additional features; sysv init is not a process supervisor (though the "init" program as configured in /etc/inittab is a supervisor). At a minimum, a process supervisor starts processes and restarts them should they exit.
Some modern daemontools-family systems with additional features beyond basic supervision that put them in the same category as systemd are nosh[2] and s6-rc[3]. These two systems (and all the daemontools family), in contrast to systemd, are not monolithic, but composed of multiple smaller programs.
Notably, they depend on the shell (any script or executable, but traditionally small shell scripts) to perform actions like setting up environment variables, switching to a less privileged user, redirecting output, etc. while systemd has created a large and growing set of directives[4] to accomplish these. This set of directives is one of my annoyances with systemd. As someone familiar with the shell, I find it annoying to learn yet another language (systemd directives) for expressing the same thing, though I recognize not everyone is familiar with shell.
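To illustrate the contrast (a rough sketch; "mydaemon" is a made-up service), a daemontools/runit-style run script might look like:

    #!/bin/sh
    # ./run script: the supervisor starts this and restarts it if it exits
    exec 2>&1                                        # send stderr to the logger
    exec chpst -u mydaemon env LANG=C /usr/sbin/mydaemon -f

while the directive-based equivalent in a systemd unit would be something like:

    [Service]
    User=mydaemon
    Environment=LANG=C
    StandardOutput=journal
    StandardError=journal
    ExecStart=/usr/sbin/mydaemon -f

Same outcome; the difference is whether you express it in shell or in systemd's directive language.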
You forget that sysvinit also required fun extras like supervisord to run simple executables, because it relied on applications daemonizing themselves.
Writing systemd unit files for my applications is a breeze compared to the shit I had to do for sysvinit - especially when I factor in apps I DIDN'T write that I need unit files for (TeamCity, etc.) - and there's less surprise with SELinux policy transitions (and unlike most people, setenforce 0 is not the first thing I run on a server; I work in healthcare and a solid MAC layer is important to me).
What do you mean which one? It's /bin/sh. /bin/sh has always been the shell used, because it's the only one guaranteed to be on all unixen. No system ships their init scripts written in ash, csh, tcsh, ksh, or anything besides /bin/sh (except Linux, which may have used /bin/bash, since /bin/sh was symlinked to /bin/bash)
You might have made your personal init scripts in some other shell (until you learned about design failures of csh), but I'm willing to bet your Unix distributor did not.
I call this to drop into an immediate repl at the point of invocation in a language that doesn't hate me (Ruby):
require 'pry'; binding.pry
This is, in fact, 2017. One might call it the current year. The stone knives and bearskins of shell scripting have not kept pace with, like... anything else in our industry. You might be comfortable with them, but that doesn't make it easy; it means you have frayed countless synapses learning it.
Unlike all of the above, runit isn't complicated. You write a tiny, often one-line script; runit executes it, and if it exits, runit waits a reasonable amount of time and restarts it.
Not only is it not complicated, but it runs beautifully on top of an existing init system. So now wherever I go I have one tiny script that will start my python based web services and similar. I run the same basic script on freebsd rc, ubuntu upstart and arch systemd boxes.
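As a sketch of what I mean (paths are made up), the entire runit service definition for a small Python web service can be a ./run script of a few lines:

    #!/bin/sh
    exec 2>&1                        # stderr goes to the runit logger
    exec python3 /srv/myapp/app.py   # runit restarts this if it ever exits

The same script works unchanged whether runit is supervising it on top of rc, Upstart, or systemd.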
Who said the Unix philosophy was the be-all, end-all? Unix doesn't even believe its own philosophy because it's too damn painful. Why does ls do sorting? Why does grep do -R recursive searching? How is that "Do one thing and do it well"?
Unix is just a collection of random decisions made by various people over the years. The Unix philosophy is really more like "I've always done it that way so don't you dare change it".
fork/exec is garbage. Signals are garbage. You basically can't make any API calls after fork or in a signal handler because threads came after. The interaction of fork and threads is bananas and many hacks have been required over the years to paper over the problems.
The layout of /usr/bin, /usr/sbin, /usr/local/bin, et al isn't a good design. It's dogshit but it was necessary because early Unix file systems couldn't span multiple volumes and early disk systems were small.
The C compilation model of separate header files is not a good design. People have retroactively determined some of the side-effects are not only good but The One True Way. In reality the design was a result of extremely limited RAM and slow CPUs. The preprocessor itself was never designed, just grafted on ad-hoc.
Unix file permissions are shit. Every unique combination of permissions requires a group. Owners are identified by simple integers. NFS legitimately gives people nightmares.
Let's not even get into everything is a file, except when it's not, and some files are more equal than others.
What about dependency hell? How's that "simple" model working out?
"Who said the Unix philosophy was the be-all, end-all?"
I didn't and I don't know anyone that has. I was comparing systemd and SysV.
"Unix is just a collection of random decisions made by various people over the years."
"Linux has never been about quality. There are so many parts of the system that are just these cheap little hacks, and it happens to run." - Theo de Raadt
"The layout of /usr/bin, /usr/sbin, /usr/local/bin"
I don't agree with you. Having the base system in one directory and user-installed binaries in another makes sense. I don't understand why most modern Linux distros install onto only one partition by default. You lose the ability to mount with flags, i.e. noexec, nosuid, nodev.
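As a sketch of what those flags buy you (device name is hypothetical), a separate /tmp partition can carry all three in /etc/fstab:

    # /etc/fstab
    /dev/sdb2  /tmp  ext4  defaults,nodev,nosuid,noexec  0  2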
NFS is horrible. But Unix/Linux is a multi-user operating system; why would you not want to have groups?
"The C compilation model of separate header files is not a good design"
This is probably just your lack of experience: you haven't worked on 50+ million LOC that takes 12 hours to compile, with nothing better available as an option. There is a reason these things exist.
I've worked on a project that combined C++ and C# in approximately equal amounts (say 2 MLOC each). The C++ compiled and linked in 20 minutes; the C# compiled and linked in under 2 minutes. Go figure.
I agree: the C and C++ compilation model is not a good design. It's a patch for not having a decent module system. Heck, even Borland Pascal in the '90s compiled faster than C++ does now on machines that are orders of magnitude faster.
For as long as C has existed, other languages have provided alternatives where symbol information is extracted from the main source files automatically by the compiler and optionally cached for the next build, or saved separately if you don't want to ship a library's source to its users, for example. In other words: this has been solved in better ways since the '70s.
Yes, yes, yes! Never has someone articulated what I believe so damn well. There is so much blind worship of tradition and heroes in the UNIX world, so glad to see someone else believes what I've always believed.
Most of the statements aren't really conducive to rebuttals because they are lacking substance.
But I can imagine what xenadu02 might have meant, if you like, and provide some counter arguments.
Signals aren't "garbage" (whatever that means).
Signals can call APIs (the set of async-signal-safe APIs). They can't call non-async-signal-safe APIs not because of threads, but because signals can interrupt a routine at any point (necessary for asynchronous notification of certain events which must be handled before the normal instruction control flow can be resumed) and that interrupted routine may not have been written to be reentrant.
This is true even without threads in the picture.
The fork/exec model is not "garbage". It is actually a fairly nice alternative to the "provide one API to start a child process and give it a large number of parameters for all possible situations". And you can call plenty of APIs between fork and exec in the child safely, just like from signal handlers.
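A minimal sketch of the pattern being defended here (illustrative only, error handling kept thin): fork, do a little safe setup in the child, then exec:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {                       /* child */
            int fd = open("/tmp/child.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd >= 0) { dup2(fd, STDOUT_FILENO); close(fd); }
            chdir("/");                       /* plain syscalls are fine between fork and exec */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);                       /* only reached if exec failed */
        }
        waitpid(pid, NULL, 0);                /* parent waits for the child */
        return 0;
    }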
I haven't dealt with dependency hell ever since shared libraries got sonames.
The rest of the comment doesn't list anything of substance. If you want rebuttals for "the file system layout is a bad design" or "the C compilation is a bad design" or anything else, provide some reasons why those are bad designs; some of those reasons may be valid criticism, and some may not be, but one can't just make vacuous statements like that and expect a reasonable discussion to follow.
Unix signals have been called garbage by some and "unfixable" by others [1]. The article [1] traces the evolution of signal handling from sigvec() and sigaction() to signalfd() -- a rocky history fraught with problems -- as part of a series on "Unfixable designs".
> So while signal handlers are perfectly workable for some of the early use cases (e.g. SIGSEGV) it seems that they were pushed beyond their competence very early, thus producing a broken design for which there have been repeated attempts at repair. While it may now be possible to write code that handles signal delivery reliably, it is still very easy to get it wrong. The replacement that we find in signalfd() promises to make event handling significantly easier and so more reliable.
Another critic makes the case that "signalfd is [also] useless" [2]:
> "UNIX[] signals are probably one of the worst parts of the UNIX API, and that’s a relatively high bar."
Signals came up recently on HN when someone remarked that not even memset() is signal-safe! [3]
All in all, working with signals correctly requires mastering a tremendous degree of complexity. Other platforms have provided simpler APIs, such as Structured Exception Handling (SEH) [4].
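For reference, the signalfd() approach mentioned above turns signal delivery into an ordinary read; a minimal Linux-only sketch:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void) {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGTERM);
        sigprocmask(SIG_BLOCK, &mask, NULL);        /* stop asynchronous delivery first */
        int sfd = signalfd(-1, &mask, 0);           /* get a file descriptor instead */
        struct signalfd_siginfo si;
        if (read(sfd, &si, sizeof si) == sizeof si) /* blocks until SIGTERM arrives */
            printf("got signal %u\n", si.ssi_signo);
        return 0;
    }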
I'm not going to defend everything xenadu02 said, but I think there were some points that resonated with me even though I agree they could be expressed more constructively.
> Why does ls do sorting? Why does grep do -R recursive searching? How is that "Do one thing and do it well"?
I think these are valid examples of how Unix itself fails to follow the "Unix philosophy" of "Do One Thing and Do It Well".
> The fork/exec model is not "garbage". It is actually a fairly nice alternative to the "provide one API to start a child process and give it a large number of parameters for all possible situations". And you can call plenty of APIs between fork and exec in the child safely, just like from signal handlers.
fork-exec complicates the implementation of threads (see atfork handlers). Rather than "a large number of parameters for all possible situations", another alternative would be to have (1) a call which given executable name and arguments returns an opaque handle (or file descriptor) representing the process to be started (2) a bunch of further calls to set attributes on that handle – new features could add new APIs acting on the handle, or an extensible API like ioctl could be used – if there is a handle to represent the current process, then you only need one API call to set it for the current process or a child to be started (3) finally, a start call which turns the process-to-be-started handle into a running process handle.
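To make that proposal concrete, here is a purely hypothetical interface sketched from the description above; nothing with exactly this shape exists (posix_spawn and FreeBSD's process descriptors are the nearest real relatives):

    #include <sys/types.h>

    /* hypothetical handle-based process-creation API (illustration only) */
    typedef struct proc proc_t;                               /* opaque not-yet-started process */
    proc_t *proc_new(const char *path, char *const argv[]);   /* (1) create the handle */
    int     proc_set_uid(proc_t *p, uid_t uid);               /* (2) attribute setters; new    */
    int     proc_set_stdout(proc_t *p, int fd);               /*     features = new setters    */
    int     proc_start(proc_t *p, pid_t *pid_out);            /* (3) only now does it run      */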
> Unix file permissions are shit
The user-group-other model is arguably too limiting. ACLs are a better idea, but then should you use POSIX ACLs or NFSv4 ACLs?
The distinction between primary group ID and supplementary group IDs is silly.
Why must every file have both a UID and a GID? For files owned by a single user, you end up creating a dummy group like "staff" or so on just to obey the rule that every file must have a GID. For shared files, e.g. project files, files generally end up owned by their creator, even though in a business sense they really belong to the project not to whoever created them. It would make more sense if the owner could be either a user or a group, and then also have zero or more non-owning groups associated with it.
In most cases permissions should only exist on the directory, and then automatically apply to any files in the directory. (In most cases every file in the same directory should have the same permission; Unix bases its design on the exception rather than the rule.) Of course, hard links make this impossible, but I think hard links were a mistake.
The executable permission bits actually do double duty as a file type indicator. That's rather ugly. If Unix had explicit file types (rather than just a naming convention of file extensions), then certain file types could be declared to be executable. Executable permission would then mean "you are allowed to execute this if it is an executable" instead of "this is an executable". Stuff like the +x vs +X distinction in chmod would never have been necessary.
> Let's not even get into everything is a file
Unix would have been much better if everything were a file descriptor, rather than having stuff like pid_t. Linux at least is evolving in this direction. Plan9 does it better. Even the WindowsNT philosophy of "everything is a handle" is better than the traditional Unix approach.
Regarding ACLs, I'd say that there's little choice here: it has to be NFSv4.
The rationale for this is that POSIX ACLs are firstly too simple to model what we need. And they are also non-standard (POSIX .1e ACLs are a DRAFT specification which was never ratified).
NFSv4 ACLs are vastly more featureful, already implemented to support NFSv4 in kernel, though not available in userspace AFAICT. On FreeBSD and other platforms using ZFS, they are also used by ZFS and are directly exposed to userspace, making rich ACLs usable as the default permissions model system-wide when running on ZFS. Linux, unfortunately, doesn't yet do any of this, even when using ZFS.
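For anyone unfamiliar, the POSIX draft ACLs on Linux look like this ("alice" is a placeholder user); NFSv4 ACLs as exposed on FreeBSD/ZFS use a much richer entry format on top of the same setfacl/getfacl tools:

    setfacl -m u:alice:rw- project.txt   # grant one extra user read/write
    getfacl project.txt                  # inspect the resulting ACL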
Programs have features because they are useful. Some features may not fit your view of what the philosophy should dictate, and that's OK. Having a recursive ls doesn't bother me for example.
Fork-and-exec isn't complicated by threads. Only fork-and-keep-executing is.
UNIX doesn't have a naming convention using file extensions.
Some of your points are valid opinions that are shared by others, but I don't know how much they have to do with the UNIX philosophy.
Some APIs can be improved, sure. And some are being improved. It takes time because of unix's success and most systems' desire to remain backward compatible (especially in source form).
> Fork-and-exec isn't complicated by threads. Only fork-and-keep-executing is.
Another issue is that fork-and-exec doesn't work well with languages with complicated runtimes, e.g. multithreaded garbage collection. It forces you to use a lower level language (such as C) to write all the code between fork and exec. An API based on process handles with a separate "start" call to convert a not-yet-started handle into a running process wouldn't have that deficiency.
Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model. What if the child process encounters an error between the fork and the exec? How does it notify the parent process of exactly what error it got (e.g. "setsid failed"?) You need some sort of IPC mechanism between the child and the parent. And such an IPC mechanism is prone to race conditions. By contrast, the process handle-based API I suggested doesn't have this problem since it doesn't introduce more concurrency into the system than is absolutely necessary.
> UNIX doesn't have a naming convention using file extensions.
Yes it does. The average Unix system is full of file extensions like .c, .h, .so, .html, etc. Even in Unix V1 file extensions were used as a convention - http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1
> Some of your points are valid opinions that are shared by others, but I don't know how much they have to do with the UNIX philosophy.
Is there a clear definition of what the "UNIX philosophy" is? Is any criticism of Unix systems as actually implemented a valid criticism of the "Unix philosophy"? Or do you want to define the "Unix philosophy" so vaguely as to put it beyond any possibility of criticism?
> Another issue is that fork-and-exec doesn't work well with languages with complicated runtimes
How are you doing fork-and-exec in a language with a large runtime? You are either using the language-provided APIs to do it, in which case they should document the restrictions on what you can call (and you should follow those), or you are dipping down into the C or system call layer to do your own fork-and-exec, in which case yeah, you still need to keep to the safe list of routines you can call between fork and exec, and you may have extra limitations since you are mucking around underneath your language's runtime (like you may have to unignore signals on your own, close file descriptors, etc). No surprises there.
> Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model.
I don't think it is. You just print an error to stderr (write() is safe to call), and you return a bad error code (fork has built-in IPC for error codes via wait() in the parent).
> Is there a clear definition of what the "UNIX philosophy" is?
I don't know, ask the person who first invoked that phrase in this thread. They claimed it meant "do one thing and do it well" to them, and then they complained about things that didn't seem related to me (like file extensions, what does that have to do with programs "doing one thing"?).
> > Another issue is that fork-and-exec doesn't work well with languages with complicated runtimes
> How are you doing fork-and-exec in a language with a large runtime? You are either using the language-provided APIs to do it, in which case they should document the restrictions on what you can call (and you should follow those), or you are dipping down into the C or system call layer to do your own fork-and-exec, in which case yeah, you still need to keep to the safe list of routines you can call between fork and exec, and you may have extra limitations since you are mucking around underneath your language's runtime (like you may have to unignore signals on your own, close file descriptors, etc). No surprises there.
Let's say I am using JNA – https://github.com/java-native-access/jna – under Java. It is safe to call posix_spawn from Java code using JNA. It is safe to call the Windows API equivalent (CreateProcess). It would be safe to call the handle/descriptor-based API I proposed. It is not safe to call fork. This is an undeniable deficiency of the fork-exec approach which competing approaches don't have. Furthermore, whatever compensating advantages fork-exec may have, the handle/descriptor-based API I proposed has the same advantages without this disadvantage.
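For comparison, a minimal posix_spawn sketch in C, which is the kind of call that is safe to reach through a binding layer:

    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };
        /* no fork in the caller: the child is created in one step */
        int rc = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawnp: %s\n", strerror(rc));
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }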
> > Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model.
> I don't think it is. You just print an error to stderr (write() is safe to call), and you return a bad error code (fork has built-in IPC for error codes via wait() in the parent).
But that isn't robust. How can the parent process reliably distinguish output sent by the child process prior to the exec from output sent by the child process post the exec? Likewise, how can the parent process reliably distinguish an error return value from the child process prior to the exec from an error return value from the exec'd program? It can't.
For truly robust error handling, you'd actually need to do something like this: (1) have a pipe between parent and child process with FD_CLOEXEC set on the child side; (2) the child sends the parent a message "I'm about to exec" before calling exec; (3) the child sends the parent a message saying "exec failed with errno=.." if the exec call fails; (4) if the exec call succeeds, the child process will close its end of the pipe without sending any message post "I'm about to exec". This is my point, actually robustly handling errors in the fork-exec model is quite complex. In a handle/descriptor based API it would be much simpler.
(And the above approach using a pipe isn't perfectly robust – what if the child process crashes for some reason between sending the "I'm about to exec" message and actually calling exec()? It is very difficult for the parent process to reliably distinguish that scenario from some failure in the program being exec()'d.)
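A sketch of the close-on-exec pipe trick in its simpler, single-message form (a made-up helper, error handling trimmed):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* returns child pid, or -1 with errno set if pipe/fork/exec failed */
    pid_t spawn_checked(const char *path, char *const argv[]) {
        int pfd[2];
        if (pipe(pfd) < 0) return -1;
        fcntl(pfd[1], F_SETFD, FD_CLOEXEC);          /* write end vanishes on successful exec */
        pid_t pid = fork();
        if (pid < 0) { close(pfd[0]); close(pfd[1]); return -1; }
        if (pid == 0) {                              /* child */
            close(pfd[0]);
            execvp(path, argv);
            int err = errno;                         /* exec failed: pass errno back */
            write(pfd[1], &err, sizeof err);
            _exit(127);
        }
        close(pfd[1]);                               /* parent */
        int err = 0;
        ssize_t n = read(pfd[0], &err, sizeof err);  /* 0 bytes read == exec succeeded */
        close(pfd[0]);
        if (n > 0) { errno = err; return -1; }
        return pid;
    }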
> Let's say I am using JNA. [...] It is not safe to call fork.
Are you calling fork() from Java, from C, or using the system call number?
Because I'd agree calling it from Java might be unsafe (depends on how Java and JNA interact), but I believe calling it from C or the system call is perfectly fine. And this is in line with what I've written previously.
> But that isn't robust.
It's not supposed to be robust in the way you are describing.
The fork-exec model is low level. It is supposed to be low level. Doing high level things with it is supposed to take some work by the application. That's not a deficiency.
If you build too many things into the low level code, you run into trouble because now you've got 10x as many ways to fail (building your pipes, writing your error messages, marshalling error state, cleaning up, you name it).
Also, some programs will want to do some of those higher level things differently, so instead of baking them into the API and having tons of parameters and paying for some of that overhead (like creating a pipe and writing error messages to the parent for every single fork and exec) you only do that when you want it.
The Windows NT model built on operating-systems design thinking from the 1980s, which took far too many years to trickle back into the other operating systems whose designs that thinking had been studying.
However, FreeBSD has had process descriptors since roughly 2010. They have the slightly odd semantics of terminating processes when all descriptors to them are closed. But they can be used as descriptors with kqueue() and the like.
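Roughly, FreeBSD's interface looks like this (a sketch from memory, FreeBSD-specific, so take the details with a grain of salt):

    #include <sys/procdesc.h>
    #include <signal.h>
    #include <unistd.h>

    int main(void) {
        int pd;
        pid_t pid = pdfork(&pd, 0);   /* like fork(), but also yields a process descriptor */
        if (pid == 0) {               /* child */
            pause();
            _exit(0);
        }
        /* the descriptor can be watched with kqueue(); by default the child is
           terminated when the last descriptor referring to it is closed */
        pdkill(pd, SIGTERM);
        close(pd);
        return 0;
    }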
I think systemd needs to do most of the things it does. Russ Allbery's analysis of systemd [1], written as part of Debian's evaluation of whether to switch to systemd, explains the benefits. Journal integration is something that Russ was also skeptical about, until he realized its value:
* Integrated daemon status. This one caught me by surprise, since the
systemd journal was functionality that I expected to dislike. But I was
surprised at how well-implemented it is, and systemctl status blew me
away. I think any systems administrator who has tried to debug a
running service will be immediately struck by the differences between
upstart:
lbcd start/running, process 32294
and systemd:
lbcd.service - responder for load balancing
Loaded: loaded (/lib/systemd/system/lbcd.service; enabled)
Active: active (running) since Sun 2013-12-29 13:01:24 PST; 1h 11min ago
Docs: man:lbcd(8)
http://www.eyrie.org/~eagle/software/lbcd/
Main PID: 25290 (lbcd)
CGroup: name=systemd:/system/lbcd.service
└─25290 /usr/sbin/lbcd -f -l
Dec 29 13:01:24 wanderer systemd[1]: Starting responder for load balancing...
Dec 29 13:01:24 wanderer systemd[1]: Started responder for load balancing.
Dec 29 13:01:24 wanderer lbcd[25290]: ready to accept requests
Dec 29 13:01:43 wanderer lbcd[25290]: request from ::1 (version 3)
Both are clearly superior to sysvinit, which bails on the problem
entirely and forces reimplementation in every init script, but the
systemd approach takes this to another level. And this is not an easy
change for upstart. While some more data could be added, like the
command line taken from ps, the most useful addition in systemd is the
log summary. And that relies on the journal, which is a fundamental
design decision of systemd.
And yes, all of those log messages are also in the syslog files where
one would expect to find them. And systemd can also capture standard
output and standard error from daemons and drop that in the journal and
from there into syslog, which makes it much easier to uncover daemon
startup problems that resulted in complaints to standard error instead
of syslog. This cannot even be easily replaced with something that
might parse the syslog files, even given output forwarding to syslog
(something upstart currently doesn't have), since the journal will
continue to work properly even if all syslog messages are forwarded off
the host, stored in some other format, or stored in some other file.
systemd is agnostic to the underlying syslog implementation.
I wrote another comment recently [2] to explain why I value systemd's approach and appreciate its declarative style. I can launch my service at the appropriate time during boot with configuration as simple as:
[Unit]
Description=Demo service
[Service]
Type=forking
ExecStart=/usr/sbin/my-daemon
Now let's say that I didn't author this daemon, but I'd like to run it with a private network, private temp folder, or a private /dev namespace. Or perhaps the daemon needs to run as root, but I want to drop all capabilities it doesn't need. It's as simple as adding these lines to the service's configuration:
PrivateTmp=yes
PrivateDevices=yes
PrivateNetwork=yes
The fact that systemd supports these configuration options means that there's a simple and standard way to employ them with any service. The service itself doesn't need to support them, and needn't complicate its own daemonization logic to do so correctly. Indeed, I don't need to trust the service to daemonize or drop capabilities, since I can tell the init system to do that before launching the service.
I can drop capabilities with CapabilityBoundingSet=, or limit resource usage with CPUSchedulingPriority=, IOSchedulingPriority=, etc. I could even tell systemd to open the listening socket for me so the service doesn't need CAP_NET_BIND_SERVICE! Moving these options into the init system makes a ton of sense, because it gives administrators the ability to employ these features from outside applications, not just by enabling them within applications that bother to explicitly support them via command line arguments. Systemd better encourages the principle of least privilege: if a system daemon does not need the ability to "ptrace" other processes, or bind to ports <1024, then as the administrator I can take those away with CapabilityBoundingSet= in the unit file. Chrooting the service is as easy as RootDirectory=. This is a huge step forward compared to the world where every service must be relied upon to expose these settings, and must be trusted to implement them correctly.
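Concretely, the kind of stanza I mean looks something like this (my-daemon is a placeholder; each directive here is ordinary systemd.exec/systemd.service configuration):

    [Service]
    ExecStart=/usr/sbin/my-daemon
    # strip everything but the one capability the daemon actually needs
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    NoNewPrivileges=yes
    PrivateTmp=yes
    PrivateDevices=yes
    ProtectSystem=full
    IOSchedulingPriority=7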
I thought that was the part of systemd that was universally liked? The best arguments against systemd seem to be that everything becomes implemented in or tightly bound to it.
Pid 1 runs its own DNS server now. It also launders kernel calls for non-setuid xorg (breaking rootless x on non-systemd boxes, since the kernel can't be bothered to consistently check process uid or gids, apparently). In turn, that means it must have some baroque authentication subsystem too. There is no way you need that in init. The amount of ancillary damage systemd causes far outweighs any possible benefits there are to improving init.
Also, I have never seen a systemd box emit log lines like that for a failed service. It invariably points at some useless logfile with obscure systemd messages in it instead of the stderr of the failed process. This is on clean ubuntu and debian installs. Maybe it is user error, but I doubt it. (Though there is no command line in the examples...)
Anyway, I'm happy to cleanse with fire instead of RTFM at this point. On a related note, I just learned the solaris init system and started using openbsd's again.
I prefer them both to systemd. They are at opposite ends of a spectrum. The openbsd approach is well curated shell scripts. I think systemd was heavily inspired by the solaris thing.
> PrivateTmp=yes
> PrivateUsers=yes
> PrivateNetwork=yes
>The fact that systemd supports these configuration options means that there's a simple and standard way to employ them with any service.
What exactly does 'PrivateUsers' do? What uid do I have? When I write that uid in a db, what value does it keep? Between invocations, does the uid change or is it per unit? If a file is owned by a private uid, what do other processes on the system see? Is PrivateUsers for this unit file only, for the unit files in this group of unit files, across the entire system, across the entire cluster? If I want two different programs to share this PrivateUsers concept, how do I do that?
It turns out that gluing random shit to the side of a monolith gives you the illusion of convenience, but since the monolith will not do that thing well -- for example, identity management -- you will end up with some programs that adopt the half-assed solution, and some programs that are forced to do things a different way because their use case is complex. Now you have two problems.
PrivateUsers= is documented in the manual page for systemd.exec [1]. I hope the section I linked will answer your question, but for the sake of simplicity I edited my comment to use PrivateDevices= as the example instead.
You missed the point completely, I'm afraid. The point was that it's a leaky abstraction composed of half implemented concepts that devs have to add to their brain. It does not replace the existing functionality or improve it.
Respectfully, actually, I think you missed the point. Linux itself offers a number of powerful features such as namespaces to isolate programs, capabilities that can be dropped, etc., including the User Namespace feature we're discussing presently [1].
Systemd's job and goal is to provide a simple configuration file format that makes it easy to enable these features with installed system daemons.
You may be overlooking the parameters supported by systemd and their benefits. Admins get a single way to manage their services and dependencies, one that works consistently across all services. These features can be enabled even if the services were not designed for them (e.g., chroot). With systemd you can apply these settings from the outside to any service, and that's a big advance that is difficult to achieve otherwise.
> The point was that it's a leaky abstraction composed of half implemented concepts
I am unclear what part you want to criticize. The Linux kernel is what provides the User Namespace feature. Systemd helps users take advantage of it. What part do you consider half-implemented or a leaky abstraction?
Please also consider whether you might have the wrong mental model for the feature or its intended use. In particular, the documentation for PrivateUsers= says: "This is useful to securely detach the user and group databases used by the unit from the rest of the system, and thus to create an effective sandbox environment." The usage you had in mind when you wrote your comment may not be compatible with the purpose of PrivateUsers=. I recommend reading up on the systemd.exec parameters [1] before criticizing. PrivateUsers= is intended for scenarios like transient sandboxed environments, so I'd suggest we discuss a simpler example like PrivateDevices= instead.
With all due respect, I'm afraid you're addressing a series of unrelated points and indulging, hopefully unintentionally, in a rhetorical smokescreen, which includes editing your original post to hide an important topic.
I'm aware of what systemd claims to be. I'm aware of the benefits that its fans claim it, and it alone, has.
My criticism is not with the Linux kernel (!?!) or user namespacing as a concept. My criticism is that systemd takes all of the rich complexity of user namespacing and, in response, adds the flag 'PrivateUsers=yes' -- a boolean. That's not what user namespacing is for and now we have two problems: systemd, which has no business making that decision and has done it the wrong way, and the continuing need, which has not been solved by a boolean flag, for daemons to have competent, complex user namespaces. Now devs have to know both ways: the half-assed way, and the real way, instead of just having a tool that gives them the real way.
That's what we in the software design business call a shitty design that would make Guy Fieri blush. But I guess we're all in Flavortown now.
Systemd has a lot of attention and people working on it and it will eventually become good enough for the vast majority.
But there were questionable tie-ins with various pieces like udev, consolekit and even gnome that allowed Systemd to become the de facto init. The call for a kernel bus promotes a similar lock-in with Systemd, and this makes the use or development of alternatives and choice difficult.
There are things like predictable network names, which are useful for 1% of users and are anything but predictable. Binary logging makes sense for the security industry Redhat serves but again has no use for the other 99%, who have to put up with it anyway. There is a pattern of forcing things onto everyone that make sense only for a tiny minority.
The big problem is open source funding. No one is interested in just supporting projects they benefit from. Acquisitions or hiring developers put these projects and developers under the control of companies like Redhat. Redhat has become a cathedral and a cathedral by sheer size and nature is always interested in securing and furthering its own influence and interests. When you allow such forces to become too powerful they will subsume the public interest to their interest.
The sad thing is, you do not really have a second option. Yes, I know some distros say they have alternatives, but SystemD has become the "preferred" init system nearly everywhere now.
I just hope Debian gets rid of SystemD and returns to something else. I know I'm biased; I just failed to find a reason to love SystemD, and I tried a few times.
I've found that devuan is a joy to use on my laptop.
I'm also rotating BSDs and open source solaris variants on to the home network machines. (Switching to solaris to get rid of systemd would be overkill; I switched to joyent triton for better containers and zfs...)
More and more, systemd is becoming symptomatic of a deeper divide within the Linux "community".
The split being between those that embraced Linux for being a free, in both senses, _nix unburdened by AT&T and running on commodity hardware, and those that got to know it after the dot-com crash as the L in LAMP.
The former cares for Linux as a _nix, the latter could not care less about _nix and may see it as a vestigial appendage that should have been amputated long ago.
Let me just say that I disagree: I am squarely in the former camp – I started with SunOS 4 in 1992, learned to use the Internet before the WWW, and am a long-time GNU advocate, and I am also a professional system administrator. I like systemd just fine. In fact, I started using it at home and at work just as soon as the package became available in Debian 7 (wheezy).
I think, however, that you have a point in that some people ascribe some mystical properties to the design of _nix and _nix-like systems. I have never done this, perhaps because I remember the early days of OS/platform wars (Amiga or Atari? PC or console? Unix or VMS?), first on BBSes, then on Usenet. I also, early on, read the Unix-Haters Handbook, and I found it enlightening, even though it is not always spot-on.
I'm not so sure if it really has that much to do with heritage. A lot of us like the design principles you can see in Unix and that are pretty much absent in systems like Windows NT. They very much reflect the conclusions I came to after over fifteen years of building and debugging systems.
But, yes, I have the impression that a lot of Linux users would just as gladly use a modern BeOS or a ReactOS, regardless of the underlying design.
And a lot of systemd criticism comes from people who are just averse to any kind of change and maybe long for the bygone days of HP-UX and Irix.
I've never particularly liked init systems that restart jobs when they die. Normally, I don't want daemons that crash to restart. They should die, be caught by monitoring, and the server bypassed. I would accept a single restart, but after that there's clearly a problem, and the system should fail rather than restarting the process again and again and again.
As a single user with a number of personal/hobbyist machines, I actually find systemd annoying to work with for precisely this reason. If a daemon is misconfigured and fails to launch several times in a row, it inevitably triggers the systemd "too many retries" error, after which systemd will refuse to start the daemon until some timeout has expired or the counter is cleared. This makes troubleshooting more difficult and frustrating.
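For anyone hitting this, the counter can be cleared by hand, and the threshold itself is configurable (the unit name is a placeholder):

    systemctl reset-failed mydaemon.service   # clears the rate-limited/failed state immediately
    systemctl start mydaemon.service
    # the threshold comes from StartLimitBurst= / StartLimitIntervalSec=
    # (StartLimitInterval= on older systemd versions)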
Funnily enough, just dealt with a bug caused by this at work. The small PC would boot before the cellular modem had an Internet connection. Services that communicate via MQTT would immediately attempt to reach the broker, and throw an exception. My quick fix was just to add "RestartSec=5" so that the faulted state would never be entered.
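For reference, the relevant stanza looks something like this (a sketch; it assumes Restart= is already in use so the retries are merely spaced out):

    [Service]
    Restart=on-failure
    RestartSec=5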
One of the things I like about systemd is that you can get a graph of the boot timeline, along with the critical path, via the following:

    systemd-analyze plot > boot.svg
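A few related subcommands are handy for the same purpose:

    systemd-analyze                   # overall startup time summary
    systemd-analyze blame             # per-unit startup times, slowest first
    systemd-analyze critical-chain    # the time-critical chain of units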
Most people I hear having problems with systemd are on either Redhat or Debian - not realising that both mangle systemd badly. RHEL took in systemd way too early and is missing a lot of needed functionality. They also have an ungodly amount of patches on top - so many you could argue it should be named redhatd instead. Debian just chooses the worst possible middle position due to politics: all of the disadvantages of systemd, but not really any of the great stuff, as its systemd units often just start shell scripts for compatibility... If you want to try systemd in all its glory, try Arch or something downstream from it... I for one think that systemd has made my admin life so much better.
I am running systemd on Gentoo (like Sabayon). While some of the features are nice, it is a royal pain to upgrade due to intertwined deps and forced restarts.
And it doesn't do anything OpenRC couldn't do; in fact its journald is a pain that had to be worked around. OpenRC's way of doing socket activation and dbus activation also works slightly better (in case something crashes). It does parallelism just as well, and service dependencies too.
Gentoo does not mangle anything related to systemd unlike the mentioned distributions.
Yeah, Gentoo is quite nice in that regard, but isn't it still focussed on OpenRC? I was a Gentoo user for ten-plus years before I switched to Arch (mostly because I got fed up with compiling).
(from tfa): "Now, let us turn our attention to the benefits that systemd brings us. I believe that these are the reasons that all Linux distributions have adopted systemd."
- Handwaves away real criticisms and instead tries to look objective by presenting a trivial issue
- Has no idea about the advances in sysvinit over the last decade that added support for parallel boot and dependency relations (and which were supported natively on a mainstream distribution)
- Totally ignores the other "modern" and widely used init system and instead compares systemd with the sysvinit of the '80s (which no mainstream distro used in that form)
- Overrates process supervision features that were already just as easy to use with supervision suites like runit (IMHO even easier)
Meh, what did I expect? Bandwagon is going full speed.