But having used a couple of Linux systems running systemd - Raspbian Jessie and openSUSE - I have to admit it's not that bad. In practice - on laptops and desktop systems (assuming one counts the Pi as a desktop system; for my use case, I do) - I had no problems with it. Enabling and disabling services is a lot easier. I do not think it is as great as its proponents claim, but it's not as bad as some people think, either. Personally, I have come to appreciate journald, even though I still agree that binary logs are a bad idea. At least there is still the option of installing a syslog daemon.
I have become a bit more skeptical, because most of the problems that I recently had seemed to be related to systemd: networking problems, long boot delays because systemd decides to wait 90 seconds by default on some conditions that it considers errors, and problems such as having to restart systemd-logind manually because of some d-bus update. Before the update, logging in via SSH blocked for a long time.
The most annoying part is that some of the problems take quite a bit of work to debug due to the opaque nature of modern systemd/d-bus/...-based systems.
The network and boot issues that practically everyone has encountered are mostly due to incorrect default configurations. This shows that the maintainers are not ready to use systemd, as they don't yet fully understand it.
Anti-Lennart partisans would say here that both pieces of software are broken by design and leave in a huff. I sympathize with their aversion to complexity, but I'll take a complex init system and simple configuration over a simple init system with baroque configuration.
Lennart should write more robust software, and maintainers should do more QA when making this type of system-wide change.
It would also help if small parts of a system would be replaced at a time. Replacing daemon/process supervision, login handling, logging, network configuration, etc. all at the same time in distributions that are used by millions of users is quite risky.
This is a reasonable criticism of systemd. The other points seem mostly to be a criticism of the various distro implementations that use systemd - the sort of thing that gets tidied up over time anyway.
systemd can do a lot of really useful things but when I can't reliably reboot machines or struggle with resolving things using DNS...it's hard to be optimistic.
Damn it, the LFS team basically adopted eudev, of Gentoo fame, because extracting udev from systemd required manual intervention over and over.
To be clear, they're all under the same umbrella project, but separate components and not everything is in PID 1. People seem to be confused by that.
This applies not just to your comment here but to a ton of comments that pop up on HN all the time. Why not in Go? Why not in Rust? Why not in React and node and electron and why don't you use my library that's still in alpha? It's not OOP. You're not using tabs. MIT is too permissive. GPL is too restrictive. And obviously, this should be a set of tiny separate libraries, how dare you work on free software with different ideas than mine.
I once found a project called Razor-qt. It was a desktop environment that included a bunch of interdependent binaries all under the same repo. I didn't like that very much. I joined it, worked on it, ended up leading the project and merging it with LXDE into LXQt. When the time came for the reorganization, I did push to create tiny components that "could be downloaded piecemeal and compiled independently". And we did end up going that route.
You know what I didn't do? I didn't go on HN and complain about a project I didn't know the internals of at the time.
But hey, maybe you know better. ¯\_(ツ)_/¯
"Forced" is not a term that applies to systemd. People say "forced" because their distro adopted it - guess why their distro adopted it? Because they researched it and found it was good. That is the common theme.
Nobody forced you. If you're an Arch Linux user, for example - one of the distros that switched the earliest - you'd have been more than welcome to discuss counter-points to systemd on the mailing list.
Of course, most of those that attempted doing so were ridiculed out because in free software, or at least on the Arch ML, there is very little tolerance for bullshit. Most (MOST, not all) of the arguments against systemd are in fact bullshit. Hell, even on HN I've seen people crap on systemd because "it's lennartware".
PS: To be clear, there is plenty wrong with systemd. It's far from perfect, it's still very young and I'm not particularly fond of its "all in one" tendencies myself. But most people in this thread have clearly zero ground to comment on its internals, yet many do.
1) You write "Most (MOST, not all) of the arguments against systemd are in fact bullshit".
2) And yet you (wrongly) write "there is plenty wrong with systemd"
To be in line with your 1) statement, you should write 'it is only small things with systemd which are wrong', because everything minus MOST equals small, not 'plenty'.
Yet you don't; you write 'plenty'. Why? Because you know that the moment you write 'there is only a small amount of things wrong with systemd' you will be easily challenged.
So, from logic perspective, you're wrong and you contradict yourself.
Of course, you could also tone the hostility down before reading my post. Maybe that'd have helped you catch the fact that a flaw in systemd does not necessarily equal an argument against systemd, and vice versa.
6 years on Hacker News and this is what you've learned?
Try to discuss the essence, and don't wrongly accuse people who disagree with you of 'hostility', plus your ad hominem. Still, take a rest and have a great 2017; I value your contributions to open source.
Yeah I do though. Consider that maybe you don't see why your post was extremely hostile.
I have very little patience for comments that refuse to do any charitable interpretation and instead end up being "but your post is wrong if taken literally!".
I have even less patience when such a poorly chosen tactic is applied incorrectly.
I've given you the benefit of the doubt and did reply to you (you've again disregarded it). I'll repeat it for clarity: a flaw in x does not necessarily equal an argument against x, and vice versa. Furthermore, "Everything - most = small" is a mathematically nonsensical statement.
As for my "6 years" comment, that was not an ad hominem; it was an observation. If you were a fresh account, I would have dismissed your comment as a troll. If you were a fairly new account, I would have thought that maybe you're not used to discussing things on an online social platform. But you have been here, communicating with others, for six years; so when you write with such a confrontational tone, you get a confrontational tone back.
Both sides of this are wrong. The people complaining about "monolithic" are groping in the dark for ideas such as low coupling. The people saying "but count the binaries!" are not addressing the questions of fully documented interfaces between said binaries, composability, and interoperability.
The "uselessd guy" explained this quite well, as have many other people. Can you please advance to not dancing this same old dance every time?
I recently wanted to use the "systemd-journal-gatewayd" component in Ubuntu 16.04, which ships with systemd v229. Yet, the feature I needed was only available in v231.
Although I'm only interested in a newer version of "systemd-journal-gatewayd" there is no way to upgrade just this one component, it seems.
If it were a separate component, it would have a bunch of ifdefs covering several versions of systemd, possibly with some features disabled if an older version doesn't support them.
However, that somewhat increases code complexity for developers and systemd devs refuse to do that.
That is the point of the 'monolithic' criticism - despite there being many binaries, you can't easily build just a single one and make it work.
And the best part is that it's on all the major distros so I don't have to keep relearning the init system if I try something new.
My guess has been that the majority of users enjoy it, while a few very vocal opponents have such hate for anything new that they will latch on to every small bug as if it's the world's end.
(Anecdotally, this is the first time I have written a comment about systemd as other discussions have been non-productive flame fests without substance).
For me, systemd randomly fails to execute a reboot, then, because of this, fails to do halt, and I have to force power off the machine.
And this is just one.
The technical term for this is "precommitment", and it comes from Thomas Schelling. It's actually quite advantageous: once you're precommitted to a particular direction, you're emotionally invested in getting the most out of it. By precommitting you to systemd, the distros hope to get the community to leverage systemd's benefits.
> OpenRC provides a number of features touted as innovative by recent init systems like systemd ...
And, now with OpenRC it still "just works". Fast, predictable boot with logs in normal places that I can easily debug, manage and modify.
Systemd won for one simple reason: it's the only tool that accomplishes this task without bugs. We've been running daemontools for almost a decade in production, and it's a nightmare of bugs. Very glad to be finally switching to systemd.
If this is true, and speaking as a systemd user for close to five years now, it universally sucks at its primary purpose.
Specifically, I've lost count of the number of times systemd has barked out 200 lines of useless errors that boil down to "service has entered failed state". Whenever a systemd service fails, odds are better than even that I'll have to spend two hours debugging why, by enabling internal logging in that service, running it in debug mode, etc.
Like when I tried to switch to networkd and had the wrong password for a Wi-Fi network I was connecting to: networkd never told me this in any way I could find. I had to go back to the old solution (after an hour of pulling my hair out) before I realised the password was wrong.
networkd only springs into action when wpa_supplicant succeeds in establishing the layer 2 connection and the interface becomes UP. I like the wpa_supplicant+networkd combo precisely because of this decoupling between network layers. One day, I'll get off my lazy ass and replace NetworkManager with wpa_supplicant+networkd on my notebook.
If you want a network manager that does know about those and might give more helpful error messages if they fail, use for example NetworkManager.
repeat after me:
everything eventually fails.
How can you tell when a programmer has graduated from "completely new at this" to "has some valuable experience"? That point comes when they stop assuming success.
Check for error and do something useful with the returned value.
Write tests yourself.
Log status, so you know what was happening just before it failed.
Set reasonable timeouts on external processes.
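The checklist above can be sketched in a few lines of shell (the helper names and the 5-second bound are arbitrary, not from any real system):

```shell
#!/bin/sh
# Sketch of the checklist: never assume success, log context, and bound
# external commands with a timeout. Helper names here are made up.
log() { printf '%s %s\n' "$(date -u +%FT%TZ)" "$*"; }

run_step() {
    # Capture output and branch on the exit status instead of assuming success.
    if ! out=$(timeout 5 "$@" 2>&1); then
        log "step failed: $*: $out"
        return 1
    fi
    log "step ok: $*"
    printf '%s\n' "$out"
}

run_step echo hello
run_step false || log "recovered from a failing step"
```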
Systemd is written from the perspective of a laptop user who will hand over the whole thing to a support tech when things go wrong. This is antithetical to the spirit of UNIX, which is not "write programs with one purpose that chain together well".
The spirit of UNIX is this: At any time, a user on the system may decide to become a developer or a sysadmin. The tools and information they need should be available.
Or actually all of them (including networkd) work fine, but are not the right tool for every usecase.
So basically, you're saying that systemd et al integrate with everything on my system, except for when it's useful?
Not that different from other tools like `ifupdown`, `NetworkManager`, `wicd`, `connman`, ...
Hint: Networkd doesn't run wpa_supplicant. So it can't "hide" anything about it. Or only as much as `ifconfig` "hides" errors from wpa_supplicant.
I've also used runit (which follows the daemontools model) as a service manager and I've never had an issue with that. I may just have been lucky though.
For me systemd fails because of some bugs I have repeatedly experienced:
- systemd stops reaping zombies for some reason; the OS's PID table becomes full and it's impossible to create new processes -> need to hard reset the machine
- systemd takes over the halt/reboot commands, which is OK when it works, I guess -- but for some reason I sometimes get "operation timed out" (likely because of some bug in systemd or dbus). At this point I have to hard reset the machine to get back to a usable OS.
Imagine if #2 happens to you on a remote machine. In my case, I had to call someone and ask them to reset the machine.
#1 is unthinkable for me, because reaping zombies is the second thing init is supposed to do (the first being bringing up the various services). I've never had this happen to me on older Linux or other Unices because, let's face it, it's not that hard to do. At that point I honestly thought "how can I expect systemd to provide all the features it boasts when it can't do the easy things well?".
I've also had daemons that systemd lost track of, apparently because of a wrong setting in the .service file. Now that's not a systemd bug, but it was very difficult to debug because I couldn't "trace" the process starting. On the other hand, with daemontools/runit it's quite simple: manually execute the ./run script and see where it fails. With classical init, run the /etc/init.d/service with sh -x and you see exactly where it fails.
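That workflow is easy to try with any stand-in script - write a trivial service script (the path and contents here are invented for the demo) and run it under sh -x to watch each command echo as it executes:

```shell
# A stand-in init-style script (path and contents are illustrative only).
cat > /tmp/demo-init <<'EOF'
#!/bin/sh
DAEMON=/bin/true
$DAEMON || exit 1
echo "demo started"
EOF

# sh -x prints every command to stderr before running it, so you can see
# exactly where a real script would fail.
sh -x /tmp/demo-init
```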
# echo b > /proc/sysrq-trigger
I've been running runit in production for many years, and it does just this, flawlessly.
Here you have a daemon that ties into PAM that tries to second guess the kernel regarding what constitutes a session.
Effectively systemd is becoming something akin to Android. It may be using the Linux kernel, but it is not the GNU/Linux we have grown familiar with over the years.
It has some sort of sessions for processes, but that is just something sharing the same name, not the same concept.
In a similar vein, systemd targets are considerably more useful than sysv/OpenRC runlevels. A typical sysadmin can read a couple paragraphs of manpage and set up a new target (analogous to runlevel) in systemd; and it can be used for debugging or recovery in the same fashion that runlevels are in other init systems.
On top of this, there are powerful system services developed in tandem with systemd, in the same repository, which offer well-integrated standard alternatives to things which were superficially different on every distro only four years ago.
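For illustration, a minimal custom target is just a handful of lines (the unit name and dependencies below are invented); you would switch to it with `systemctl isolate maintenance.target`, much like telinit switched runlevels:

```ini
# /etc/systemd/system/maintenance.target  (hypothetical example)
[Unit]
Description=Maintenance mode
Requires=multi-user.target
After=multi-user.target
AllowIsolate=yes
```

AllowIsolate=yes is what permits the target to be switched to at runtime, the same role a runlevel entry played.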
The things it has are incomplete and too integrated, making them hard to replace with working alternatives that do what you want.
I like openrc, I also like runit (for different reasons) but SMF is the gold standard for me and I recommend more people check it out. Even if it has XML elements :(
Are there any benchmarks for this? I don't know why, and I may be doing things wrong, but journalctl -lu <unit> --since yesterday in our prod env takes a couple of seconds before I see any output, while a (z)grep on a date rotated, potentially compressed log file on a non-journald system is generally instantaneous.
On a side note, I really like machine parsable, human readable logs, and I've never had any speed or performance issues with it when dealing with volumes of hundreds of millions of log entries, though that may be because I don't know any better.
Faster to search often means slower to write too, I'd prefer faster to write and slower to search if I had to choose.
There was a bug though where the journaling was so slow it caused the whole server to crash. (sorry, can't find it right now; something about them using mmaped files and the access pattern was.. or something, interesting stuff:)
Binary logs should be faster, in theory, and should/could be as robust as text logs (if not more), in theory. edit: note that GNU grep is insanely optimized.
Please, don't release such a bold statement without any source. Please!
On the other hand some OSS projects (the various databases, for example) do have posts/writings on performance. There are even some really in depth ones.
I'm fine with things being bloated (or just plain cpu slow) as that careless coding lets people make some nice things (a big part of KDE, for example). But i am not fine with people who publicly claim that their code is fast(er) without doing proper benchmarking. Hell if it was hard i wouldn't say anything, but it is not (for C code at least).
Objectively, most programs I use (and most people use, even without knowing) can afford to be slow and eat much more memory than they should, because most programs just sit there doing nothing for the grand majority of their run time (FF and Sublime... why do you send those msgs that use up 1-4% CPU? I just don't understand). (Honestly, I'm worried that they are lowering (raising?) the performance baseline, but that's just predictions of the future - voodoo, spirit animals showing the flow of chi, or something.)
PS Sry for the dumb rant. I like the topic of performance.
There is something subjectively nice to me about running systemctl status and seeing a nice, clear picture of what the current state of services are on a system, being able to control the life cycle of a service including having systemd monitor and restart it, not having to deal with improperly written init scripts, etc.
Stockholm Syndrome perhaps?
It's just a shade too ornate, a little too magical, in both cases only by a small degree, but it's an important one.
Creating a workable systemd init script is actually pleasant. Getting it running is easy. Checking for errors with status is nice, but searching the logs is annoying.
pm2 (https://github.com/Unitech/pm2) has a neat feature where you can watch logs easily, something that systemd should totally steal and pack into journalctl, so that "systemctl logs sshd" shows them in real time, as an alias for the obnoxiously verbose "journalctl -u sshd -f".
Seriously - just reading it is painful. Binary logs.
That's a lot of things, but it ain't unix.
For the straightforward "?!$! something happened just grep for it in the log file" it's no harder or slower. And you start noticing things like -f and --since that have no good analogs in the text log world, and it seems like it has value.
And every once in a while you need to pull magic like "show me EVERYTHING that was happening in the 3 seconds before this event, including kernel logs", and that's where it actually makes sense.
I hated it too when I started using it. I don't mind so much now.
Remember some people never used the previous init and logging systems. They're new to Linux or UNIX, they may be even young (and growing up with Systemd).
Not just in this particular case but more broadly it is important to note there's going to be opponents who are used to X, Y, and Z and don't want to relearn or adapt. It is akin to the never ending progressive vs conservative argument. Just how large is the group though? Is there backwards compatibility? You see, I just use 3 Raspberry Pi's and I am apparently still able to use /var/log.
-f is `less +F`, no?
--since is neat. You could hack it up with a sort and an awk, but it's nice having it built in.
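As a sketch, here is that hack over a plain-text log whose lines start with sortable ISO-8601 timestamps (an assumed format - classic syslog "Jan 1" stamps would need converting first); a string comparison in awk then does the job of --since:

```shell
# Build a tiny sample log (contents are invented for the demo).
cat > /tmp/app.log <<'EOF'
2016-12-30T09:00:00 service starting
2016-12-31T12:00:00 service reloaded
2017-01-01T08:30:00 service crashed
EOF

# ISO-8601 timestamps sort lexically, so a plain string comparison on the
# first field approximates `journalctl --since`.
awk -v since="2016-12-31T00:00:00" '$1 >= since' /tmp/app.log
```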
UNIX has had binary logs since forever. utmp etc. are traditional UNIX and utmpx is standardised by POSIX.
Everything in life is a compromise in one way or another. Using journalctl instead of less? That took me about 5 minutes to commit to memory, and then...that's it.
I don't even think twice anymore about using one command or another to read the contents of the log.
$ cat >> ~/.bashrc
alias logs="journalctl -f -u"
"journalctl -u sshd -f" is not straightforward because it's a _redundant_ new command when I already know grep and tail. Small tools handling text is a wonderful way to make a system easy to learn and powerful to use, and we're just throwing that out.
and 99% of sysadmins are not writing their own, they're modifying the ones supplied by distro maintainers.
If you're a distro maintainer then init scripts are something you write, something you understand because you're handling the state of your distribution from them.
Of course you can just abstract away all the code to another bash script and have your init as 2 lines (which is what OpenBSD actually does).
And even here, systemd makes things easier, more straightforward and comprehensible by allowing me to modify system-provided units via drop-ins. No need to copy; I keep getting updates to the original units, as I'm supposed to, in a clean and well-defined fashion.
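As a sketch (the unit name and values below are made up), a drop-in only overrides the keys it lists; everything else, including future package updates to the unit, is inherited from the original:

```ini
# /etc/systemd/system/nginx.service.d/override.conf  (hypothetical unit)
[Service]
Restart=always
RestartSec=2
```

After adding or editing a drop-in, run `systemctl daemon-reload` so systemd re-reads the unit.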
B) I'm talking specifically about systemd init files, or whatever they're called. Given that they initialize things, that they replace conventional init files, the term applies.
Systemd's native understanding of cgroups, process hierarchy, and logging makes this view possible.
* it replaces a significant part of the OS and creates new APIs. That's not really a problem of systemd itself, but once other applications start depending on those APIs, you need systemd to use them. This is especially bad for non-Linux systems, like the BSDs, which need to write tools that emulate such APIs. It doesn't help that Lennart is openly against any non-Linux system, so he won't make things easier
* it's buggy. This kind of reminds me of PulseAudio (also created by Lennart); things will eventually improve, but it sucks on early versions of RedHat/CentOS 7.
Linux has great APIs like cgroups, inotify (in my opinion considerably more useful than the equivalent use of kqueue), and others. OpenBSD is the other unixy system which I think offers something valuable and unique, and it has its own APIs to offer that.
I'm interested to hear how you got the impression that systemd is/was buggy. I have run systemd on a variety of systems, including pre-systemd-support Debian a few years ago; I was always shocked by how reliably it would bring up the system, consistently name the devices, and provide a convenient interface to the logs.
I have been running systemd consistently on my workstations since Archlinux shipped it in 2012, and I have never run into a bug or poorly-documented feature as long as I have been using it.
It stands for inode notify, except there are edge cases where it doesn't actually report inode events. Consider this scenario:
ln test/somefile linktothefile
inotifywait -m test
If you append data to the file using the "test/somefile" reference, inotify reports the events as expected. If you do the same through the hard link, you get nothing.
IMO this is wrong since you accessed the inode. inotify is more of a "report events on a file/directory" rather than a "report events on inode" mechanism.
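The setup can be reproduced like this (inotifywait itself may not be installed everywhere, so the watch step is left as comments); `stat` confirms that both names refer to the same inode:

```shell
# Reproduce the scenario: a file inside a watched directory, plus a hard
# link to it outside that directory. Paths are arbitrary.
mkdir -p /tmp/inodedemo/test
echo data > /tmp/inodedemo/test/somefile
rm -f /tmp/inodedemo/linktothefile
ln /tmp/inodedemo/test/somefile /tmp/inodedemo/linktothefile

# Both names resolve to the same inode number:
stat -c %i /tmp/inodedemo/test/somefile /tmp/inodedemo/linktothefile

# inotifywait -m /tmp/inodedemo/test &
# echo more >> /tmp/inodedemo/test/somefile   # MODIFY event reported
# echo more >> /tmp/inodedemo/linktothefile   # nothing, same inode notwithstanding
```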
Historically we've had very little for managing processes. Start stop restart is about all we get for public interfaces, and they all involve shelling out via system().
The entire rest of the computing world has gotten immensely rich and powerful and grown. It's grown because there have been APIs. OpenStack, AWS, Kubernetes - once other applications start depending on the API (as you say), things start getting really good and powerful. The API makes the thing usable and gives it programmability.
Process management has not had a decent API until systemd. Now we can write programs that watch for units being started, watch for them failing. Everything, the whole world, suddenly has a consistent well integrated API. It's appalling to me that people would disfavor this, when the past has been a bunch of hand jammed /etc/init.d/ scripts with varying capabilities and little introspectability.
I agree that the non-Linux situation is absurd and unfortunate. There really can't be that much special sauce being relied on. The UMass PLASMA group has systemgo, which is an interesting alternative, imo. https://github.com/plasma-umass/systemgo
I can tolerate systemd on desktops as long as I don't have to deal with it. The moment it craps out with Java-esque error traces in binary logs I'll install Slackware (or is there a modern desktop Linux without systemd I'm not aware of?).
Other than for desktops with complex run-time dependencies (WLAN, USB devices, login sessions), what are the benefits of systemd for servers that warrant a massive, polarizing, tasteless monolith which puts you up for endless patches and reboots, especially in a container world? (Re-)starting and stopping a daemon isn't rocket science; it's like 10 lines of shell code.
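For what it's worth, the "10 lines of shell" version is roughly the sketch below, with `sleep 1000` standing in for a real daemon and an arbitrary pidfile path:

```shell
#!/bin/sh
# Minimal start/stop/status sketch; `sleep 1000` is a stand-in daemon
# and /tmp/demo.pid an arbitrary pidfile location.
PIDFILE=/tmp/demo.pid
daemon_ctl() {
    case "$1" in
        start)  sleep 1000 & echo $! > "$PIDFILE" ;;
        stop)   kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE" ;;
        status) [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null \
                    && echo running || echo stopped ;;
    esac
}
daemon_ctl start
daemon_ctl status
```

This is essentially what sysvinit scripts did; what it lacks, and what supervisors add, is restarting the daemon when it dies.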
When I asked to disable binary logging entirely because it kept getting corrupted and was opaque I was met with "Unlearn your old ways old man"-style responses and "You can just log to rsyslog too".. No, I want it disabled.. I don't have a desire to log twice..
It was then I realised I was at the mercy of systemd, they can push whatever and reject whatever and I'm completely out of control- I cannot introspect their service manager effectively, it's non-deterministic. It just reeks of hubris from the maintainers. It does mostly the right thing for most people, and they compare it to sysvinit which admittedly needed love. (and, was not a process manager, was only a process starter).
So, I adopted *BSD on the server. And christ is it wonderful.
I still use Arch/SystemD on my desktop at home, because it's actually pretty useful on laptops/desktops. But I swore a vow never to manage a server with SystemD on it.
I'm still runing RHEL6 at work, for the next OS, I'm pushing my very large corp to adopt FreeBSD as an alternative. That's a fairly large amount of money that Red Hat will lose and I don't particularly feel bad about it.
They forced my hand.
One of the exhibits in the systemd House of Horror results from people taking multiple Poor Man's Dæmon Supervisors written badly in shell script, nesting them inside one another, nesting the result inside another service manager, and then recommending against service management in toto because this madness goes badly wrong.
Unix admins must come to learn what they're doing somehow. The way they learn is by going one step after another, beginning with simplistic shell scripts controlling isolated functionalities, then improving them, etc. Do one thing and do it well, Unix philosophy, whatever. For novice Unix admins, systemd is too much of a black box, without any kind of didactic curriculum leading to it or away from it. Folks cannot grow into becoming senior Unix admins.
I've seen it first hand just recently, where a customer of mine went all-in (on Puppet, in this case). The "admins" were merely clicking around and doing trial-and-error kinds of things; they had absolutely no idea what they were doing, and when things went south they weren't able to even diagnose what was going on. They hated their job and were out the door at 5pm. However, they stayed late and were very eager to learn when I gave them an ad-hoc Unix command-line survival guide.
Systemd and its ilk is not how you get responsible and competent Unix admins that take pride in their work.
In other words, perfect fodder for Red Hat support contracts (anyone getting flashbacks of MCSEs?)...
I've been using Linux Mint, Debian Edition, as my main workstation OS for more than 3 years now.
It has systemd, but does not use it as init; it has a Makefile-style boot process as far as I know.
That's why half of the SysV-style restart scripts call sleep. No such unreliable hacks with systemd...
Because on servers, even the huge repository isn't that much of a gain. But running "apt-get upgrade" beats rebuilding a machine by orders of magnitude. Even more for rented VPS where I must either add an instance or get downtime.
Instead they keep trying to shove everything into one giant pile, and don't understand why people get upset.
They did; systemd is very modular in nature. The only things that aren't modular that spring to mind are the journal, and that systemd the init requires dbus (sort of).
systemd the project has a little bit of everything that you need in an operating system but systemd the init is stripped down to being able to parse unit files, supervise services, and generate and walk the dependency graph from where you currently are to where you're aiming to be.
All of the random features that you always hear about are individual, self-contained services. For example, systemd the init knows nothing about the format of /etc/fstab or SysVInit scripts. It only understands how to work with unit files; the way this works is that there are separate, self-contained generators that parse /etc/fstab and all of the old init scripts and create unit files that only exist in a tmpfs directory.
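For instance, an /etc/fstab line like `UUID=abcd-1234 /data ext4 defaults 0 2` (UUID and mount point invented here) is translated by systemd-fstab-generator into a transient unit, roughly:

```ini
# Generated into a tmpfs directory such as /run/systemd/generator/data.mount;
# never edit it there - it is rebuilt from /etc/fstab on every boot.
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5)

[Mount]
What=/dev/disk/by-uuid/abcd-1234
Where=/data
Type=ext4
Options=defaults
```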
If you don't want something like systemd-networkd then you don't have to have it. You can just compile systemd without it, turn it off at runtime, or just configure systemd to not start said service. You can take systemd and make it suitable for everything from embedded use, to servers, to a multiseat desktop.
Every time that people write that they tell other people who do know systemd that they do not know it. The journal cannot be turned off. Making it be stored in files in /run/log/journal/ instead of in files in /var/log/journal/ is not turning it off. It's making it non-persistent so that it doesn't last across system restarts, delegating the job of writing persistent logs to post-processing services that (nowadays) read the journal using its systemd-specific database access facilities. Ironically, making it not be stored in any files at all would actually prohibit the post-processing services from working, as they would have nothing to read and to process into their own formats.
I appreciate the concern, thanks.
>The journal cannot be turned off.
The persistent binary logging is turned off, which is what people bitch about. Obviously, many of systemd's monitoring features are tied to the journal, and as stated, journald still needs to be running, writing to a non-persistent journal for these and forwarding logs (if specified).
>delegating the job of writing persistent logs to post-processing services that (nowadays) read the journal using its systemd-specific database access facilities.
It's true that syslog-ng pulls messages from the journal, whereas syslog implementations that are not aware are provided with them over a compatibility socket, but this is a performance optimization, to reduce system overhead. Not really an important distinction.
The context was turning persistent logs off and switching to on-disk and persistent text logs with syslog. If you really wanted to nitpick: dbus and udev are totally non-optional.
> This system is mothballed.
From its Github page at https://github.com/jcnelson/vdev
Distributions have just found it so pointless to strip it down that they've left it largely as one package. You're entirely welcome to use your distribution's packaging scripts to produce a reduced package, if one doesn't exist yet.
As far as I'm concerned, I'm happy that it's a single repository tracking consistent, interoperable versions of each of the systems in systemd. This means that there are few dependencies to track.
People get upset because they're too lazy to run the build system themselves. People who are unwilling to run the build system and read the mailinglist are unfit to ship anything short of the whole thing anyway. In my opinion, if these people are complaining, it is meaningless. They are not the sort of people who can figure out how to fit their system into 8MB of flash on a tiny SoC, so them complaining that the systemd package is more than 10 megs is like a child complaining that their bedsheets are too barbie and not buzz lightyear enough.
Oh, so just most of the things that people hate about it. No biggie then. /sarcasm
> People get upset because they're too lazy to run the build system themselves. People who are unwilling to run the build system and read the mailinglist are unfit to ship anything short of the whole thing anyway. In my opinion, if these people are complaining, it is meaningless. They are not the sort of people who can figure out how to fit their system into 8MB of flash on a tiny SoC, so them complaining that the systemd package is more than 10 megs is like a child complaining that their bedsheets are too barbie and not buzz lightyear enough.
I can't say I've ever seen anyone complain about that. I have seen, however, people complain about broken logs, unreliable processes, butchered output, hacky fixes, obscure errors, etc.
And this is the attitude that I have an issue with. Systemd has a lot of good sides, but it also has a lot of warts. Pretending that people's legitimate concerns are nonsense is a waste of everyone's time, and is what makes people who have issues want to complain at every turn.
1. Copy <unit.service> to the server.
2. Enable it: sudo systemctl enable /srv/node/<name>/config/<unit.service>
3. Start it: sudo systemctl start <unit.service>
Bash isn't good at suggesting proper abstractions; I rarely see talks about this. Maybe gurus can see through the rain.
I keep seeing a place for a tiny lambda-friendly intermediate layer, just so you can compose properties of your services like Clojure ring middleware.
(path "/usr/bin/some-service" "arg0" ...)
(wrap :pre (lambda () ...)
:post (lambda () ..))
ps: the idea is that (retry 5) is a normal language function, and not a systemd opaque built-in, you can write your own thing, and the most used can go upstream. Hoping to avoid the sysvinit fragility and redundancy.
Here is a bash function that retries a command up to N times:
retry() { n=$1; shift; for i in $(seq "$n"); do "$@" && return 0; done; return 1; }
retry 5 echo hi
timeout 0.1 $0 retry 5 echo hi  # compose with timeout(1) by re-invoking the script
Shell has a very forth-like quality to it, and Forth is sort of like a backwards Lisp as well (postfix rather than prefix).
IMO Bash, with its multitude of annoying quoting and field-splitting rules, many irrelevant features focused on interactive use, and error handling as an afterthought, is just the wrong choice for writing robust systems. It's too easy to make mistakes. And it still works only in the simplest cases, until somebody evil deliberately passes you a newline-delimited string, or something with patterns that expand in an unexpected place, etc. Properly handling those cases will make your script an ugly mess. Actually I find the mental burden when writing shell scripts very akin to programming in C.
So don't use the Bourne Again shell. After all and to start with, if you live in the Debian or Ubuntu worlds, your van Smoorenburg rc scripts have not been using the Bourne Again shell for about a decade.
There's no reason at all that run programs need be written in any shell script at all, let alone in the Bourne Shell script. Laurent Bercot publishes a tool named execline that takes the ideas of the old Thompson shell ("if" being an external command and so forth) to their logical conclusions, which is far better suited to what's being discussed here. One can also write run programs in Perl or Python, or write them as nosh scripts.
I totally agree with your second paragraph, that is why I'm working on fixing shell :)
This entry in particular is relevant to your concerns:
Then of course there are TCL and the Thompson shell.
They are completely different systems of init scripts. Just for starters, BSD does not have run levels, so the rcn.d directories and the maze of symlinks do not exist.
BSD init scripts themselves are much simpler mostly because there is a good system of helper functions which they all use, rather than every single script inventing its own wheel in sysvinit. Of course, this is just a discipline or convention, and sysvinit could be vastly improved if anybody was similarly industrious.
BSD init scripts also have completely different dependency management (PROVIDE and REQUIRE instead of numerical priorities). And the mechanism for disabling and enabling a script is via dead simple definitions in the single file /etc/rc.
ssh-keygen -A >/dev/null 2>&1 # Will generate host keys if they don't already exist
[ -r conf ] && . ./conf
exec /usr/bin/sshd -D $OPTS
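For comparison, the same service could be expressed as a systemd unit along these lines (a sketch only; the real sshd.service shipped by distributions differs, and the EnvironmentFile path here is hypothetical):

```ini
# Hypothetical unit roughly equivalent to the run script above.
[Unit]
Description=OpenSSH server daemon

[Service]
ExecStartPre=/usr/bin/ssh-keygen -A
EnvironmentFile=-/etc/default/sshd
ExecStart=/usr/bin/sshd -D $OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```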
Also, openssh has protocol-specific (and secure by default) configuration options around connection lifecycle, etc. that no general purpose init system should try to replicate and then blindly apply to unrelated services.
I've been greatly enjoying systemd for the ability to write services and startup jobs in languages that don't have great libraries for doing all the right things, like Python and shell, because it gives me all those Right Things as effectively a library. I don't have to manage pid file handling and daemonization and restarts on my own; systemd will do it, and will do it well and Right. (I used to do this with start-stop-daemon, and it did it poorly and only okay.) It will also get out of the way of OpenSSH, which does it well and Right, and get in the way of the dozens of services that think they're doing it well and Right but aren't.
Easy things should be easy, and hard things should be possible. systemd supports both. If that's the extent of what runit can in fact do, it only supports the former. (SysV-style init, of course, just makes easy things hard and hard things confusing, but as everyone else is saying, it's very not the standard of comparison here.)
There are plenty of places to find competent summaries of both the technical and political arguments for and against systemd and we don't need yet another.
I run Gentoo (OpenRC) and Void (Runit) on my own systems, and although I do like both of them, I find the total lack of alternatives to the systemd ecosystem troubling.
As a package maintainer, I do like being able to create rpm/deb files and only needing one standardized init script, so there are a few advantages to systemd, but not many. Its complexity makes it difficult to create drop-in replacements (work has stopped on uselessd and others).
There needs to be more community support for distros like Void and real alternatives. Articles like this encourage that kind of thinking.
Personally I feel that the sheer contentiousness of systemd and its chief maintainer is a bug in and of itself, one the maintainers should address.
Also, I'm failing to think of a scripting purpose that requires you to place non-trivial bash directly in a systemd unit that couldn't be solved by writing out the script somewhere and just invoking the script in the unit.
It's right there in the post. I have indeed had to do something like this in the wild to run a Docker container with special needs.
You're not. Contrived example is contrived.
I'd prefer if we stuck with Upstart and improved on it, though it already seems like a distant dream.
Thus when I see some dev talk about excising edge cases in the name of user-friendliness, I see them preaching a fool's errand.
The problem with systemd has always been that it forces you to do everything the systemd way, and usually to use its tools. It goes against everything that has helped GNU/Linux become a great system: that it wasn't really one system. It was bits and pieces, and you could add them together however you wanted to get something you liked.
Systemd is the opposite. It is inflexible and clunky, monolithic and proprietary, binary and difficult. Everything has to be made to work with it, not vice versa. It doesn't follow any of the old conventions that made it simple to combine one tool with another. I can compare it to Windows, but I feel like that would be an insult to Windows' usability.
If this one single fact was different, everyone would love systemd, because it has plenty of useful features. But because it is designed specifically to please a single quirky user, people hate it.
The other points can be debated, but please don't call it proprietary. It is free software as in "freedom" (LGPL).
You are using the term proprietary incorrectly, probably out of simple ignorance. The software does not have an "owner" who has power over the users; the LGPL means all users have complete control over the software.
Please read this carefully before spreading more misinformation:
Could you share a few specific examples? That hasn't been the case at all for me. I have worked with systemd and have had the opposite experience. I simply write textual config files describing how to launch the service, which can be done in a variety of ways. Could you share more about how you tried to use systemd?
I'd encourage you to read Russ Allbery's analysis of systemd , which includes a description of his experience converting one of his packages to use systemd. He remarks on the well-done integration and its compatibility:
* Integrated daemon status. This one caught me by surprise, since the
systemd journal was functionality that I expected to dislike. But I was
surprised at how well-implemented it is, and systemctl status blew me
away. I think any systems administrator who has tried to debug a
running service will be immediately struck by the differences between
lbcd start/running, process 32294
lbcd.service - responder for load balancing
Loaded: loaded (/lib/systemd/system/lbcd.service; enabled)
Active: active (running) since Sun 2013-12-29 13:01:24 PST; 1h 11min ago
Main PID: 25290 (lbcd)
└─25290 /usr/sbin/lbcd -f -l
Dec 29 13:01:24 wanderer systemd: Starting responder for load balancing...
Dec 29 13:01:24 wanderer systemd: Started responder for load balancing.
Dec 29 13:01:24 wanderer lbcd: ready to accept requests
Dec 29 13:01:43 wanderer lbcd: request from ::1 (version 3)
Both are clearly superior to sysvinit, which bails on the problem
entirely and forces reimplementation in every init script, but the
systemd approach takes this to another level. And this is not an easy
change for upstart. While some more data could be added, like the
command line taken from ps, the most useful addition in systemd is the
log summary. And that relies on the journal, which is a fundamental
design decision of systemd.
And yes, all of those log messages are also in the syslog files where
one would expect to find them. And systemd can also capture standard
output and standard error from daemons and drop that in the journal and
from there into syslog, which makes it much easier to uncover daemon
startup problems that resulted in complaints to standard error instead
of syslog. This cannot even be easily replaced with something that
might parse the syslog files, even given output forwarding to syslog
(something upstart currently doesn't have), since the journal will
continue to work properly even if all syslog messages are forwarded off
the host, stored in some other format, or stored in some other file.
systemd is agnostic to the underlying syslog implementation.
So what I'm asking is, have you tried to achieve something with systemd and found that you were forced to do it differently? I'd be curious to learn more about what those cases were. I'm not affiliated with systemd project, but I have a general interest in wanting to see it improve.
(Please note that, barring conventions of some kind that span multiple tools, every tool naturally requires you to do things that tool's own way. If you are holding systemd to the standard that its job should be able to be performed by two different tools, each with the same interface, while supporting advanced features, then you should have other examples of this as well.)
systemd is a process supervisor plus additional features; sysv init is not a process supervisor (though the "init" program as configured in /etc/inittab is a supervisor). At a minimum, a process supervisor starts processes and restarts them should they exit.
Some modern daemontools-family systems with additional features beyond basic supervision that put them in the same category as systemd are nosh and s6-rc. These two systems (and all the daemontools family), in contrast to systemd, are not monolithic, but composed of multiple smaller programs.
Notably, they depend on the shell (any script or executable, but traditionally small shell scripts) to perform actions like setting up environment variables, switching to a less privileged user, redirecting output, etc. while systemd has created a large and growing set of directives to accomplish these. This set of directives is one of my annoyances with systemd. As someone familiar with the shell, I find it annoying to learn yet another language (systemd directives) for expressing the same thing, though I recognize not everyone is familiar with shell.
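To illustrate the contrast, here is roughly what that directive language looks like; the service name, user, and paths are hypothetical, and each directive stands in for what a daemontools-style run script would do with tools like setuidgid and shell redirection:

```ini
# Hypothetical unit: directives replacing run-script shell idioms.
[Service]
User=myservice               # instead of setuidgid/chpst
Environment=PORT=8080        # instead of "export PORT=8080"
WorkingDirectory=/srv/myservice
StandardOutput=journal       # instead of redirecting to a logger
ExecStart=/usr/local/bin/myservice --foreground
```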
90% of every file is boilerplate built around: stop, start, status (after all restart is just stop+start).
Proponents of SystemD can make some good points, but trying to claim init.d is complicated comes across as desperate.
Writing systemd unit files for my applications is a breeze compared to the shit I had to do for sysvinit - especially when I factor in apps I DIDN'T write that I need unit files for (teamcity, etc) and there's less surprise with SELinux policy transitions (and unlike most setenforce 0 is not the first thing I run on a server, I work in healthcare and a solid MAC layer is important to me).
Are you trying to make the argument that shell scripts are simple, easy to read, and maintainable?
Permit me to disagree.
I don't think you have as much experience with Unix as you would like us to think you do.
You might have made your personal init scripts in some other shell (until you learned about design failures of csh), but I'm willing to bet your Unix distributor did not.
If you disagree with me then post an example and I'll show you the bug in it.
Also, it's not the 90% that matters; it's the 9% that is difficult and the 1% that is crap.
require 'pry'; binding.pry
Unlike all of the above, runit isn't complicated. You write a tiny, often one-line script and runit executes it; if it exits, runit waits a reasonable amount of time and restarts it.
Not only is it not complicated, but it runs beautifully on top of an existing init system. So now wherever I go I have one tiny script that will start my python based web services and similar. I run the same basic script on freebsd rc, ubuntu upstart and arch systemd boxes.
Unix is just a collection of random decisions made by various people over the years. The Unix philosophy is really more like "I've always done it that way so don't you dare change it".
fork/exec is garbage. Signals are garbage. You basically can't make any API calls after fork or in a signal handler, because threads came afterwards. The interaction of fork and threads is bananas, and many hacks have been required over the years to paper over the problems.
The layout of /usr/bin, /usr/sbin, /usr/local/bin, et al isn't a good design. It's dogshit but it was necessary because early Unix file systems couldn't span multiple volumes and early disk systems were small.
The C compilation model of separate header files is not a good design. People have retroactively determined some of the side-effects are not only good but The One True Way. In reality the design was a result of extremely limited RAM and slow CPUs. The preprocessor itself was never designed, just grafted on ad-hoc.
Unix file permissions are shit. Every unique combination of permissions requires a group. Owners are identified by simple integers. NFS legitimately gives people nightmares.
Let's not even get into everything is a file, except when it's not, and some files are more equal than others.
What about dependency hell? How's that "simple" model working out?
The "Unix philosophy" can piss right off.
"Unix is just a collection of random decisions made by various people over the years."
"Linux has never been about quality. There are so many parts of the system that are just these cheap little hacks, and it happens to run." - Theo de Raadt
"The layout of /usr/bin, /usr/sbin, /usr/local/bin"
I don't agree with you. Having the base system in one directory and user-installed binaries in another makes sense. I don't understand why most modern Linux distros install on only one partition by default. You lose the ability to mount with flags, i.e. noexec, nosuid, nodev.
NFS is horrible. Unix/Linux is a multi-user operating system; why would you not want to have groups?
Now that's backwards. He obviously meant a better permission system. Do you seriously think groups are the best way?
Access Control Lists are a better way.
This is probably just your lack of experience not having worked on 50+ million LOC compiling for 12 hours and not having anything else as a better option. There is a reason these things exist.
I agree: the C and C++ compilation model is not a good design. It's a patch for not having a decent module system. Heck, even Borland Pascal in the '90s compiled faster than C++ does now on an orders-of-magnitude faster machine.
Enlighten me. And while you're at it, explain why this is better than, say, Rust's module system, where we don't need separate header files.
Why wouldn't something like the C++ module system be of benefit in C also?
It would, but by the look of it, C++ evolves faster these days.
I'd love to see a rebuttal of the specific points made as opposed to just "Most of this comment is incorrect".
But I can imagine what xenadu02 might have meant, if you like, and provide some counter arguments.
Signals aren't "garbage" (whatever that means).
Signals can call APIs (the set of async-signal-safe APIs). They can't call non-async-signal-safe APIs not because of threads, but because signals can interrupt a routine at any point (necessary for asynchronous notification of certain events which must be handled before the normal instruction control flow can be resumed) and that interrupted routine may not have been written to be reentrant.
This is true even without threads in the picture.
The fork/exec model is not "garbage". It is actually a fairly nice alternative to the "provide one API to start a child process and give it a large number of parameters for all possible situations". And you can call plenty of APIs between fork and exec in the child safely, just like from signal handlers.
I haven't dealt with dependency hell ever since shared libraries got sonames.
The rest of the comment doesn't list anything of substance. If you want rebuttals for "the file system layout is a bad design" or "the C compilation is a bad design" or anything else, provide some reasons why those are bad designs; some of those reasons may be valid criticism, and some may not be, but one can't just make vacuous statements like that and expect a reasonable discussion to follow.
> So while signal handlers are perfectly workable for some of the early use cases (e.g. SIGSEGV) it seems that they were pushed beyond their competence very early, thus producing a broken design for which there have been repeated attempts at repair. While it may now be possible to write code that handles signal delivery reliably, it is still very easy to get it wrong. The replacement that we find in signalfd() promises to make event handling significantly easier and so more reliable.
Another critic makes the case that "signalfd is [also] useless" :
> "UNIX signals are probably one of the worst parts of the UNIX API, and that’s a relatively high bar."
Signals came up recently on HN when someone remarked that not even memset() is signal-safe! 
All in all, working with signals correctly requires mastering a tremendous degree of complexity. Other platforms have provided simpler APIs, such as Structured Exception Handling (SEH).
Article link from https://news.ycombinator.com/item?id=9564975
An HN comment describing how it's simpler: https://news.ycombinator.com/item?id=13323870
P.S. Please note that the views quoted above are not necessarily my views.
> Why does ls do sorting? Why does grep do -R recursive searching? How is that "Do one thing and do it well"?
I think these are valid examples of how Unix itself fails to follow the "Unix philosophy" of "Do One Thing and Do It Well".
> The fork/exec model is not "garbage". It is actually a fairly nice alternative to the "provide one API to start a child process and give it a large number of parameters for all possible situations". And you can call plenty of APIs between fork and exec in the child safely, just like from signal handlers.
fork-exec complicates the implementation of threads (see atfork handlers). Rather than "a large number of parameters for all possible situations", another alternative would be a handle-based API: (1) a call which, given an executable name and arguments, returns an opaque handle (or file descriptor) representing the process to be started; (2) a bunch of further calls to set attributes on that handle (new features could add new APIs acting on the handle, or an extensible API like ioctl could be used), and if there were also a handle representing the current process, a single API call could set an attribute on either the current process or a child to be started; (3) finally, a start call which turns the process-to-be-started handle into a running-process handle.
> Unix file permissions are shit
The user-group-other model is arguably too limiting. ACLs are a better idea, but then should you use POSIX ACLs or NFSv4 ACLs?
The distinction between primary group ID and supplementary group IDs is silly.
Why must every file have both a UID and a GID? For files owned by a single user, you end up creating a dummy group like "staff" or so on just to obey the rule that every file must have a GID. For shared files, e.g. project files, files generally end up owned by their creator, even though in a business sense they really belong to the project not to whoever created them. It would make more sense if the owner could be either a user or a group, and then also have zero or more non-owning groups associated with it.
In most cases permissions should only exist on the directory, and then automatically apply to any files in the directory. (In most cases every file in the same directory should have the same permission; Unix bases its design on the exception rather than the rule.) Of course, hard links make this impossible, but I think hard links were a mistake.
The executable permission bits actually do double duty as a file type indicator. That's rather ugly. If Unix had explicit file types (rather than just a naming convention of file extensions), then certain file types could be declared to be executable. Executable permission would then mean "you are allowed to execute this if it is an executable" instead of "this is an executable". Stuff like the +x vs +X distinction in chmod would never have been necessary.
> Let's not even get into everything is a file
Unix would have been much better if everything were a file descriptor, rather than having stuff like pid_t. Linux at least is evolving in this direction. Plan9 does it better. Even the WindowsNT philosophy of "everything is a handle" is better than the traditional Unix approach.
The rationale for this is that POSIX ACLs are firstly too simple to model what we need. And they are also non-standard (POSIX .1e ACLs are a DRAFT specification which was never ratified).
NFSv4 ACLs are vastly more featureful, already implemented to support NFSv4 in kernel, though not available in userspace AFAICT. On FreeBSD and other platforms using ZFS, they are also used by ZFS and are directly exposed to userspace, making rich ACLs usable as the default permissions model system-wide when running on ZFS. Linux, unfortunately, doesn't yet do any of this, even when using ZFS.
Fork-and-exec isn't complicated by threads. Only fork-and-keep-executing is.
UNIX doesn't have a naming convention using file extensions.
Some of your points are valid opinions that are shared by others, but I don't know how much they have to do with the UNIX philosophy.
Some APIs can be improved, sure. And some are being improved. It takes time because of unix's success and most systems' desire to remain backward compatible (especially in source form).
Another issue is that fork-and-exec doesn't work well with languages with complicated runtimes, e.g. multithreaded garbage collection. It forces you to use a lower level language (such as C) to write all the code between fork and exec. An API based on process handles with a separate "start" call to convert a not-yet-started handle into a running process wouldn't have that deficiency.
Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model. What if the child process encounters an error between the fork and the exec? How does it notify the parent process of exactly what error it got (e.g. "setsid failed"?) You need some sort of IPC mechanism between the child and the parent. And such an IPC mechanism is prone to race conditions. By contrast, the process handle-based API I suggested doesn't have this problem since it doesn't introduce more concurrency into the system than is absolutely necessary.
> UNIX doesn't have a naming convention using file extensions.
Yes it does. The average Unix system is full of file extensions like .c, .h, .so, .html, etc. Even in Unix V1 file extensions were used as a convention - http://minnie.tuhs.org/cgi-bin/utree.pl?file=V1
> Some of your points are valid opinions that are shared by others, but I don't know how much they have to do with the UNIX philosophy.
Is there a clear definition of what the "UNIX philosophy" is? Is any criticism of Unix systems as actually implemented a valid criticism of the "Unix philosophy"? Or do you want to define the "Unix philosophy" so vaguely as to put it beyond any possibility of criticism?
How are you doing fork-and-exec in a language with a large runtime? You are either using the language-provided APIs to do it, in which case they should document the restrictions on what you can call (and you should follow those), or you are dipping down into the C or system call layer to do your own fork-and-exec, in which case yeah, you still need to keep to the safe list of routines you can call between fork and exec, and you may have extra limitations since you are mucking around underneath your language's runtime (like you may have to unignore signals on your own, close file descriptors, etc). No surprises there.
> Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model.
I don't think it is. You just print an error to stderr (write() is safe to call), and you return a bad error code (fork has built-in IPC for error codes via wait() in the parent).
> Is there a clear definition of what the "UNIX philosophy" is?
I don't know, ask the person who first invoked that phrase in this thread. They claimed it meant "do one thing and do it well" to them, and then they complained about things that didn't seem related to me (like file extensions, what does that have to do with programs "doing one thing"?).
> How are you doing fork-and-exec in a language with a large runtime? You are either using the language-provided APIs to do it, in which case they should document the restrictions on what you can call (and you should follow those), or you are dipping down into the C or system call layer to do your own fork-and-exec, in which case yeah, you still need to keep to the safe list of routines you can call between fork and exec, and you may have extra limitations since you are mucking around underneath your language's runtime (like you may have to unignore signals on your own, close file descriptors, etc). No surprises there.
Let's say I am using JNA – https://github.com/java-native-access/jna – under Java. It is safe to call posix_spawn from Java code using JNA. It is safe to call the Windows API equivalent (CreateProcess). It would be safe to call the handle/descriptor-based API I proposed. It is not safe to call fork. This is an undeniable deficiency of the fork-exec approach which competing approaches don't have. Furthermore, whatever compensating advantages fork-exec may have, the handle/descriptor-based API I proposed has the same advantages without this disadvantage.
> > Another issue is that it is very hard to implement robust error handling without race conditions in the fork-exec model.
> I don't think it is. You just print an error to stderr (write() is safe to call), and you return a bad error code (fork has built-in IPC for error codes via wait() in the parent).
But that isn't robust. How can the parent process reliably distinguish output sent by the child process prior to the exec from output sent by the child process post the exec? Likewise, how can the parent process reliably distinguish an error return value from the child process prior to the exec from an error return value from the exec'd program? It can't.
For truly robust error handling, you'd actually need to do something like this: (1) have a pipe between parent and child process with FD_CLOEXEC set on the child side; (2) the child sends the parent a message "I'm about to exec" before calling exec; (3) the child sends the parent a message saying "exec failed with errno=.." if the exec call fails; (4) if the exec call succeeds, the child process will close its end of the pipe without sending any message post "I'm about to exec". This is my point, actually robustly handling errors in the fork-exec model is quite complex. In a handle/descriptor based API it would be much simpler.
(And the above approach using a pipe isn't perfectly robust – what if the child process crashes for some reason between sending the "I'm about to exec" message and actually calling exec()? It is very difficult for the parent process to reliably distinguish that scenario from some failure in the program being exec()'d.)
Are you calling fork() from Java, from C, or using the system call number?
Because I'd agree calling it from Java might be unsafe (depends on how Java and JNA interact), but I believe calling it from C or the system call is perfectly fine. And this is in line with what I've written previously.
> But that isn't robust.
It's not supposed to be robust in the way you are describing.
The fork-exec model is low level. It is supposed to be low level. Doing high level things with it is supposed to take some work by the application. That's not a deficiency.
If you build too many things into the low level code, you run into trouble because now you've got 10x as many ways to fail (building your pipes, writing your error messages, marshalling error state, cleaning up, you name it).
Also, some programs will want to do some of those higher level things differently, so instead of baking them into the API and having tons of parameters and paying for some of that overhead (like creating a pipe and writing error messages to the parent for every single fork and exec) you only do that when you want it.
However, FreeBSD has had process descriptors since roughly 2010. They have the slightly odd semantics of terminating processes when all descriptors to them are closed. But they can be used as descriptors with kqueue() and the like.
I can drop capabilities with CapabilityBoundingSet=, or limit resource usage with CPUSchedulingPriority=, IOSchedulingPriority=, etc. I could even tell systemd to open the listening socket for me so the service doesn't need CAP_NET_BIND_SERVICE! Moving these options into the init system makes a ton of sense, because it gives administrators the ability to employ these features from outside applications, not just by enabling them within applications that bother to explicitly support them via command line arguments. Systemd better encourages the principle of least privilege: if a system daemon does not need the ability to "ptrace" other processes, or bind to ports <1024, then as the administrator I can take those away with CapabilityBoundingSet= in the unit file. Chrooting the service is as easy as RootDirectory=. This is a huge step forward compared to the world where every service must be relied upon to expose these settings, and must be trusted to implement them correctly.
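As a sketch, the directives mentioned above combine in a unit file along these lines (the daemon name, chroot path, and specific values are hypothetical):

```ini
# Hypothetical hardened unit: capabilities, scheduling and chroot
# are all imposed from outside the application itself.
[Service]
ExecStart=/usr/sbin/mydaemon
RootDirectory=/srv/mydaemon-chroot
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
CPUSchedulingPolicy=fifo
CPUSchedulingPriority=10
IOSchedulingPriority=4
```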
Also, I have never seen a systemd box emit log lines like that for a failed service. It invariably points at some useless logfile with obscure systemd messages in it instead of the stderr of the failed process. This is on clean ubuntu and debian installs. Maybe it is user error, but I doubt it. (Though there is no command line in the examples...)
Anyway, I'm happy to cleanse with fire instead of RTFM at this point. On a related note, I just learned the Solaris init system and started using OpenBSD's again.
I prefer them both to systemd. They are at opposite ends of a spectrum. The OpenBSD approach is well-curated shell scripts. I think systemd was heavily inspired by the Solaris one.
It doesn't. Systemd has its own resolver (systemd-resolved), which has other issues, but it does not run in PID 1. It's a completely separate process.
// I'm enjoying watching the points bouncing up and down, but if you disagree in some way, please comment :)
What exactly does 'PrivateUsers' do? What uid do I have? When I write that uid in a db, what value does it keep? Between invocations, does the uid change, or is it per unit? If a file is owned by a private uid, what do other processes on the system see? Is PrivateUsers for this unit file only, for the unit files in this group of unit files, across the entire system, across the entire cluster? If I want two different programs to share this PrivateUsers concept, how do I do that?
It turns out that gluing random shit to the side of a monolith gives you the illusion of convenience, but since the monolith will not do that thing well -- for example, identity management -- you will end up with some programs that adopt the half-assed solution, and some programs that are forced to do things a different way because their use case is complex. Now you have two problems.
Systemd's job and goal is to provide a simple configuration file format that makes it easy to enable these features with installed system daemons.
You may be overlooking the parameters supported by systemd and their benefits. Admins get a single way to manage their services and dependencies, one that works consistently across all services. These features can be enabled even if the services were not designed for them (e.g., chroot). With systemd you can apply these settings from the outside to any service, and that's a big advance that is difficult to achieve otherwise.
> The point was that it's a leaky abstraction composed of half implemented concepts
I am unclear on which part you want to criticize. The Linux kernel is what provides the user namespace feature. Systemd helps users take advantage of it. What part do you consider half-implemented or a leaky abstraction?
Please also consider whether you might have the wrong mental model for the feature or its use. In particular, the documentation for PrivateUsers= says: "This is useful to securely detach the user and group databases used by the unit from the rest of the system, and thus to create an effective sandbox environment." The usage you had in mind when you wrote your comment may not be compatible with the purpose of PrivateUsers. I recommend reading up on the systemd.exec(5) parameters before criticizing. PrivateUsers= is intended for scenarios like transient sandboxed environments, so I'd suggest we discuss a simpler example like PrivateDevices=.
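To make that concrete, here is a minimal sketch of a unit fragment using these sandboxing booleans (the daemon name is hypothetical; the directive semantics are as documented in systemd.exec(5)):

```ini
[Service]
# Hypothetical binary, for illustration only
ExecStart=/usr/local/bin/sandboxed-daemon
# Give the unit its own user namespace: only root, nobody, and the
# unit's own user/group exist inside it; all other uids/gids appear
# as nobody from within the sandbox
PrivateUsers=yes
# Mount a private /dev containing only pseudo-devices such as
# /dev/null, /dev/zero and /dev/random -- no physical device nodes
PrivateDevices=yes
```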
I'm aware of what systemd claims to be. I'm aware of the benefits that its fans claim it, and it alone, has.
My criticism is not with the Linux kernel (!?!) or user namespacing as a concept. My criticism is that systemd takes all of the rich complexity of user namespacing and, in response, adds the flag 'PrivateUsers=yes' -- a boolean. That's not what user namespacing is for and now we have two problems: systemd, which has no business making that decision and has done it the wrong way, and the continuing need, which has not been solved by a boolean flag, for daemons to have competent, complex user namespaces. Now devs have to know both ways: the half-assed way, and the real way, instead of just having a tool that gives them the real way.
That's what we in the software design business call a shitty design that would make Guy Fieri blush. But I guess we're all in Flavortown now.
As opposed to a multitude of implementations of varying quality and functionality for the same concept in each init script that needs it?
Disclosure: I liked this so much that I gave the developer money.
So, not sure what you have to do to not be 'opaque'.
But there were questionable tie-ins with various pieces like udev, ConsoleKit and even GNOME that allowed systemd to become the de facto init. The call for a kernel bus promotes a similar lock-in with systemd, and this makes the use or development of alternatives, and choice, difficult.
There are things like predictable network names, which are useful for 1% of users and anything but predictable. Binary logging makes sense for the security industry Red Hat serves, but again has no use for the other 99%, who have to put up with it anyway. There is a pattern of forcing things onto everyone that make sense only for a tiny minority.
The big problem is open source funding. No one is interested in just supporting projects they benefit from. Acquisitions, or hiring developers, put these projects and developers under the control of companies like Red Hat. Red Hat has become a cathedral, and a cathedral, by sheer size and nature, is always interested in securing and furthering its own influence and interests. When you allow such forces to become too powerful, they will subsume the public interest to their own.
I just hope Debian gets rid of systemd and returns to something else. I know I'm biased; I've just failed to find a reason to love systemd, though I've tried a few times.
I'm also rotating BSDs and open-source Solaris variants onto the home network machines. (Switching to Solaris to get rid of systemd would be overkill; I switched to Joyent Triton for better containers and ZFS...)
The split being between those that embraced Linux for being a free, in both senses, _nix unburdened by AT&T and running on commodity hardware, and those that got to know it after the dot-com crash as the L in LAMP.
The former cares for Linux as a _nix, the latter could not care less about _nix and may see it as a vestigial appendage that should have been amputated long ago.
I think, however, that you have a point in that some people ascribe some mystical properties to the design of _nix and _nix-like systems. I have never done this, perhaps because I remember the early days of OS/platform wars (Amiga or Atari? PC or console? Unix or VMS?), first on BBSes, then on Usenet. I also, early on, read the Unix-Haters Handbook, and I found it enlightening, even though it is not always spot-on.
I leave all die-hard advocates with this quote:
“All Software Sucks, All Hardware Sucks”
(Source: http://www.absurdnotions.org/page75.html. Start of storyline: http://www.absurdnotions.org/page74.html)
But, yes, I have the impression that a lot of Linux users would just as gladly use a modern BeOS or a ReactOS, regardless of the underlying design.
And a lot of systemd criticism comes from people who are just averse to any kind of change and maybe long for the bygone days of HP-UX and Irix.
I know almost nothing about systemd, but you can define environment variables inline instead of in a file: `Environment=ENV_VAR=value`.
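For example (the unit, binary and variable names below are invented), a unit can mix inline definitions with an environment file:

```ini
[Service]
# Hypothetical binary, for illustration only
ExecStart=/usr/local/bin/myapp
# Inline definitions; several assignments can share one Environment= line
Environment=ENV_VAR=value
Environment=LOG_LEVEL=debug CACHE_DIR=/var/cache/myapp
# Or read KEY=value pairs from a file; the leading '-' makes a
# missing file non-fatal
EnvironmentFile=-/etc/default/myapp
```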
And it doesn't do anything OpenRC couldn't do; in fact, its journald is a pain that had to be worked around. OpenRC's way of doing socket activation and D-Bus activation also works slightly better (in case something crashes). It does parallelism just as well, and service dependencies too.
Gentoo does not mangle anything related to systemd unlike the mentioned distributions.
Ahem, Slackware has not adopted systemd.
- Handwaves away real criticisms and instead tries to look like objective writing by presenting a trivial issue
- Has no idea about the advances in sysvinit over the last decade that enabled parallel boot and dependency relations (and which were supported natively on a mainstream distribution)
- Totally ignores the other "modern" and widely used init system and instead compares systemd with the sysvinit of the '80s (which no mainstream distro used in that form)
- Overrates process supervision features that were already just as easy to use with supervision suites like runit (IMHO even easier)
Meh, what did I expect? Bandwagon is going full speed.