Make lets you specify all three kinds of dependencies. Requirement is the default, want can be had by prefixing a command with - (ignore errors), and ordering can be had by putting the prerequisites after a | (so-called order-only prerequisites).
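For example, a sketch along those lines (target and file names made up; recall that recipe lines must be tab-indented in a real Makefile):

    # prog.o is a normal prerequisite = requirement: if it fails, install fails.
    # outdir comes after | = order-only: it must exist before the recipe runs,
    # but its timestamp never triggers a rebuild of install.
    install: prog.o | outdir
        cp prog.o outdir/
        -rm -f prog.o.tmp    # leading - = want: a failure here is ignored

    outdir:
        mkdir -p outdir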
On SUSE Linux, a few versions used Makefiles to parallelize init scripts under sysvinit before the switch to systemd.
I've been using make to drive test scripts myself recently, although I often regret it because other people who edit them later tend to make quite a mess.
I wonder if any others here think the whole idea of requiring an "init system" seems overly complex and almost un-Unix-like for the average use-case of a personal Linux system, or even a single-purpose server? I've built a minimal but quite functional system (including networking, a GUI, etc.) with only "init=/bin/bash" and everything else that needs to be run at startup in /etc/profile.
In that world, the dependencies are easily expressible in the shell script itself: requirement is &&, want is ;, and ordering is simply... the order in which the commands appear in the file. It feels very much in line with the Unix philosophy. Thus the relative complexity of all the other "init systems", even the original sysvinit, seems rather excessive to me. Was it just a case of Linux/Unix originating with multiuser servers and retaining all of that baggage when it became a personal OS?
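A sketch of what that looks like (the daemon names are illustrative, not from any particular distro):

    # appended to /etc/profile: runs top to bottom, i.e. in written order
    syslogd                                        # want: a failure doesn't stop the rest
    ip link set eth0 up && dhclient eth0 && sshd   # requirement: each step needs the previous one
    startx                                         # ordering: runs last because it's written last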
> and everything else that needs to be run at startup in /etc/profile.
Those are unmanaged services: if they crash for any reason, they won't be restarted, right? For embedded use cases this is probably fine, but for a desktop it is a problem.
> In that world, the dependencies are easily expressable in the shell script itself [...] It feels very much in line with the Unix philosophy.
This is why I use runit and let the program determine its own dependencies. If it fails to start, runit will try again in 1 second... forever. This actually works great in practice, but it does present a problem for "on-demand" services.
They can still be built, but I've always avoided them because of the problems they present. For typical 'inetd' applications, I find ucspi-tcp to be an ideal solution; for everything else I try to find some reasonable compromise, since the cases where I need a truly managed set of services are very rare.
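For the record, a runit service is just a directory with a run script, so the "determine its own dependencies" part looks something like this (the service names and the readiness check are made up):

    #!/bin/sh
    # /etc/sv/myapp/run -- runit re-executes this every time it exits,
    # pausing one second between attempts
    exec 2>&1
    sv check postgres || exit 1    # dependency not up yet? exit; runit retries in 1s
    exec myapp --foreground        # must not daemonize: runit supervises this pid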
> and retaining all of the baggage from that when it became a personal OS?
It's worse in my mind: they're pantomiming the competition, which suggests a complete misunderstanding of the platform they're using.
If you want the service supervised, there's supervisord, daemontools, or while true; do flakeyserver; done.
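That last one spelled out slightly less naively (flakeyserver stands in for whatever you're running; the sleep just avoids a tight respawn loop):

    #!/bin/sh
    # poor man's supervisor: restart on any exit, but don't spin
    while true; do
        flakeyserver
        echo "flakeyserver exited with status $?, restarting" >&2
        sleep 1
    done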
On the other hand, some servers are expected to work, and if they crash, it should be investigated, because it's probably a big deal. In my experience, auto-restart goes hand in hand with never investigating why things restart. Sometimes that's appropriate (for software you basically have to run but don't control), but more often it's terrible.
1) can you use systemd to supervise a single process without adopting most of systemd?
2) based on the train of horrors that is every single network daemon they write, do you trust that their systems-level code is somehow good?
3) well, their record on systems-level code isn't that great either; the username parsing really didn't go well
4) is systemd demonstrating the engineering excellence of "as simple as possible, but no simpler"?
5) out of the box, my experience with Debian + systemd + a crashy service (from packages) was that the specific crashes I was getting didn't result in a restart, because the process got a segfault instead of simply exiting. Also, it changed the boot process so that I couldn't ctrl-c out of slow things, so that was a net negative.
This is how things used to be done in the old days, for example on SunOS 4.x, at least for any software you added to the system: you'd add a line to /etc/rc.local.
It was simple, but it was also error-prone, because you were re-inventing the logic every time. And it made it really difficult to install software automatically, because instead of dropping a file into a directory, you had to programmatically edit a shared file. Installer scripts would do all kinds of horrible things like "cat myscript >> /etc/rc.local". The situation forced software maintainers into doing something (automated editing of code) that they weren't skilled at, with tools (Unix commands) that weren't well suited for it.
Having a more data-driven, standardized approach split up into multiple files was a huge improvement from a maintainability standpoint. Yes, it's more complicated, but in my opinion the complexity was easily worth it.
With that method you can only express 1-1 relationships; real systems usually have N-M. Your method is also linear: it has to wait for one line to complete before running the next, even when they don't depend on each other. Parallel execution is one of the reasons systemd boots so much faster than its predecessor.
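To illustrate with a hypothetical unit: one unit can pull in several others, any number of units can pull it in, and anything not explicitly ordered starts in parallel:

    # app.service (sketch)
    [Unit]
    # N-M: one unit depending on several others; other units can
    # likewise pull this one in with their own Wants=
    Wants=db.service cache.service
    # ordering is declared separately from the dependency itself
    After=db.service cache.service

    [Service]
    ExecStart=/usr/bin/app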
I'm not familiar with insserv or startpar, but there is "supports" and then there is supports. Most systems support everything in the sense that they're Turing-complete, but it should also be easy to do without creating spaghetti. I'm not saying systemd is the best or the only system that can do this; it was just an example. But older ones usually make it very messy.
This is exactly how other init systems work, such as s6, which uses chain-loading with execline, or arsv's minitools init, which uses a minimal custom shell called msh.
It's one that repeats two common errors, though. The first error is obvious, as it is one of the major points claimed. The second error is forgetting that another of the actual major responsibilities of process #1 is system management, in particular various necessary initialization and finalization tasks that (pertinently for this discussion) init=/bin/bash does not perform but that proper emergency modes do.
If you want the best of both worlds, meaning supervision plus the Unix philosophy, have a look at runit and at Void Linux, a prominent user of it (Void was started by an ex-NetBSD developer). It also has full Nix package manager support, although I have yet to try that.
I saw the original article when it came up at https://news.ycombinator.com/item?id=18303019 a couple of days ago. M. Siebenmann is articulating part of one of the points that I was going to raise.
For what it's worth, the nosh doco does try to articulate the difference between the dependencies of wants/, conflicts/, et al. and the ordering relationships of before/ and after/. The former control the extent of the set of jobs that system-control constructs for an action, whilst the latter control the order that the jobs in that set are then executed in.
If you reread the opening paragraph, the author is responding to part of a blog post which questioned why a separation between ordering and dependencies was necessary, and provides an example.
While on the topic of underspecified dependencies, another thing usually missed when writing init dependencies is that it's not enough to just start the service being depended on; that service usually has to perform some actions before the dependency is actually fulfilled. This can sometimes take several seconds, and many services don't report back to the init system when they're done.
With wants dependencies this isn't so critical, but if you have requires it must be factored in.
Which is why systemd discourages daemonizing in the service, and introduces both socket units (so that you can always connect to the server socket; you do not need to wait for the child to call bind(2)) and a simple notification protocol.
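A sketch of the two mechanisms together (unit and binary names hypothetical):

    # foo.socket -- systemd binds the socket itself, so clients can
    # connect before foo.service has even been started
    [Socket]
    ListenStream=/run/foo.sock

    # foo.service -- Type=notify: the unit only counts as started once
    # the daemon sends READY=1 over $NOTIFY_SOCKET (see sd_notify(3))
    [Service]
    Type=notify
    ExecStart=/usr/bin/food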
Socket activation is nice for certain things, like web servers. Unfortunately, for other services socket activation can cause you far more headaches. For instance, Docker's default configuration used to be socket-activated under systemd; this was changed because it meant that, on reboot, you wouldn't have any containers running until someone tried to run an administrative command (one that used docker.sock).
Socket activation is meant more to allow for on-demand starting of services without losing connections, and the design shows that this is the main purpose. For many services, "having a connection" doesn't mean the service is ready yet.
I think that having a notification system for a service to tell the init system it is ready would be nice, though the "obvious" way of doing it (rt signals) wouldn't work for unprivileged services.
But in systemd there's no reason you can't have a service both enabled on boot and socket-activated. It's a little redundant, but it can be good for catching some corner cases: you can always issue docker commands even if Docker isn't ready yet.
Systemd can wait for D-Bus services to be fully ready instead of just the process starting. IIRC there is also a way to use the watchdog APIs for this if you have some kind of IPC other than D-Bus.
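That is, something like (bus name hypothetical):

    [Service]
    # counts as started only once the daemon takes this D-Bus name
    Type=dbus
    BusName=org.example.Foo
    # and/or: daemon must send WATCHDOG=1 via sd_notify within each interval
    WatchdogSec=30
    ExecStart=/usr/bin/food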
The socket stuff isn't much of a win for correctness, though; in fact it has new failure modes. It is one of those things that is confidently advertised as a great benefit of systemd, but it is actually a quite subtle matter and requires thinking. (Remember inetd?)
What if something else relies on the socket being open to mean that the service is fully running including e.g. side effects on disk? What if a service made on-demand with socket activation is supposed to have low latency on the first request? There is a reason why inetd disappeared long before systemd appeared. The saved memory (that is how long ago it was) wasn't worth the brittleness and complication anymore.
Then there are cases where a service is intentionally shut down for a critical operation of some kind, but a client connects to the socket, auto-starting the service in the middle of the critical operation.
Debian hit this some years ago in its conversion to systemd. It took the existing van Smoorenburg rc notion of temporarily stopping services during a package upgrade and translated that in the straightforward manner to systemd units. People started hitting services that got restarted in the middle of a package upgrade, causing various unwanted side effects. So the Debian mechanism had to be changed to detect the existence of systemd and stop/start the socket units as well as, or sometimes instead of, the service units.
Package upgrades are not the only places where one hits this. It is important that socket/path/automount/timer units be stopped as part of the shutdown process, not just service units, otherwise one can end up with shut down services being started back up by the sockets/paths/automounts/timers late in the shutdown process, potentially deadlocking it, and at least lengthening it by a significant amount.
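In other words, a clean stop has to take the triggering unit down too (unit names hypothetical):

    systemctl stop foo.service               # not enough: the socket stays armed
    systemctl stop foo.socket foo.service    # the trigger goes down with the service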
> So the Debian mechanism had to be changed to detect the existence of systemd and stop/start the socket units as well
Huh. You’d think the more correct thing to do would be to do “systemctl --runtime --now mask $service; $DO_THING; systemctl --runtime unmask $service; systemctl start $service”. Or, even more correct, first detect if the service is running, then do the above, except only start the service again if it was running previously.
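A sketch of that (untested; $service and $DO_THING as above):

    # remember whether the service was running, mask it so that nothing
    # (socket activation included) can start it mid-operation, do the
    # work, then restore the previous state
    if systemctl is-active --quiet "$service"; then was_active=1; fi
    systemctl --runtime --now mask "$service"
    $DO_THING
    systemctl --runtime unmask "$service"
    [ -n "$was_active" ] && systemctl start "$service"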
It isn't, of course, for the reason already given.
In fact, the Debian people created two extra programs named deb-systemd-helper (with the same enable/disable/mask/unmask syntax as systemd's own systemctl) and deb-systemd-invoke (with the same start/stop/restart syntax as systemd's own systemctl) for performing all of these actions over package upgrades, employing a parallel set of service symbolic link farms.
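So a maintainer script ends up doing things like this (package unit name hypothetical):

    # deb-systemd-helper tracks the enable state in its own link farm;
    # deb-systemd-invoke honours policy-rc.d before starting/stopping
    deb-systemd-helper enable foo.service
    deb-systemd-invoke stop foo.service
    deb-systemd-invoke start foo.service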