We still run daemontools even on systemd systems. The OS-packaged cruft runs under systemd; everything important runs under daemontools. Still the best "service manager" out there.
Also, daemontools suffers from the same problem as other non-systemd service managers; it can't manage processes that daemonize themselves for good reason (like nginx or Unicorn). systemd can handle it because it creates a unique cgroup for every service; if the service needs to be stopped, all processes in the cgroup are terminated.
There are plenty of successors to daemontools that follow the same principles but are more advanced: s6, nosh, perp, runit, etc.
That said, from what I know, systemd doesn't necessarily handle double-forking daemons any better. That is to say, if you set the service to Type=forking, it will still need to use a PID file (with its inherent race conditions) or employ a PID-guessing heuristic, which can fail.
This is a universal problem that no hack can truly solve. The bottom line is if you're running under a service manager, you don't daemonize. You delegate to the service manager to daemonize for you. Any deviation from this will be finicky on most Unix-likes.
The purpose of cgroups here is instead process tracking, i.e. reliably killing all children. But for some services this is exactly what you don't want, and then there's nothing stopping you from allocating cgroups yourself if needed, or talking to a cgroup hierarchy management daemon like cgmanager. Ultimately, you just want some unit of isolation here, so you can use whatever your platform has, e.g. jails or contracts.
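As a rough illustration of "allocating cgroups yourself", here is a minimal C sketch, assuming a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup, sufficient privileges, and a made-up group name:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Put this process into a freshly created cgroup so that everything it
       spawns can later be tracked (or killed) by walking cgroup.procs. */
    int main(void) {
        const char *dir = "/sys/fs/cgroup/my-service";   /* hypothetical group name */

        if (mkdir(dir, 0755) == -1)
            perror("mkdir");                             /* may already exist */

        FILE *f = fopen("/sys/fs/cgroup/my-service/cgroup.procs", "w");
        if (!f) { perror("fopen"); return 1; }
        fprintf(f, "%d\n", (int)getpid());               /* move this process in */
        fclose(f);

        /* exec the real service here; its children stay in the same cgroup */
        return 0;
    }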
One little-known feature added in Linux 3.4 is the PR_SET_CHILD_SUBREAPER prctl(2) [0], which lets you designate a process as a "subreaper": orphaned descendant processes get reparented to it instead of to init, so it gets notified of (and can reap) their exits, just like init would.
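For illustration, a minimal C sketch of how a supervisor might use it (error handling trimmed; the fork/exec of the actual service is left as a comment):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Become a "subreaper": orphaned descendants get reparented to us
           instead of to PID 1, so we can wait() on them (Linux >= 3.4). */
        if (prctl(PR_SET_CHILD_SUBREAPER, 1) == -1) {
            perror("prctl");
            return 1;
        }

        /* fork()/exec() the double-forking service here ... */

        /* Reap everything that gets reparented to us, just as init would. */
        int status;
        pid_t pid;
        while ((pid = waitpid(-1, &status, 0)) > 0)
            fprintf(stderr, "reaped pid %ld\n", (long)pid);

        return 0;
    }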
As a bit of an aside, cgmanager got kinda sidelined thanks to the cgroups kernel maintainer at one point insisting on there being only one cgroup management process. And as systemd did both init and cgroup management, it ended up being favored...
It is terrible only because you say it is. It's been a massive help for us in adopting it, and accepting the help it offers makes custom software systems a lot easier to write in a robust way.
It's only "terrible" if you have an axe to grind about the "right" way and then define that as daemontools.
I use systemd to start runit, which is like daemontools. I consider daemontools/runit the right approach because they give me all the benefits I'd want out of systemd (not doing anything special to daemonize my process, supervision, log handling) while being much simpler and better tested.
The sum of init functionality needed to deliver what systemd does (and those are good things to do) is more complex than a suite of tools for launching daemons in an unhelpful environment. Sure.
I submit that universal configuration-file tooling is fantastic in a day and age when we provision and configure machines with template-based tools like salt and ansible rather than synchronized command tools like puppet.
I'm not sure I understand what you're saying. Is it that it's nice to be able to copy over a systemd service template file with ansible vs running commands to start services?
If so, how is that different from copying runit service files instead? If not, can you elaborate / reword?
Sorry, that post was on mobile. Honestly, I do not relish posting on this terrible corner of the internet. But I'll respond, and it's better late than never.
There are 2 things:
1. Systemd's boot process does more (and shows better performance and, in some cases, better resilience). So while many people complain that it is more complicated, and that is true, it is still very simple for what it is doing (which is much more than daemontools).
2. Runit and monit and the like all eventually fall back on launcher shell scripts. You do not 'copy' these over with ansible in a real operational deployment; you 'generate' them with profile-specific variables (e.g., beta vs. production, datacenter-specific values, etc.). Template generation of shell scripts is MUCH more subtly error-prone than generating configuration files with a simpler grammar and static correctness checks.
Systemd makes more functionality available as configuration options rather than via shell scripting. I think that makes it much better.
I have an axe to grind about the way it was introduced. Fedora and Red Hat massively mismanaged it. Previously, they introduced various things in tech-preview releases, and they either stayed or went with subsequent releases.
systemd was thrust upon us by an axe-grinding developer.
While I'm not a fan of systemd (the same thing can be achieved with upstart/supervisord, for example), there's just no point in running daemontools on top of the new inits.
daemontools provides: logging stdout, restarts, keeping things running. That's pretty much all. The exact same thing is already provided by systemd/upstart/supervisord, with additional benefits: resource limits, namespaces, different behaviour on repeated failures (daemontools will just keep restarting your process as fast as possible, taking up to 100% CPU and flooding the logs), and support for syslog/journal rather than just local files.
By putting daemontools on top of systemd, you're just adding a simple system on top of a complex one. You're losing features and not gaining anything in return.
You've got some of the daemontools details incorrect.
> resource limits
Daemontools includes the softlimit [1] helper.
> namespaces
It's not clear to me why this needs to be built into a supervisor instead of being applied through generic helper programs such as unshare(1).
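For example, such a helper can just be a tiny chain-loading program that the run script execs into; a minimal C sketch (mount namespace only, needs CAP_SYS_ADMIN, roughly what `unshare -m -- prog` does):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Put the service into its own mount namespace, then exec it. */
    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s prog [args...]\n", argv[0]);
            return 1;
        }
        if (unshare(CLONE_NEWNS) == -1) {    /* requires privilege */
            perror("unshare");
            return 1;
        }
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 1;
    }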
> daemontools will just keep restarting your process as fast as possible, taking 100% CPU if possible
supervise [2]: "It restarts ./run if ./run exits. It pauses for a second after starting ./run, so that it does not loop too quickly if ./run exits immediately." Simple and not configurable, sure, but should not hammer the CPU.
> support for syslog/journal rather than just local files
Daemontools' svscan [3] "optionally starts a pair of supervise processes, one for a subdirectory s, one for s/log, with a pipe between them". The "s/log" script can do anything you wish, reading logs on stdin. If you use the included "multilog" program, then it's certainly geared toward writing local files, though it does include the ability to run an arbitrary post-processor during rotation (which might, for example, fire off a job to copy the logs to an aggregator). Or you can skip multilog and just send the logs to syslog or whatever.
What I do myself is save the logs locally using s6's multilog analog, s6-log, and also pipe them to a local syslog that forwards them to an aggregator outside my control.
And, in spite of newer supervisors adding more and more features beyond what daemontools provides, daemontools is still an awesome system that was 14 years ahead of its time.
Xe also has the logging problems entirely backwards. It is systemd where one has problems with log flooding. One example of a person suffering from this is http://unix.stackexchange.com/questions/208394/ . The daemontools convention, ironically, is to have multiple separate log streams, usually one per service, which cannot flood one another.
Restart behaviour sometimes is configurable, by the way. (-:
Re. resource limits, I meant more than just softlimit: cgroup-based CPU and network throttling.
Re. namespaces - sure, it doesn't have to be built in. But many people use it and it's convenient when it is.
I used daemontools many years ago and remember fast cycling being an issue that was worked around with manual pauses in run scripts. If it was fixed later, I'm glad it works.
But my main point was that daemontools worked great when we had simpler inits. Running it with inits which could restart makes sense. Running it with modern inits just doesn't give you anything interesting apart from another idle system process.
That can't possibly work correctly with runsv, given the way in which Unicorn performs configuration reloads. Eventually the original process that runsv forked will die, and runsv will try to restart it and fail (because the original process launched a new generation that continues to listen to the socket). Then 'sv stop' will fail to work because the new generation of master and workers isn't under runsv's supervision anymore.
nginx doesn't die. It just reloads the config file and applies the changes. I can't remember what Unicorn does because we almost never change its config.
> processes that daemonize themselves for good reason (like nginx or Unicorn)
Is there a good reason? I assume you're thinking of the ability to fork and exec a new master process running, e.g., a new version of nginx, then hand control to that new master without dropping any connections, possibly even waiting around a bit in case the new master dies and the original master needs to take over again.
I suspect something like s6-fdholder-daemon [1] could be used to orchestrate a similar process, though I'm not enough of an expert to know. Instead of inheriting file descriptors through forking, an entirely separate, supervised nginx-newversion could be started, get the file descriptors from the fdholder, then coordinate with nginx-oldversion about who's going to accept new connections.
Certainly, depending on such a service is arguably more complex than just having the functionality built in. On the other hand, just as every service should not implement its own daemonization, one can argue each service should not implement its own hot restarting. If the technique were more common, the argument not to duplicate such code would be stronger.
Yes, that was exactly the idea behind s6-fdholder-daemon: set up a central server to keep fds open when you need to restart a process. The old process stores the fd into the fdholder, then dies; the supervisor starts the new process, that retrieves the fd from the fdholder, and starts serving.
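For the curious, the underlying primitive such an fd holder relies on is SCM_RIGHTS fd passing over a Unix socket. A rough C sketch of that mechanism (this is just the primitive, not s6-fdholder's actual wire protocol; retrieval is the mirror image with recvmsg()):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hand one file descriptor to another process over a connected
       AF_UNIX socket. */
    int send_fd(int sock, int fd) {
        char dummy = 'x';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
        struct msghdr msg = {0};

        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof u.buf;

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;                 /* ancillary data carries the fd */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == -1 ? -1 : 0;
    }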
And if you don't want to use a supervisor or a fdholder, you don't even need to coordinate, and you never need to fork: simply re-exec your executable with your serving socket in a conventional place (stdin is good). Daemons should be able to take a preopened listening socket and serve on it; hot-restarting is then a simple matter of one execve(). There's really no reason to make it more complicated.
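A minimal C sketch of that pattern, assuming the listening socket arrives on fd 0 and argv[0] is an absolute path to the (possibly updated) binary; signal handling and the actual serving are simplified:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static volatile sig_atomic_t upgrade = 0;
    static void on_hup(int sig) { (void)sig; upgrade = 1; }

    int main(int argc, char **argv) {
        (void)argc;

        /* fd 0 is a pre-opened listening socket, inherited from the
           supervisor or from the previous generation of this daemon. */
        struct sigaction sa = {0};
        sa.sa_handler = on_hup;
        sigaction(SIGHUP, &sa, NULL);    /* no SA_RESTART: accept() gets EINTR */

        for (;;) {
            if (upgrade) {
                /* Hot restart: fd 0 survives execv (FD_CLOEXEC is not set),
                   so the new image keeps serving the same socket. */
                execv(argv[0], argv);
                perror("execv");         /* only reached on failure */
                upgrade = 0;
            }
            int c = accept(0, NULL, NULL);
            if (c < 0)
                continue;                /* e.g. EINTR after SIGHUP */
            /* ... handle the connection ... */
            close(c);
        }
    }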
This is a nice feature of systemd. I've been using s6 quite a bit (of daemontools heritage), and often want such a feature, but part of the reason I'm using s6 is so that I can use the same run scripts in darwin and Linux, and only Linux has cgroups.
It says a lot about HN's general self-styled adherence to meritocracy that a stupid, baroque, and ill-thought-out technical decision driven by political spite is top-ranked in a systems discussion.