However, I run Linux in three locations and here's where systemd fits for me:
1. Servers: No use whatsoever. There are two power states: off and on. None of this is really an improvement over SVR4 init. It's just another damn tool to learn.
2. Desktops: No use whatsoever. I run mine like servers simply because if I need remote access, trusting stuff like WOL and ACPI is just silly on Linux.
3. Laptops: No use whatsoever. This is simply down to the fact that the power management on Linux i.e. hibernate/sleep support is a bag of shit. I run mine as power states off and on, much as 1 and 2.
Regarding boot speed - my laptop boots in about 14 seconds (SSD). Who cares about making it faster?
I find that a shell script is far more useful: it doesn't enforce constraints on you which then have to be added back to systemd as features. Plus shell scripts are generic tools; learning how to write them will be of more use globally than learning systemd's guts.
So basically, screw it.
The problem that this outlines is that Linux's power management, event and ACPI state management is crappy. We don't need another layer of crap over the top of it to make it work properly.
1. It is way easier to write a systemd unit for an in house application than it is to write a correct sysvinit script.
2. Everything runs in a cgroup making it easier to add resource limits and see which process is started by which service.
3. Easy to check which daemons are running and not.
4. Reduced startup time for containers.
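To make point 1 concrete, here is a minimal unit for a hypothetical in-house daemon (the name, path, and limit are made up for illustration); point 2's resource limits ride on the same declarative config:

```ini
# /etc/systemd/system/myapp.service  (hypothetical service)
[Unit]
Description=In-house application
After=network.target

[Service]
ExecStart=/opt/myapp/bin/myapp
Restart=on-failure
# cgroup-backed resource limit, per point 2:
MemoryMax=512M

[Install]
WantedBy=multi-user.target
```

Compare that to a correct sysvinit script, which needs start/stop/restart/status cases, pidfile handling, and daemonization logic written by hand.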
For example: we use backuppc for automated network and remote server backups. The backuppc data volume is a TrueCrypt-encrypted drive. Creating an init script that checked to make sure that the volume was present and mounted before launching the backup daemon was fairly straightforward. I understand that I could still do this in systemd, but the catch is that I would have to do it with a shell script -- the same way I am now, except for a different, alien interface -- because the native system doesn't have support for things like that. (This is assuming that some combination of unit config parameters couldn't do it; the commands for finding the drive and checking its status were a little fiddly, and I honestly haven't tried to do this the systemd way. Still though: once you understand shell scripting, you can do anything on Linux.)
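The check described above really does stay in shell either way. systemd could invoke it via ExecStartPre=, but the test itself is still a script. A minimal sketch, assuming the backup volume's mount point (the path and gating logic are hypothetical; `mountpoint` is part of util-linux):

```shell
#!/bin/sh
# Hypothetical pre-start check: only launch the backup daemon if the
# encrypted volume is actually mounted.

vol_ready() {
    # Succeed only if the given directory is an active mount point.
    mountpoint -q "$1"
}

# Demo gate, using / as a stand-in for the real backup volume:
if vol_ready /; then
    echo "volume mounted, safe to start daemon"
fi
```

Whether this runs from an init script or from ExecStartPre=, the fiddly part is identical shell either way.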
1. Not really. Most sysadmins have a decent template init file sitting around, or you just steal another one off the machine and modify it. Init scripts are also more flexible, as some startup processes aren't quite as simple as systemd assumes (consider lockfiles, temp data purging, permissions, etc.).
2. ulimit / SELinux already do this per process. cgroups, whilst funky-looking, are YET ANOTHER disposable mechanism which will no doubt get canned in 5 years, like ipchains, ipfwadm, etc.
3. service --status-all
4. Concurrent startup, yes. That really doesn't make much difference on a server with ample I/O and CPU capacity.
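Point 1's flexibility claim is easy to illustrate: the site-specific housekeeping that a hand-written init script does before launch (stale locks, temp purging, permissions) is exactly what a declarative unit file pushes back out into shell anyway. A sketch with hypothetical paths:

```shell
#!/bin/sh
# Pre-start housekeeping of the kind a template init script handles.
# Daemon name and paths are hypothetical.

# Succeed if the pidfile exists but its process is gone (stale lock).
stale_pidfile() {
    [ -f "$1" ] && ! kill -0 "$(cat "$1")" 2>/dev/null
}

start_prep() {
    pidfile="$1"; tmpdir="$2"
    # Clear a stale lock left by a crash instead of refusing to start.
    if stale_pidfile "$pidfile"; then
        rm -f "$pidfile"
    fi
    # Purge temp data and reassert permissions before launch.
    rm -rf "$tmpdir"
    mkdir -p "$tmpdir"
    chmod 0750 "$tmpdir"
}

# Example (hypothetical paths):
# start_prep /var/run/mydaemon.pid /var/tmp/mydaemon
```

None of this needs a new tool; it's the same generic shell knowledge you already use everywhere else on the system.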
2. cgroups can do way more things than ulimit.
4. Have not used enough containers to know how much it matters in practice.
How? This is one of the things that has really annoyed me, as all the systemd services have disappeared from service --status-all.
systemctl | grep running
systemctl --type=service | grep running
I'm not a Linux admin anymore, but when I was, I used various service-status tools only to find out what the system *thinks* is running, because they tend to be wrong and just annoy you (refusing to start a crashed server because it "is already running", or something similar). I think Debian doesn't even track service states, but I'm not sure.
Of course, with systemd it will be even more fun.
Learning a new tool which doesn't teach you anything new, except how to use this particular tool, is mildly bad: it wasted your time.