1. It is way easier to write a systemd unit for an in-house application than it is to write a correct sysvinit script.
2. Everything runs in a cgroup, making it easier to add resource limits and to see which process was started by which service.
3. Easy to check which daemons are running and which are not.
4. Reduced startup time for containers.
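To illustrate point 1, a minimal unit for a hypothetical in-house daemon might look like this (the name and paths are made up):

```ini
# /etc/systemd/system/myapp.service -- illustrative example
[Unit]
Description=Example in-house application
After=network.target

[Service]
# Run in the foreground; systemd handles daemonization and supervision
ExecStart=/opt/myapp/bin/myapp --foreground
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

Compare that with the pidfile handling, start/stop/status case statement, and LSB headers that a correct sysvinit script needs.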
For example: we use backuppc for automated network and remote server backups. The backuppc data volume is a TrueCrypt-encrypted drive. Creating an init script that checked to make sure that the volume was present and mounted before launching the backup daemon was fairly straightforward. I understand that I could still do this in systemd, but the catch is that I would have to do it with a shell script -- the same way I am now, except for a different, alien interface -- because the native system doesn't have support for things like that. (This is assuming that some combination of unit config parameters couldn't do it; the commands for finding the drive and checking its status were a little fiddly, and I honestly haven't tried to do this the systemd way. Still though: once you understand shell scripting, you can do anything on Linux.)
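For what it's worth, a unit file can express part of that mount check declaratively, though anything fiddlier still ends up in a shell script called from the unit. A sketch, with illustrative paths and a hypothetical helper script:

```ini
# Hypothetical sketch -- paths are made up
[Unit]
Description=BackupPC daemon
# Refuse to start unless the data volume is actually mounted
ConditionPathIsMountPoint=/var/lib/backuppc
RequiresMountsFor=/var/lib/backuppc

[Service]
# Anything the directives can't express goes back into a script anyway
ExecStartPre=/usr/local/sbin/check-backup-volume.sh
ExecStart=/usr/share/backuppc/bin/BackupPC -d
```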
1. Not really. Most sysadmins have a decent template init file sitting around, or you just steal another one on the machine and modify it. They are also more flexible, as some startup processes aren't quite as simple as systemd thinks (consider lockfiles, temp data purging, permissions, etc.).
2. ulimit / SELinux work per process. cgroups, whilst funky-looking, are YET ANOTHER disposable mechanism which will no doubt get canned in 5 years, like ipchains, ipfwadm, etc.
3. service --status-all
4. Concurrent startup, yes. That really doesn't make much difference on a server with large IO and CPU capacity.
2. cgroups can do way more things than ulimit.
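For instance, systemd exposes cgroup controls as unit directives that apply to the service as a whole (every process it spawns), which ulimit's per-process model can't express. A sketch, with an illustrative service path:

```ini
[Service]
ExecStart=/opt/myapp/bin/myapp
# cgroup-backed limits: these cap the whole service,
# not each process individually the way ulimit does
CPUShares=512
MemoryLimit=512M
BlockIOWeight=500
```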
4. Have not used enough containers to know how much it matters in practice.
How? This is one of the things that has really annoyed me, as all the systemd services have disappeared from the output of service --status-all.
systemctl | grep running
systemctl --type=service | grep running
I'm not a Linux admin anymore, but when I was, I used various service-status software thingies only to find out whether the system thinks the service is running, because they tend to be wrong and just annoy you (refusing to start a crashed server because it "is already running", or something similar). I think Debian doesn't even track service states, but I'm not sure.
Of course, with systemd it will be even more fun.
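The stale-pidfile failure mode described above is easy to reproduce. A minimal sketch (function names are made up, and the naive check mimics what many init scripts actually do):

```shell
# Why a pidfile-based "status" check can lie: if the daemon crashed
# without removing its pidfile, a check that only tests for the file's
# existence reports "running" anyway.

# Naive sysvinit-style check: pidfile exists => "running".
naive_status() {
  [ -f "$1" ] && echo running || echo stopped
}

# More robust check: verify the recorded PID actually exists.
robust_status() {
  if [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null; then
    echo running
  else
    echo stopped
  fi
}

# Simulate a crashed daemon: pidfile left behind, process long gone.
pidfile=$(mktemp)
echo 99999999 > "$pidfile"   # PID above the usual pid_max, so no such process

naive_status "$pidfile"      # prints "running" (wrong)
robust_status "$pidfile"     # prints "stopped"
rm -f "$pidfile"
```

Even the robust version can be fooled by PID reuse after a reboot, which is part of why supervising the process directly (as systemd does) sidesteps the whole problem.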