Regarding your last point, the main thing that systemd excels at is managing the mishmash of power events and other ACPI events, service management, and integration with DBus (which everything seems to require these days).

However, I run Linux in three locations and here's where systemd fits for me:

1. Servers: No use whatsoever. There are two power states: off and on. None of this is really an improvement over SVR4 init. It's just another damn tool to learn.

2. Desktops: No use whatsoever. I run mine like servers simply because if I need remote access, trusting stuff like WOL and ACPI is just silly on Linux.

3. Laptops: No use whatsoever. This is simply down to the fact that power management on Linux, i.e. hibernate/sleep support, is a bag of shit. I run mine with power states off and on, much as in 1 and 2.

Regarding boot speed - my laptop boots in about 14 seconds (SSD). Who cares about making it faster?

I find that a shell script is far more useful: it doesn't enforce constraints on you that then have to be added back to systemd as features. Plus, shell scripts are generic tools; learning how to write them will be of more use globally than learning systemd's guts.

So basically, screw it.

The problem this outlines is that Linux's power management and ACPI event/state handling are crappy. We don't need another layer of crap on top of it to make it work properly.

-----

To me systemd looks mainly useful for servers. Some of the nice server features:

1. It is way easier to write a systemd unit for an in-house application than it is to write a correct sysvinit script (a minimal sketch follows this list).

2. Everything runs in a cgroup, making it easier to add resource limits and to see which process was started by which service.

3. Easy to check which daemons are running and not.

4. Reduced startup time for containers.
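
To make (1) concrete, a minimal unit for a hypothetical in-house daemon might look roughly like this (the name, paths, and user are invented for illustration):

  [Unit]
  Description=Example in-house daemon (hypothetical)
  After=network.target

  [Service]
  ExecStart=/usr/local/bin/myappd --config /etc/myapp.conf
  User=myapp
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target

Compare that with the pidfile handling, start/stop/restart/status cases, and LSB headers a correct sysvinit script needs.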

-----


Regarding (1), I'll echo everything meaty said, and add that basic shell scripting is a valuable and generally useful skill to have anyway, so I'm skeptical of the benefits of replacing shell scripts with another, slightly different configuration file format.

For example: we use backuppc for automated network and remote server backups. The backuppc data volume is a TrueCrypt-encrypted drive. Creating an init script that checked to make sure that the volume was present and mounted before launching the backup daemon was fairly straightforward. I understand that I could still do this in systemd, but the catch is that I would have to do it with a shell script -- the same way I am now, except for a different, alien interface -- because the native system doesn't have support for things like that. (This is assuming that some combination of unit config parameters couldn't do it; the commands for finding the drive and checking its status were a little fiddly, and I honestly haven't tried to do this the systemd way. Still though: once you understand shell scripting, you can do anything on Linux.)
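
For what it's worth, the guard itself is only a few lines of shell either way; a rough sketch of the kind of check described above (the path is made up):

  # refuse to start the backup daemon unless the encrypted volume is mounted
  if ! mountpoint -q /var/lib/backuppc; then
      echo "backuppc volume is not mounted; aborting" >&2
      exit 1
  fi

Under systemd, the same script could presumably be hung off ExecStartPre=, which is exactly the "shell script behind a different, alien interface" situation described above.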

-----


I disagree.

1. Not really. Most sysadmins have a decent template init file sitting around, or you just steal another one from the machine and modify it. Init scripts are also more flexible, as some startup processes aren't quite as simple as systemd assumes (consider lockfiles, temp-data purging, permissions, etc.; the sort of thing sketched after this list).

2. ulimit and SELinux already do this per process. cgroups, whilst funky-looking, are YET ANOTHER disposable mechanism which will no doubt get canned in 5 years, like ipchains, ipfwadm, etc.

3. service --status-all

4. Concurrent startup, yes. That really doesn't make much difference on a server with large IO and CPU capacity.
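
To illustrate the fiddly startup steps mentioned in (1), a hand-written init script just does its housekeeping inline. A sketch (paths and user invented for the example):

  # clear a stale lockfile left over from a crash
  rm -f /var/run/myappd.pid

  # purge temp data from the previous run
  rm -rf /var/cache/myapp/tmp

  # recreate the runtime directory with the right ownership
  install -d -o myapp -g myapp /var/run/myapp

  # drop privileges and start the daemon
  su -s /bin/sh -c '/usr/sbin/myappd' myapp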

-----


1. I personally do not like copy-pasting templates, especially when it turns out there was a bug in the copy-pasted template. So maybe this is just a question of personal preference.

2. cgroups can do way more than ulimit (see the sketch after this list).

3. True

4. I have not used enough containers to know how much it matters in practice.
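
On (2), the practical difference is that ulimit is per-process and inherited, while systemd's cgroup-backed limits apply to every process in the unit. A hedged sketch (values are arbitrary; these are standard systemd directives, though some require a reasonably recent systemd):

  [Service]
  MemoryMax=512M   # cap total memory across all processes in the unit
  CPUQuota=50%     # cap aggregate CPU time for the whole service
  TasksMax=128     # cap how many tasks the service may spawn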

-----


> 3. Easy to check which daemons are running and not.

How? This is one of the things that has really annoyed me, as all the systemd-managed services have disappeared from service --status-all.

-----


Have you tried either

  systemctl | grep running

or

  systemctl --type=service

The first one gives you a list of all systemd services/sockets currently running; the second one gives you a list of all systemd services, both running and exited. Or you can use

  systemctl --type=service | grep running

for the best of both worlds.
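
If your systemctl is recent enough to support --state (an assumption about the systemd version you're running), you can skip the grep entirely:

  systemctl list-units --type=service --state=running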

-----


htop?

I'm not a Linux admin anymore, but when I was, I used the various service-status tools only to find out what the system thought the service's state was, because they tend to be wrong and just annoy you (not starting a crashed server because it "is already running", or something similar). I think Debian doesn't even track service states, but I'm not sure.

Of course, with systemd it will be even more fun.

-----


systemd tracks service status far more reliably than sysvinit, and it also stops services more reliably. This is due to its use of cgroups.
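
Concretely, even a daemon that double-forks leaves its children inside the unit's cgroup, so systemd can still find and stop them. A quick sketch (the unit name is just an example):

  # list every process in the unit, including double-forked children
  systemd-cgls /system.slice/nginx.service

  # send SIGKILL to every process in the unit's cgroup
  systemctl kill --signal=SIGKILL nginx.service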

-----


+1 for htop. It's actually really useful and intuitive.

-----


Learning new tools is bad?!

-----


Sure.

Learning a new tool that teaches you nothing new, except how to use that particular tool, is mildly bad: it wasted your time.

-----



