Are they though? Last time I played with them I had to create/edit two text files instead of one (service file and timer file), figure out if I needed to enable and start them both, figure out what to do if I want to temporarily disable the timer (systemctl stop? disable?), etc.



Which is much easier than fiddling around with crontab, in my experience.

Yes, you need a systemd service file and a systemd timer file. Creating both is a matter of ~30m. Then you do `systemctl start` and you're good... As a bonus, you can see the status of each timer, which is great for quickly debugging problems like permission issues/runtime errors/... when setting the timer up.
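
For illustration, a minimal pair could look something like this (unit names and the script path are made up):

    # backup.service
    [Unit]
    Description=Nightly backup job

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # backup.timer
    [Unit]
    Description=Run backup.service nightly

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target
Drop both into /etc/systemd/system, start the timer, and that's the whole setup.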


> Creating both is a matter of ~30m.

I can write a cron entry in a minute or two, depending on the complexity.
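
For the simple cases that's fair, e.g. (script path made up):

    # crontab -e: run the job every night at 02:00
    0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1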


Well, it takes 30m the first time, when learning from scratch. Learning the arcane cron entry syntax, I'd guess, takes even longer than 30m, unless you have a simple schedule in mind (like "every minute" or something similar - "every second Monday" is definitely more complex to reason about in cron). Again, this is comparing not knowing systemd timers vs not knowing cron.
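
To make that concrete (reading "every second Monday" as the second Monday of the month; unit and script names are made up): classic cron ORs the day-of-month and day-of-week fields when both are restricted, so you end up testing the weekday in the command, while the timer states it directly:

    # cron: restricting both date fields ORs them, so guard with a weekday test
    0 9 8-14 * * [ "$(date +\%u)" = 1 ] && /usr/local/bin/report.sh

    # systemd, in report.timer (verify the expression with systemd-analyze calendar)
    OnCalendar=Mon *-*-08..14 09:00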


Sure, not saying you can just sit down and do it straight away. But the additional complexity of systemd, I think, makes it even more daunting. It’s much like how modern frontend has a million components. I understand there are or were reasons for all of it, but not everyone needs it, and it’s intimidating to a newcomer.

In comparison, cron (and *nix tooling in general) asks you to learn a small bit of that tool’s syntax, and that’s it. You can add complexity if you’d like – read the man pages – but for the most part, you can be productive very quickly.


start and enable, or they won't come up after reboot or target change


Handy shortcut:

   systemctl enable foo.timer --now
To enable and start the unit in one command.


That’s true. But once you get it set up, you can check the status, see when it last ran, when it will run next, the logs from the last run, whether it failed last time and why, etc. You can stop it, start it, restart it, disable it.
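
For example (timer name made up):

    systemctl list-timers               # last run / next run for every timer
    systemctl status backup.timer       # is the timer active?
    systemctl status backup.service     # result of the most recent run
    journalctl -u backup.service -n 50  # logs from recent runs
    systemctl stop backup.timer         # pause it (disable to keep it off after reboot)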

Simeone just needs to write some CLI for making really simple timers.


Don't forget one of the best reasons timers are great: they won't cause long-running jobs to overlap. I can't tell you how many systems I've had to unbreak because a job took too long and spiraled out of control.
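
With cron the usual workaround is wrapping the job in flock yourself (paths made up):

    */5 * * * * flock -n /var/lock/doit.lock /usr/local/bin/doit.sh
A timer gets this for free: it only activates its service, and a oneshot service that's still running won't be started a second time.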


> Simeone just needs to write some CLI for making really simple timers.

That's a post-it or a simple-systemd-timers.txt though. Don't bother Simeone with that :).


lol. I must have been on mobile.


Open a terminal and then in bash:

    while true; do
        date;
        ./doit.sh;
        sleep 300;
    done | tee doit.log
if you want to get fancy, you can do:

    date "+%Y.%M.%dT%H:%m:%s-%Z`./doit.sh`"
to get it all on one line.


For doing things like this I like using the "watch" command.
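
E.g.:

    # re-run doit.sh every 300 seconds, showing the latest output full-screen
    watch -n 300 ./doit.sh
Though watch keeps no history, so it's closer to a monitor than a scheduler.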


Yeah, it is almost at feature parity. The following bloat is not needed at all:

1. Ensure commands keep running after you close the terminal you just opened.

2. Trigger actions when the process fails (OnFailure=)

3. Ensure the scheduled job survives system restarts

4. Have proper scheduling, and handling of schedules missed due to downtime (e.g. for notebooks)

5. Run the process in cgroups, under another user's context, or in a chroot, etc.

6. Apply randomized jitter to the schedule to avoid creating overload and the cascading failures that result from it

7. Time out tasks

8. Handle service dependencies...

just to name a few off the top of my head...

cron is great, and systemd timers are a good, even more refined solution to the task-scheduling problem. Your proposal does not solve even the basics (a configurable schedule). As another comment notes, your script is more like the watch command.
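
Most of these map to a single directive in the unit files. A rough sketch, not a drop-in config (unit names, including the alert@.service notification template, are made up):

    # doit.service (covers 2: OnFailure, 5: User, 7: TimeoutStartSec, 8: dependencies)
    [Unit]
    OnFailure=alert@%n.service
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    User=batch
    TimeoutStartSec=30min
    ExecStart=/usr/local/bin/doit.sh

    # doit.timer (covers 4: Persistent, 6: RandomizedDelaySec)
    [Timer]
    OnCalendar=hourly
    Persistent=true
    RandomizedDelaySec=5min
Items 1 and 3 come for free just from running under systemd instead of inside a shell loop in a terminal.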


Yes, if you actually engage and stop playing helpless. The system standardizes things one shouldn't have had to do by hand for decades.


Can't agree more! Having multiple files is almost never good for these types of things.


Yeah. As a bonus, you get a lot of generic tools that work with systemd, like journald and other systemd-family utilities. This lets you log, group jobs, and investigate failures, all with better ergonomics, and easily turn it into production-grade automation if necessary.

There's really no reason to use cron today if you have systemd.


I feel like I’m taking crazy pills. “A lot of other generic tools?” You mean like how every *nix tool works for crons, because they’re text files, and *nix tooling operates on text files?

There’s a huge reason to use cron: it’s dead simple, doesn’t require a ton of complexity around it to function, and the syntax hasn’t changed in decades.


No, I don't mean like how every *nix tool works. I mean tools that are specifically designed to work with the data generated and stored by systemd. To give some examples: stdout from the process run by systemd is automatically collected, timestamped, rotated and interactively paged by journald. journald in turn allows various filtering of the stored logs. On top of this, systemd can be configured to aggregate logs over networks of computers. It has exporters into popular log databases that allow for easy integration with other data collection tools.
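
For instance (unit name made up):

    journalctl -u doit.service --since yesterday  # one job's logs, filtered by time
    journalctl -u doit.service -p err             # only error-priority messages
    journalctl -u doit.service -o json-pretty     # structured output for other tooling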

In a sense, you could say that the Unix shell, or other tools like pipes or redirects, constitute an interface that allows generic cooperation between tools written without knowledge of the other tools. So, you aren't too far off. However, the tradeoff here is that with very few, very generic tools you must give up more complex things. E.g. you have to concede that everything is a string. When you can confine the interfaces to a smaller group of tools, you can take better advantage of the data they communicate (e.g. compared to passing data through pipes, where you have to serialize to string and parse from string data that isn't inherently a string, you can work with structured data with many useful types).

> There’s a huge reason to use cron: it’s dead simple

systemd timers are still simpler. I.e. cron isn't complicated, but it has too many tools scattered all over the place, many arbitrary conventions, and its failures are difficult to diagnose. And that's OK. It was written at a time when nobody had even thought about systemd, so they had to do some of that work themselves. Today, it's just an anachronism ;)


+100 if I could. Reasons why I generally don't cron:

    * I don't want to rewrite log or dependency management
    * I *really* don't want to troubleshoot when it fails to function, presumably the work was important
    * I barely want to set the job up at all! What service is deficient to beget this babysitting?
Most of my automated jobs do work that depends on other things. Services, mounts, or even the novel idea of a functional network. Things managed and exposed by the manager, systemd.

Go forth and recreate 'After=' and 'Requires=' for your cronjob and make it equally visible and robust, if you want, I guess. A command pipeline of all the right things and intimate knowledge/trust in their return codes will do. I'll go do something more productive or at least rewarding.
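
For concreteness, the systemd side of that is a few lines in the service unit (the mount point and script path are placeholders):

    [Unit]
    Wants=network-online.target
    After=network-online.target
    RequiresMountsFor=/srv/data

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/nightly-sync.sh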

Embrace the two files, template them, and never mind them again. I promise it can be worth it. I'd argue the Unix Philosophy is paradoxical, if not simply argued in uncooperative/unproductive ways. Idealistic and impractical if lived truly.

Doing 'one thing' and doing it well actually requires doing the thing and telling others how it went... so really it's two things conflated, if not more! Enter 'systemd does too much': it's for our benefit, truly.

It's not a complete endorsement, of course. There are plenty of things I think are underdeveloped or out of bounds: spare the sniping


Grandstanding:

Pit me against some legacy holdout. Give us both a set of services and complex work to do in a real-world, representative environment. I don't care what it is.

I guarantee my setup using "timers" will be delivered earlier and prove more robust; it's already been written and battle-tested. It's ready as soon as I can SSH in or clone my repository.

I just hear a bunch of self-selecting [read: made up] surface-level complaints. Why is anyone editing these jobs so much that they care whether it's one file or two? Why not zero? Around two decades after 'cron', the concept of 'Configuration Management' was established.

What's more, interfacing to disable/stop on the regular isn't required with dependencies or relationships defined; the result is implicit. I didn't even mention 'PartOf=' or 'OnFailure=', more settings that allow for handling external influence.

The system will work for you, let it. I've soared through the ranks with this one simple trick: adaptation. The work isn't as unique as it is presented.



