Locality of behavior is important. Note how this systemd-thingy is split into two separate files? It's like that a lot in systemd: whoever is doing the architecture doesn't seem to understand why locality of behavior is desirable, and instead takes the IDE approach of writing a bunch of tools to manage the increased complexity. Don't glance at a crontab to see when things will next execute; run some other tool that will inspect everything for you. Make things more complex and difficult to read, and then write tools to make them easier to read again.
They could have just added a "periodic" service type that accepts timer options, and at least then it wouldn't have to be two files.
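For illustration, a combined unit under such a proposal might look something like this (a sketch only; this "periodic" type and inline scheduling are hypothetical, not valid systemd syntax today):

```ini
# backup.service -- hypothetical syntax, does not exist in systemd
[Unit]
Description=Nightly backup

[Service]
Type=periodic
OnCalendar=*-*-* 02:00:00
ExecStart=/usr/local/bin/backup.sh
```

One file, one job, and `systemctl start backup.service` would presumably still work for ad hoc runs.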
Most of the advantages really don't seem like advantages to me. Templated unit files seem especially silly, since they're equivalent to just copying and pasting a line in your crontab and changing an argument.
Seems to work for a lot of people, it's just not to my tastes I guess. Still, I find the whole thing pretty mystifying.
Even your own link talks about the subjectivity of that opinion, and how LoB is often in conflict with DRY and SoC, both of which I support far more than LoB.
In this context systemd favors Separation of Concerns: the service is what is being run, and the timer is when it is being run...
it is not that the devs do not "seem to understand why locality of behavior would be desirable"; it is that they disagree it is more desirable than a logical separation of concerns...
To me, greater separation of concerns is preferable to locality
I think that's probably a trade off that large enterprises often make. It tends to work well for very large teams, and especially if you're trying to make your tech team fungible. Still, there's such a thing as taking it too far.
I think there's a middle ground. Sure, a single crontab makes it hard for packages to schedule things and makes it hard to assign ownership to individual packages/people/whatever. I can recognize that, and how it would be a problem for a large enterprise or even just for software developers who want to bundle some kind of cronjob with their package.
What you could do instead of a crontab is have one folder with a bunch of services/timers in it, where each file represents a single service that needs to be run. It would mean that you can still understand all of what's going on timer wise by looking carefully at one source, but you can still assign different services/timers to different people/packages/organizational-units/whatever.
Splitting each individual cronjob into 2 different files is just crazy from where I stand. You already have a bunch of different service types; at least give us the option to use a "periodic-oneshot" service type instead of this craziness. There's no need to have different packages/people responsible for a service and its timer, and if there is some edge case where there is, you can still have separate timers; they can just live in a separate periodic-oneshot service file and use the original service as a dependency.
There's separation of concerns and then there's this. Take any principle too far and you get some craziness, and systemd has most of the infrastructure needed to solve this in a much more elegant way. Like at some point it has to just be bad design and not just different priorities, right?
>>Sure, a single crontab makes it hard for packages to schedule things and makes it hard to assign ownership to individual packages/people/whatever.
and makes it hard for admins to schedule things, and makes it hard to figure out what is scheduled when, and makes it hard to test whether the process will work under the schedule.
I absolutely hate cron's format, and pretty much everything else about it. systemd timers are a WONDERFUL and absolutely needed change to the world of linux administration.
cron was written in a time when verbosity had actual costs to the system. We are not in those times; the need to be nostalgically cryptic and use the minimum number of chars is over, and I for one am glad for it.
Both in programming and administration people need to be MORE VERBOSE not less
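For anyone comparing the two notations, here is roughly the same schedule in each (the script path is made up):

```
# cron: minute hour day-of-month month day-of-week command
30 4 * * 1  /usr/local/bin/task.sh

# the same schedule as a systemd calendar expression (in a .timer file):
OnCalendar=Mon *-*-* 04:30:00
```

The systemd form spells out the weekday and reads as a calendar expression, which is the kind of verbosity being argued for here.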
Well I'm not complaining about systemd's verbosity here, I generally agree on that point. Nor am I saying cron is particularly good, I think there's definitely room for improvement.
What I am saying is that splitting a basic timer between two different files is really bad for readability and understandability.
you are splitting a service and a timer; not all services need a timer. I think that is the point being glossed over
services may be called by a target, a timer, or manually. or all 3 when the need arises.
So for example if I have a process that I need to run on startup, every Friday, and sometimes adhoc I can create a single service file and do all 3, and know it is executed in the same way every time
You could not do that with cron or even older init systems
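Concretely, that pattern looks something like this (unit and script names are made up):

```ini
# report.service -- the single definition of what runs
[Unit]
Description=Generate weekly report

[Service]
Type=oneshot
ExecStart=/usr/local/bin/report.sh

[Install]
WantedBy=multi-user.target        # run once at boot when enabled

# report.timer -- when it runs; activates report.service by name
[Timer]
OnCalendar=Fri *-*-* 06:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Ad hoc runs are then just `systemctl start report.service`, and all three paths execute the exact same [Service] definition.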
Right, but you basically never manually run a service that's run by a timer, so on average it's just adding noise to the equation.
Having the option of a separate timer is all well and good (after all, you might want to use it to periodically start a service, for example starting some sync service after midnight), but for cron-like behaviour there should really be an all-in-one option, perhaps by just adding a [Timer] block to the .service, or vice versa
I can't count the times I've had to debug cron scripts by copying the cron command line and running it through sudo because the script ran into an edge case, only to find out I didn't have the exact cron environment so manual runs didn't trigger the bug.
I much prefer the systemd solution. As an added bonus, systemd services allow for easy configuration of things like sandboxes and resource limits that would need to be hacked into a giant concatenated command line in cron, presumably without comments even.
With cron this is troublesome, because cron uses an environment different from what you get from the commandline. So things can work from the shell and not from cron.
So to really check that cron works you have to add a line to be called in 5 minutes, then check your mail to see what happened.
With systemd though, you can launch the service at any time and see what it does without all that fiddling.
Because a service may not always be triggered by a timer. Sometimes it could be triggered manually. Sometimes by a volume being mounted or unmounted. Sometimes by a request from a socket. By decoupling them they become reusable components. Actually, much like Unix tools which people love going on about.
that would be a convention or configuration debate
I prefer convention for systemd. The justification for having a separate timer file instead of it being in the unit file would be making it easier for someone to quickly see which services run on timers vs by targets.
if timers were in the service files you would have to open each one to find out what had a timer; with a separate file you just look at which services have a timer (provided your naming is standardized)
In practice, I run `systemctl list-timers` to see all the timers that exist. That could be implemented with timers in a section of the unit file as well. But by being a separate file, you can have one package that installs the unit and another that installs the timer. You can use symlinks to enable the timer and delete the symlink to remove the timer.
In general convention doesn't work so well if you're not really into the system, i.e. for casual users.
Convention means that behaviour depends not just on what's present but also on what's not present. So it is of course missing some of the explicitness. For the directories, you need to understand which directories to look into and in which order they are taken into account (for systemd this is hard enough that it's best to be tool assisted).
Binary log files is another example of something that could have advantages in theory but never seems to work well in practice. What used to be "tail /var/log/nginx.log" is now
man journalctl
journalctl -u nginx
<G to skip to latest, ctrl+C because it's taking too long>
journalctl | grep nginx
<wait...>
go back to man page to look for more options to search
One of my earliest experiences with systemd's binary log files was on an embedded system I had ~~chrooted~~ systemd-nspawned into, and every time I tried to get the log output it would segfault (some issue with qemu-static probably). Very frustrating experience.
Really feels like "Embrace, extend, extinguish", but the thing they're extinguishing is unix-as-a-programming-environment.
Heh, well making your product worse for more revenue seems pretty warped, and increases the risk of lost market share. Even more warped is having 2 of your engineers join the Debian technical committee and use the bylaws to force a vote before anyone figures out exactly what SystemD is.
Were they coerced to vote for it? Why did they vote for it if they didn't know what it is? Honestly it's hilarious how systemd haters see conspiracies everywhere.
That's an ultra dumb take. It is a fking trivial binary format with an open spec, which is not particularly complicated. Is linux also EEE? Because those ELF files are quite a bit more difficult to decipher...
"tail /var/log/nginx.log" is now "journalctl -fu nginx" or "journalctl -eu nginx". Not that much of a difference; it really depends what you grew up with. We could argue "tail" is ugly because you never know if it's "tail /var/log/nginx.log" or "tail /var/log/nginx/nginx.log".
A nice feature is that journald gives you flags like "--boot" to see logs only emitted during a specific boot, or "--since '5m ago'" which is not that straightforward to do with the approach you favor. (if there is a way to do it easily, please let me know!)
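For traditional flat log files, a rough "--since" can be scripted. Here's a minimal sketch in Python, assuming each line starts with an ISO-8601 timestamp (real syslog timestamps would need different parsing, and the log path in the usage comment is made up):

```python
from datetime import datetime, timedelta

def lines_since(lines, minutes, now=None):
    """Yield log lines whose leading ISO-8601 timestamp falls within the window."""
    now = now or datetime.now()
    cutoff = now - timedelta(minutes=minutes)
    for line in lines:
        parts = line.split(None, 1)
        if not parts:
            continue
        try:
            ts = datetime.fromisoformat(parts[0])  # e.g. 2024-05-01T11:58:00
        except ValueError:
            continue  # skip lines that don't start with a timestamp
        if ts >= cutoff:
            yield line

# usage sketch (log path is made up):
# with open("/var/log/nginx.log") as f:
#     for line in lines_since(f, minutes=5):
#         print(line, end="")
```

Doable, but it's exactly the kind of one-liner-that-isn't which journald's flags replace.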
I'd recommend running strace on that journalctl command, you might be surprised what happens. It's even worse if you actually store journalctl logs on disk (most distros default to storing them in RAM). I have it set to store on disk and keep at most 1GB of logs
(no i do not have baked-potato service on that machine)
Its DB implementation is utter garbage. It doesn't even organize files by time or really anything useful. If the binary database were a SQLite file with columns containing app name/log level/etc., it would actually be useful, but the current one is just shit.
Bonus round:
#:/var/log/journal echo 3 >/proc/sys/vm/drop_caches
#:/var/log/journal time journalctl -u nginx >/dev/null 2>&1
real 0m3,232s
user 0m0,036s
sys 0m0,292s
#:/var/log/journal echo 3 >/proc/sys/vm/drop_caches
#:/var/log/journal time ag -R nginx . >/dev/null 2>&1
real 0m2,531s
user 0m0,004s
sys 0m0,290s
slower than actually searching thru files directly...
Why would I care about stracing? It was never too slow for me. "Bad" performance might be caused by e.g. consistency checking. Benchmarks like these don't tell much; you can't reduce software performance to one number.
> Its DB implementation is utter garbage. It doesn't even organize files by time or really anything useful.
That sounds interesting, can you please elaborate on the internal structure of journald files or link to further documentation? And why would I care if journald handles it for me?
> Why would I care about stracing? It was never too slow for me. "Bad" performance might be caused by e.g. consistency checking. Benchmarks like these don't tell much; you can't reduce software performance to one number.
Well, I was routinely getting multi-second waits for operations like systemctl status servicename, which is not something rare, because it didn't even keep a pointer to the file that has the last few lines of an app's logs. It's not just benchmarks
It also thrashes the cache with hundreds of MBs of logs instead of the stuff apps running on the server actually need. An equivalent of "tail -f" can go through hundreds of megabytes of binary logs.
On the other hand I agree that journald seems to call open syscall more often than I would originally expect, which can be a problem for some edge cases[1], but I don't consider this to be a real problem.
But this buffer cache usage is caused by readers/clients checking logs for some reason, not logging itself, right? I see no significant difference compared to syslog here: if I grep all logs for something, it will also place these logs into buffer cache.
Your example with dropping cache and running systemctl status is indeed interesting, 11s looks like too much. But it's a number without a context, and I wonder how big a problem this actually is. I haven't noticed it myself before.
> It doesn't even keep index of "last file where app wrote logs" which causes above.
While I definitely see some room for optimization, I'm not sure what you mean here: journald uses one active journald database file, there should not be a problem with figuring out where the file is, should it?
> We could argue "tail" is ugly because you never know if it's "tail /var/log/nginx.log" or "tail /var/log/nginx/nginx.log".
The difference is that you can use ls to find out, the same ls that you use when working with everything else, rather than needing to know some journald-specific thing that will only ever work for journald and will no doubt change again in another 5 years.
And you can just issue “systemctl” to list every service. Do you blame excavation machines because you can’t drive them when you only know how to drive a car?
I really need to alias `journalctl -fu` to something because it's always so tedious to type all that out over and over when I'm trying to debug something. Also, is there any autocomplete for systemd unit names? Typing `journalctl -fu amazon-cloudwatch-agent` over and over and over is so much more tedious--ideally it would be something closer to `jctl amaz<tab>`.
There is autocompletion. It works by default on Pop OS and Debian. (You may need to install bash-completion and/or dbus if your install is very minimal.)
Knowing the path to the log file isn't more complex than knowing the name of the systemd unit, but not needing to know the right set of flags to pass because the sane behavior is the default is _really_ nice. That said, I didn't know about `--boot` or `--since`, so maybe I'll hate journalctl a little less (also, I've only ever used systemd-based systems, so I don't think "it's what you grew up on" applies, at least in my case).
It's a big difference, not because one command, but because tail is part of the standard unix toolset that works with all files. Except now you have this one log file that is special and needs a different unique tool.
No, having a special way to tail your special log file isn't a feature, tail is a general utility used system wide for this for decades; breaking this is idiotic no matter what you grew up with.
I'm happy we have special-purpose tools for dealing with logfiles. I don't want to craft one-liners for "give me logs around timestamp" or "give me logs about my service's first start after the boot" every other day.
I do. I specifically want that as a more valuable and powerful feature. Undifferentiated, open-ended, general purpose, sharp tools are infinitely more useful.
Your gripes all seem to boil down to unfamiliarity, rather than anything specific that stands on its own as a problem. Unfamiliarity is understandable, but if that were enough to avoid progress, we would never have progress.
> ctrl+C because it's taking too long
Try --since=today. Then you end up in less, and that should be more familiar, including for search. This is then the equivalent of traditional daily rotating log files, with no other knowledge needed apart from --since and --unit (the latter is noted in "systemctl status" output as a reminder).
As other commenters have mentioned, the unfamiliarity is a symptom of the time I spend using less, grep, sed, sort, etc. on a daily basis. If I could use those journalctl flags for everything I do relating to files on the system then I would be very familiar with them.
It's the same rule I use when purchasing kitchen equipment - often it's better to have a single multi-purpose tool than a bunch of specific implements. Otherwise you end up with a drawer full of strawberry-stem removers, pizza slicers, corn-cob peelers, avocado-slicers, etc.
This is to say nothing of the binary format issues that others have discussed.
Maybe nginx logs are usable, but my experience with reading logs the old way was often "grep servicename syslog" and a prayer that the tool didn't have its own magical log file somewhere else.
The backing store may be controversial, but the journalctl command line is much more pleasant. "I want to see log messages from the end of the last boot when the system crashed" is no longer a Rube Goldberg command line involving less/sed/grep/cut/awk and manual scrolling, it's supported out of the box with no configuration necessary.
It gets the "one file to define the job" right but that's the only thing that it gets right.
There is no holistic view of the jobs, so you can't easily say "show me the last jobs that failed" and shove that into an alert.
There is email on failure, but only on failure, so you can't also do a simple "if this mail returns fail, mark the check as failed; if it returns okay, mark it as okay". You can get that info from logs, but you can't get the content of the app's error message from those logs...
The handling of environment options is... interesting; better to just make a script that sets everything up beforehand instead of even trying to do it via cron.
From an ops perspective of managing multiple servers it's utter garbage. From a single-user perspective it kinda works, but is annoying
Systemd's timers should just be a [Timer] block inside the .service
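On the "mark the check as failed / okay" point above: absent built-in support, one common workaround is a small wrapper that reports the job's exit status to a monitoring endpoint. A hedged sketch; the healthchecks-style "/fail" URL suffix convention and all names here are assumptions:

```python
import subprocess
import urllib.request

def status_url(check_url, returncode):
    """Append /fail for nonzero exits (healthchecks-style convention, an assumption)."""
    return check_url if returncode == 0 else check_url + "/fail"

def run_and_report(cmd, check_url):
    """Run the job, then ping the monitoring URL with its success/failure."""
    result = subprocess.run(cmd)
    try:
        urllib.request.urlopen(status_url(check_url, result.returncode), timeout=10)
    except OSError:
        pass  # a monitoring outage should not mask the job's own exit status
    return result.returncode

# usage sketch (URL is made up):
# run_and_report(["/usr/local/bin/backup.sh"], "https://hc.example/ping/abc123")
```

It works equally under cron or a systemd service, which is part of the parent's complaint: the scheduler itself gives you neither half of this.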
The thing about cron though is this: I learned how to use crontab once, easily, and I've retained that knowledge ever since. It's a piece of cake; I don't think I've ever had a cron job fail on me except for the path issue, and I've always caught that on creation since I test jobs right after I make them.
With systemd timers, I have to look up how to write a unit file every. single. time. And it's not a trivial 30-second lookup like every other command; it's a 5-10 minute read-through every time. Even then there are weird variations on the timer unit files I don't quite understand, and most of the time it takes more than one try to get right. It fails in some bizarre ways on certain tasks due to internal systemd inconsistencies. (Like using different escape characters in different contexts, wtf?!) And I can't remember how it's set up, so I have to look it up again when I want to administer or change something, and on the whole it's just a huge PITA.
At this point, I've used systemd timers several times MORE than I have cron and I still have no idea how to write a systemd timer unit file without looking it up, and I'd have to start from scratch to write a new one.
About the only positive is that when it fails systemd reports it decently so I can follow up on it, but man, it's not worth it. I've spent a lot more time dicking with the unit files than I have debugging failed cron jobs.
Agreed. It reminds me how I recently tried to figure out why cert rotation failed, only to realize that the cron job was sending its error messages via email to root on a system with no local mail server...
I'm definitely not saying someone couldn't write a better cron, or that it doesn't have warts. Still, put 1/10th of the budget towards cron that went towards this and I think you'd get something a lot more usable.
Some people just seem to love reinventing square wheels. I suspect this "spiral of complexity" is largely the result of people trying to justify their salary, because it lends itself to generating a lot of otherwise unnecessary work.
Starting a service is way different than the service itself, what if I want it to be started when a request hits a given port? Also, the service files themselves are usually packaged up and are immutable, and every user will want different behavior on when it is started.
To me it sounds like proper architecture and your solution would be needless close coupling.
Socket activation is declarative. My last point is the dynamic one, but that is just calling out to systemd. The point is, all three can share the same service description.
This split between service file (what is going to happen exactly) and a timer file (when it's going to happen) is ok for complex use cases, but it really feels a bit silly when a single cron line would do the trick instead.
Another fun one I've been bitten by a lot is that the /etc/cron.* directories are run sequentially, so if someone who doesn't know this puts a long-running process in one, ideally with a name that sorts first, it blocks everything else in that directory. (I personally know this, obviously, since I'm saying it now, but I've been bitten by other people who don't realize it and don't think about how putting a process that may run for three hours at the beginning of cron.hourly means the other processes may not run for that long.)
NixOS fortunately solves this problem. Here's a daily rsync with some names changed to protect any innocent hosts on my network. Note that the contents of "sync.sh" could easily go in there as well, but I was previously running it via cron, so didn't bother to move that over.
Counter argument: Try implementing support for Alpine, Debian and other distros whilst using a cronjob.
It's impossible. All that opinionated /etc/crontab and /etc/cron.daily shit can go to hell for all I care. Every opinionated distro uses whatever kind of file format the next best sh guru came up with, which is also prone to exploitation btw.
With systemd there's a failsafe way for maintainers to offer a cross-distro way to integrate their software.
And that's what systemd is about: automation of integration through convention.
> Don't glance at a crontab to see when things will next execute, run some other tool that will inspect everything for you. Make things more complex and difficult to read, and then write tools to make it easier to read again.
systemctl list-timers is vastly more informative in a practical sense than just looking at the crontab. It just straight up tells you exactly when it ran last, how long ago that was, when it will run next, and how much time is left until that happens.
'Make everything as simple as it can be, but no simpler'. To me your ask violates this.
Timers augment the variety of existing service types, so your idea just doesn't work straight up, unless we severely limit the flexibility of what timers can be via the 'periodic' type. Or we make 'periodic-oneshot', 'periodic-service', et cetera, which feels absurd.
Perhaps we could leave Service= alone & just shoehorn each systemd.timer option into systemd.service. But then we need some way to view the timer statuses & logs, which we can easily review and handle by having a separate unit. 'list-timers' is ok & might still work, but we can't filter and log as well if everything is jammed together. We also can't just disable a timer temporarily & go manage the service manually for a bit if something's going sideways & we need to step in; the two are now one in your world.
And what if you want multiple timers, with different configurations between them (maybe one long-running one with WakeSystem= and a shorter one without)? We eliminate a lot of creative ability by glomming stuff together.
I don't like this idea. I think it's a disservice to jam everything together. Systemd has similar patterns of representing different parts in different places that I think have served it well. An even more trivial case, one that is more supportable by the idea you've floated, is that there is a systemd.mount and a systemd.automount. It's just good, as an operator, to have clear boundaries between these units; it has always made it clearer to me what aspect I'm operating on, and has enabled healthier patterns of system growth.
> And what if you want multiple timers, with different configurations between them (maybe one long-running one with WakeSystem= and a shorter one without)? We eliminate a lot of creative ability by glomming stuff together.
It's not an either-or situation.
The .service could just have a [Timer] block.
Then you could put it all in one file, or still have the option of putting the timer somewhere else.
>And what if you want multiple timers, with different configurations between them (maybe one long running one with WakeSystem= and a shorter one without?).
Well, that's where you can write a regular timer or just add a dependency to your timer-service.
But also, you're sort of getting into the territory of inventing your own scripting language on top of systemd at that point; if your needs are that complex, just write a script.
This feels quite dismissive to me. I don't see the appeal of any of these three options, given how well systemd.timers solve all of this today & how nicely they compose.
The whole point is to not keep running into a bunch of artisanal handcrafted did-it-myself shell scripts every time I come to a system. Being able to compose timers that trigger units enables this nicely with a lot of flexibility. Managing them separately is powerful & makes sense.
Again, inventing arbitrary hacks totally on your own that none of the builtin tools will know about or help you with. This seems obviously worse than having separate services actuated by separate timers.
I guess preferences will mostly depend on how the system is being used. If this is your desktop machine sure, do crontab -e and add a line. If it is a system sitting in a colo or cloud provider, it's likely that it's been configured through automation anyways so "locality" doesn't matter. Your scripts know where stuff is located.
While I agree Systemd timers aren't as ergonomic to set up - there are some benefits that I would basically never give up at this point.
* Running the service on command (this is the best way to troubleshoot issues with your slightly different environments / cron's version of shell. I've spent many minutes over my career setting the crontab to one minute in the future and waiting for it to run / fail)
* Easily inspecting when the next run of the service is. ( list-timers )
* Straightforward access to the logs ( Always journalctl )
Speaking of OnFailure, one of the unfortunate aspects of systemd timers (which are otherwise quite nice) is that you need to do extra work to get proper emails when something fails. Here's how I do it:
Define email-unit-status@.service
[Unit]
Description=Email status for %i to root
After=network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/email-unit-status %i
Define email-unit-status (a script)
#!/usr/bin/env bash
set -eu
dest="admin@whatever.com"
userflag=
if [[ $HOME == /home/* ]]; then
userflag=--user
fi
html=$(SYSTEMD_COLORS=1 systemctl $userflag status --full "$1" | ansi2html -w -c)
/usr/sbin/sendmail -t <<ERRMAIL
To: $dest
From: systemd <${USER}@${HOSTNAME}>
Subject: SystemD unit failure alert: $1
Content-Transfer-Encoding: 8bit
Content-Type: text/html; charset=UTF-8
Mime-Version: 1.0
$html
ERRMAIL
Then you can add to a service:
OnFailure=email-unit-status@%n
You will need to install the ansi2html tool. This stuff should come with systemd really.
I seemed to never care until they took my sysvinit away (or maybe any decent init system :D). Then they took resolv.conf control away. Then when I shut down my computer, some random process takes 1 min 30 seconds to stop when it probably doesn't need to. Then I read on hacker news recently that Fedora uses a systemd daemon for handling OOM, and the writer said it was terribly misconfigured, particularly when it shut down his X session and all processes related to it when an OOM condition happened. I am not a Linux admin (or at least not a sophisticated one who can look smart with systemd), but now they are taking cron away too? :D
I kid about this, in a way, and I know I should accept the inevitable, but I feel like just moving to Devuan on my laptop and use a nice init system, like OpenRC :D
Cron requires manual locking, which results in an old service sometimes aborting without cleaning that up, and then refusing to start ever since.
I'm precisely in the middle of an update that's going to throw that out, and just change to a systemd service/timer. Then that problem will go away for good.
Funnily at my previous job once upon a time the cron daemon crashed for some obscure reason, so the default monitoring template for years afterwards included a check if the cron process is up and running.
This is exactly what happens, just like the old XKCD: Now there are N+1 competing standards.
It's why I have to check /etc/profile, ~/.profile, /etc/profile.d/*, ~/.bashrc (or whatever shell), /etc/environment, ~/.env, and a few others I don't even remember to figure out why an $ENVVAR is set.
I have to look at two cron-like things to figure out what's scheduled when. systemd-timers has been around and in use for a long while.
I've had the opportunity to delve into systemd in a previous role.
This sounds a bit weird, but it's not easy to get started with unless you're used to reading man pages. But once you can read man pages, all of systemd's behavior is documented there. I do wish there were more human-friendly docs though, especially for younger engineers used to better sorts of docs.
Anyway, once you do get the hang of it, it's an immensely powerful system, but like any such system it has its quirks and edge cases. You can get 99% of what you need with it though, and for low-level tasks on machines, it's pretty amazing.
> I think a link prominently placed on their website would be helpful.
It's such a weird disconnect seeing people that can't go outside of the mindset that everything is a web application.
Systemd ships their documentation (most of what you asked for, actually) with the package, and their environment variables description can be found in /usr/share/doc/systemd/ENVIRONMENT.md (at least on my machine).
When you're dealing with the applications which form the building blocks of a *nix environment your first step should not be google, but grepping through the /usr/share/doc, and using the apropos and man commands.
> It's such a weird disconnect seeing people that can't go outside of the mindset that everything is a web application.
I don't think that's what's happening here. Providing online documentation that's easy to read and easy to navigate has become quite standard for all kinds of projects and systems. My main programming environment is a mac, so I'm certainly not gonna have the locally installed docs anyway.
Not only does good online documentation make things easier to understand for an individual, it also makes it easy to share links to help others understand too. And the easier it is to comprehend, the better.
iOS is not a target for systemd, and the building blocks of a *nix system probably are not the building blocks of iOS.
> Not only does a good online documentation make it easier to understand for an individual, but it makes it easy to share links etc to others to help them understand too
God forbid developers package their software considering the needs of users that don't have 24/7 internet access. If it wasn't clear from my previous statement, this is what I wanted to express by the "mindset that everything is a web-application".
Still no debugging HOWTO, much less (but still poorly documented) DEBUG-related environment variables?
If you want people to troubleshoot your stuff (which is arguably the #1 documentation to start with), you gotta start someplace (which for most of us is agrep'ing the code; or save yourself a step and look at my link).
Or the RedHat/IBM systemd development team can quit doing that terse but cryptic error code output.
Or the documentation team can start covering these error codes.
But having to do any form of "agrep" is a sign of poor... everything, unless fomenting job security is the ultimate goal.
Some time ago I wanted the best bits from both worlds:
- from cron: specifying all jobs in one file instead of scattering them across dozens of unit files. In 90% of cases I just want a regular schedule and the command, that's it
- from systemd: mainly monitoring and logging. But also flexible timers, timeouts, resource management, dependencies -- for the remaining 10% of jobs which are a little more complicated
So I implemented a DSL which basically translates a python spec into systemd units -- that way I don't have to remember systemd syntax and manually manage the unit files. At the same time I benefit from the simplicity of having everything in one place.
An extra bonus is that the 'spec' is just normal python code
- you can define variables/functions/loops to avoid copy pasting
- you can use mypy to lint it before applying the changes
- I have multiple computers that share some jobs, so I simply have a 'common.py' file which I import from `computer1.py` and `computer2.py` -- the whole thing is very flexible.
I've been using this tool for several years now, with hundreds of different jobs across 3 computers, and it's been working perfectly for me. One of the best quality of life improvements I've made for my personal infrastructure.
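The core idea is simple enough to sketch in a few lines. This is a toy reconstruction, not the commenter's actual tool; the `Job` class and its field names are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str     # stem for the unit file names
    when: str     # a systemd OnCalendar= expression
    command: str  # what to run

    def render(self) -> dict[str, str]:
        # Render the two unit files systemd wants for one scheduled job.
        service = (
            "[Service]\n"
            f"ExecStart={self.command}\n"
        )
        timer = (
            "[Timer]\n"
            f"OnCalendar={self.when}\n"
            "Persistent=true\n"
            "[Install]\n"
            "WantedBy=timers.target\n"
        )
        return {f"{self.name}.service": service, f"{self.name}.timer": timer}

# The whole schedule lives in one Python file, cron-style, one entry per job:
JOBS = [
    Job("backup",  "daily",     "/usr/local/bin/backup.sh"),
    Job("cleanup", "Mon 03:00", "/usr/local/bin/cleanup.sh"),
]

for job in JOBS:
    for fname, body in job.render().items():
        print(f"--- {fname} ---\n{body}")
```

Because the spec is plain Python, loops and shared modules (the `common.py` trick above) come for free; the generated files are what systemd sees, so you keep its monitoring and logging.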
The one time I've tried to use this in anger, it said my script was running when it was supposed to and exiting cleanly, everything green, but nothing was happening.
Put it in cron, worked first try, and forgot about it.
When I set up a new box, the first thing I do is disable all sorts of systemd-* nonsense, keeping systemd as an initd service only (which it should be). Then I replace them with serious, battle-tested tools like ntp/chrony, unbound, etc...
It looks like systemd.timer will continue this practice ;) I'm expecting someone at Canonical/IBM/RH will be happy to put this as the default scheduler in the next distro release, marking it as a "significant improvement".
Systemd and NixOS feel like a great fit. It is easier to configure than the real thing, and short scripts are easier to tie together with services and timers. All in one file instead of 2 or 3.
Nix/Guix users would think that way. They've already given up on the concept of system libraries or a unified OS. systemd's writing of 2 config files per timer would be right at home for people who have to set up an entire OS environment every time they want to compile something someone else hasn't already written a nix script for.
This would be needlessly insulting if it were posted as a top level comment, but as a direct reply to a Nix user it is impossible to read as anything other than the worst of condescension. Can you really say this is a net improvement to the tone of the conversation in this thread?
I'm responding to a Nix user making this about how Nix does timing. I'm relating it to the common thread of cargo cult'ing shared between systemd and users of Nix as desktop operating systems. They both increase the amount of config files necessary to accomplish what should be almost config-less tasks and spread those config files out across many places instead of a single config store that is for the entire OS.
It's bad policy for desktops even if it's good for you in work environments and so you use it anyway. Normalizing that is bad. I said it like a jerk though. Sorry.
> Job output will automatically be written to systemd-journald
This is a bad thing, not something I’d boast about.
I actually really enjoy using systemd. Being able to start a process with complete isolation without heavyweight docker downloading mystery meat off the internet is a huge boon.
However, one thing systemd gets wrong is logging. Journalctl sucks. Grep, cat, etc. work infinitesimally better.
'Infinitesimally' means 'by a negligible amount'. Which is (accidentally) accurate. There's nothing preventing you from using grep, cat, etc. with the output of journalctl. It's just a tool in the pipeline.
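To make that concrete: journalctl writes plain text to stdout like any other tool, so the usual pipelines apply. The unit name here is a placeholder:

```shell
# journalctl is just another text producer on stdout; pipe it as usual.
# "backup.service" is a hypothetical unit name.
if command -v journalctl >/dev/null 2>&1; then
    # All output from one unit, filtered with plain grep:
    journalctl -u backup.service --no-pager | grep -i "error" || true
    # Or follow it live, tail -f style:
    # journalctl -u backup.service -f
else
    echo "journalctl not available here"
fi
```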
I never quite understood that criticism, since systemd is extremely unapologetically Linux centric.
It uses all the cool features of the kernel. It's configured with text files, and has an override system that works great with the Linux filesystem layout and package manager -- much better than SysV init, by the way. And it handles a lot of little annoyances that have long been a thorn in an admin's/user's side on Linux.
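For what it's worth, that override system is the drop-in mechanism: you shadow individual settings of a packaged unit without editing the vendor file. A minimal sketch, with a hypothetical unit name foo.service:

```ini
# /etc/systemd/system/foo.service.d/override.conf
# (systemctl edit foo.service creates exactly this file for you)
[Service]
Environment=DEBUG=1
```

The vendor unit in /usr/lib/systemd/system/foo.service stays untouched, so package upgrades don't clobber local changes.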
I'd argue the better parts of systemd (it's not all bad but there's a lot of low-hanging fruit for all of us to gripe and bicker about) are the imaginings that came from launchd. That for sure was more UNIX centric than "linux" in particular.
You're right that the criticism "just become Windows" seems weird; if anything, one should just say "get a Mac", since that's where a lot of the launchd ideas originated.
I mean, systemd makes heavy use of Linux-only APIs.
For instance it's very reliant on cgroups, a Linux-only feature. The main project rejects even trying to be portable. This ensures there's no such thing as accounting for a feature that might not work on FreeBSD, because the project just clearly states that it won't be a thing on *BSD.
God, I wish Linux would copy more Windows features. Service management before systemd was a hell of intermingled shell scripts (that often specified sh but only ran on bash) and side effects everywhere. Cron is a very stable tool but it should've been replaced when the first GUI ran on Linux.
I don't understand this idolisation of 70s mainframes that "hardcore" Linux users all seem to fight for. Systemd features can all be turned off if you want, but every day more and more alternatives go into maintenance mode because nobody uses them anymore. Inetd was great when it was invented, but we moved past that point more than ten years ago.
When I'm tinkering, I love the great hacks I can apply by modifying system shell scripts, but I mostly just want my computers to work for me. The old ways of randomly placed dot files, custom service scripts (and formats), and modifying scripts that I shouldn't need to ever touch anyway because they conflict with another cobbled-together script have caused me more headaches than systemd ever has. I wish systemd's docs were easier to find, but even the obscure man page names are better than comments in bash scripts somewhere in /etc/init.d.
Yup, I'm honestly confused about the weird strain of conservatism in some Linux circles.
I've been a user of Linux since quite early on. Back then I recall a lot of enthusiasm for the desktop and attempts to make the user's life easier. Things like YaST were a selling point, there was the framebuffer console for a prettier boot, and Enlightenment if you wanted all the gratuitous eye candy your CPU could handle. There seemed to be a lot of interest in making things better and cooler looking. It was a world full of possibilities, back when Windows users rarely got more than changing the UI font to something weird.
Today there's this weird attachment not only to the days before that, but to the things that sucked. Like some people are nostalgic for FTP, for some reason I can't quite understand. FTP is an awful protocol, one that resulted in countless packages corrupted by ASCII-mode transfers.