No, they are not different enough. Both are cron daemons, and both are ultimately meant to be packaged. That's reason enough for the newer one (newer by decades!) to change its name.
I have a bunch of Python scripts that run at various intervals (hourly, daily, monthly, etc.). I have trouble keeping track of which jobs fail -- unless I notice that an email doesn't show up or a metric is missing. Is Dcron something I would use to keep track of my cron jobs and whether or not they ran successfully?
At this point I don't need my jobs to be distributed; I'm just looking for a way to keep track of them all and visualize characteristics of each job (start time, duration, success, etc.). Ideally, I'd have a way to re-run a job from within the UI.
It's an async task processing service with a built-in job scheduler. You can upload your Python scripts to Iron.io, then set schedules and other triggers to execute them on demand. We have a dashboard to manage tasks and schedules, see what ran and what failed, and you can visualize the characteristics you're looking for. We do distribute the workloads for you, but it sounds like it could be a good fit.
You can also try Cronitor and others like it. You add a curl command at the end of your script that 'touches' an HTTPS endpoint. If your script doesn't check in at predefined intervals, you get alerted. These services are perfect for situations where you don't actually want to set up any infrastructure.
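The pattern looks roughly like this in a crontab (the ping URL below is a made-up placeholder -- the monitoring service issues you a real, unique endpoint per monitor):

```shell
# Run the hourly job, then check in only when it exits 0.
# A failed or hung run never touches the endpoint, and the
# missed check-in is what fires the alert.
# https://example.invalid/ping/abc123 is a hypothetical placeholder URL.
0 * * * * /usr/bin/python /home/me/hourly_job.py && curl -fsS https://example.invalid/ping/abc123 > /dev/null
```

The && is doing the real work here: the curl never runs unless the job itself succeeded.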
I've been tinkering with this concept recently, it's not quite ready for beta testing, but I'd be happy to take feedback on what I've got so far: http://croncloud.io/
Sure, it can do this well. It can run on a single dcron/etcd node if you don't need it to be fault tolerant. For now you'd have to check job status in the UI, as it doesn't have notifications yet.
While Metafora doesn't support cron-like syntax for job scheduling or tag-based node targeting (yet!), it does support distributed fault tolerant task running. We hope to build out a webui soon as well.
However, since Metafora is written as a Go library and provides a state machine implementation for task handlers to use, it requires writing your distributed tasks in Go.
I haven't done any digging, but I'm assuming this uses etcd for membership? How does that work -- how does a server know whether it's a leader, and how does failover work?
Any chance that dcron could manage the job scripts itself? I.e., you store the raw command scripts in dcron, then reference them in the create-a-job endpoint?
For example:
POST /scripts --data 'node /home/node/myJob.js'
That returns a script id: { "script_id": "507f1f77bcf86cd799439011" }
I think you missed the point: jobs are stored in etcd, so they're distributed and all dcron server nodes will see them. You can even create jobs from any node. Note that non-server nodes don't need access to etcd.
I was looking into that too. Based on this job.json[0] definition from the git repo, it looks like it just executes the command, so I believe you could write the job in any language.
But I think this also means you have to copy every job executable to every server that might have a chance of running the job.
The "command" field is the most important part. What you put there gets executed like this by their agent (on Linux):
/bin/sh -c <your_scheduled_command_name>
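Since the agent wraps the command in /bin/sh -c, anything a shell one-liner can do -- pipes, && chaining, environment variables -- should work inside the "command" field. A quick illustration of the same wrapping, done by hand:

```shell
# Chaining two steps with && means the second step only runs
# if the first one exits successfully.
/bin/sh -c 'echo step1 && echo step2'
# prints:
# step1
# step2
```

It also means the command inherits whatever shell is at /bin/sh on that host, so bashisms in the command field may or may not work depending on the machine.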