What you're looking for is in the class of tools known as batch schedulers. These are most commonly used on HPC clusters, but you can use them on a machine of any size.
There are a number of tools in this category, and as others have mentioned, my first try would be Make, if that's an option for you. However, I normally work on HPC clusters, so submitting jobs is incredibly common for me. To keep that workflow without needing to install SLURM or SGE on my laptop (which I've done before!?!?), my entry into this mix is here: https://github.com/compgen-io/sbs. It's a single-file Python 3 script.
My version is set up to run on only one node, but you can have as many worker threads as you need. For what you asked for, you'd run something like this:
$ sbs submit -- wget http://example.com/foo.json
1
$ sbs submit -- wget http://example.com/bar.json
2
$ sbs submit -afterok 1:2 -- jq -s '.[0] * .[1]' foo.json bar.json
3
$ sbs run -maxprocs 2
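(For context on that last step: jq's -s flag slurps both files into a single array, and * does a recursive merge of the two objects. With hypothetical file contents, it behaves like this:)

$ cat foo.json
{"a": 1, "shared": {"x": 1}}
$ cat bar.json
{"b": 2, "shared": {"y": 2}}
$ jq -s '.[0] * .[1]' foo.json bar.json
{
  "a": 1,
  "shared": {
    "x": 1,
    "y": 2
  },
  "b": 2
}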
This isn't heavily tested code, but it works for the use case I had (having a single-file batch scheduler for when I'm not on an HPC cluster, and testing my pipeline definition language). Normally, it assumes the parameters (CPUs, memory, etc.) are present as comments in a submitted bash script (as is the norm in HPC-land). However, I also added an option to directly submit a command. stdout/stderr are all captured and stored.
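(For reference, those resource comments look like this in a SLURM batch script. I'm showing SLURM's #SBATCH directives here as an example of the HPC norm; check the sbs README for its exact comment syntax:)

#!/bin/bash
# scheduler directives are plain comments, parsed at submit time:
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G
wget http://example.com/foo.json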
The job runner also has a daemon mode, so you can keep it running in the background if you'd like to have things running on demand.
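(I won't guess the exact daemon flag here, but even without it, you can keep the runner alive in the background with plain shell tools:)

$ nohup sbs run -maxprocs 2 > sbs.log 2>&1 &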
Installation is as simple as copying the sbs script somewhere in your $PATH (with Python 3 installed). You should also set the environment variable $SBSHOME if you'd like a common place for jobs.
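(Concretely, that amounts to something like this; the directory and $SBSHOME location are just examples:)

$ cp sbs ~/bin/          # any directory on your $PATH works
$ chmod +x ~/bin/sbs
$ export SBSHOME=~/.sbs  # optional: common location for job state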
The usage is very similar to many HPC schedulers...
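(For comparison, the same pipeline under SLURM would look roughly like this; job IDs are illustrative:)

$ sbatch --wrap="wget http://example.com/foo.json"
Submitted batch job 1
$ sbatch --wrap="wget http://example.com/bar.json"
Submitted batch job 2
$ sbatch --dependency=afterok:1:2 --wrap="jq -s '.[0] * .[1]' foo.json bar.json"
Submitted batch job 3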
I've used (and installed) PBS, SGE, and SLURM [1]. Most of the clusters I've used recently have migrated to SLURM. Even though it's pretty feature-packed, I've found it "easy enough" to install for a cluster.
What is the sales pitch for OAR? Any particularly compelling features?