You may wish to look into a message processing system in your language of choice and run that as a daemon in Dokku. We have plugins for various datastores, and commonly I see folks just connect their worker processes to that. It's much lighter weight than spawning new containers for each workload.
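For example, with the official Redis plugin and a `worker` entry in your Procfile, the wiring is only a few commands. This is a rough sketch; `myapp`, the queue name, and the worker command are all placeholders:

```shell
# The app's Procfile declares the long-running consumer, e.g.:
#   worker: python worker.py
# On the Dokku host, create a datastore and link it to the app
dokku redis:create myapp-queue
dokku redis:link myapp-queue myapp   # injects REDIS_URL into the app env
# Run two copies of the worker process alongside the web process
dokku ps:scale myapp worker=2
```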
In my case, spawn overhead is negligible relative to the jobs themselves, and my experience with AWS Batch so far has been pretty good. I took another look at the Dokku documentation and extending `run:detached` would work well enough, though maybe it's time for me to revisit Airflow or something in that direction.
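For anyone curious, the `run:detached` flow is just this (app name and job command invented for illustration):

```shell
# start the job in a throwaway container; Dokku prints the container name
dokku run:detached myapp python process_batch.py
# then tail it with plain docker tooling
docker logs -f <container-name-printed-above>
```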
To me, this comment is another vote in favor of using Dokku. Been a happy user for years myself. If you do need help, the Discord is pretty responsive and always helpful.
I see compose in production all the time - especially from folks that want compose support _in_ Dokku. I brought this up with the Compose project manager a few months back. It seems like an interesting use case, but it didn't seem like the Docker folks were... aware that this was how folks used docker compose? There is a project out there - Wowu/docker-rollout - that sort of provides this, but it has some rough edges.
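If anyone wants to kick the tires, it installs as a plain Docker CLI plugin. I'm going from memory of the project's README here, so double-check the URL and flags:

```shell
# install docker-rollout as a Docker CLI plugin
mkdir -p ~/.docker/cli-plugins
curl -L https://raw.githubusercontent.com/wowu/docker-rollout/main/docker-rollout \
  -o ~/.docker/cli-plugins/docker-rollout
chmod +x ~/.docker/cli-plugins/docker-rollout

# zero-downtime redeploy of the "web" service from a compose file
docker rollout -f docker-compose.yml web
```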
My understanding is that they are focused on the local development loop at the moment, especially with the acquisition of Tilt. That said, I don't work there so take this all with a grain of salt.
Dokku's multi-server offering is based on k3s. We interact with k3s but offload any actual clustering to k3s itself as it does the job better than Dokku could :) You can also just tie Dokku into an existing K8s cluster on your favorite cloud provider instead.
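The setup looks roughly like this; I'm going from memory of the scheduler-k3s docs, so the property names may be slightly off:

```shell
# bootstrap a k3s cluster on the Dokku host itself...
dokku scheduler-k3s:initialize
# ...or point Dokku at an existing cluster via its kubeconfig instead
dokku scheduler-k3s:set --global kubeconfig-path /path/to/kubeconfig
```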
We've supported multiple servers for a few years and have had official k3s support since the beginning of the year, so it's not just one server anymore. We even support managing the servers associated with a k3s-based cluster.
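Adding servers is a single command per host; the exact flags here are from memory, so check the docs:

```shell
# attach another server to the k3s cluster over SSH
dokku scheduler-k3s:cluster-add ssh://root@10.0.0.2
# worker-only nodes take a role flag, something like:
dokku scheduler-k3s:cluster-add --role worker ssh://root@10.0.0.3
```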
Ohhh, I stand corrected. I don't think that was an option the last time I looked at Dokku. I see the schedulers section in the docs now, thanks for pointing it out!
Does the k3s scheduler work with existing non-k3s k8s clusters as well?
I dropped armhf (32-bit ARM) a few releases ago. It was painful to maintain, and the few users of it were on older Raspberry Pi installs. I think there are other tools out there that better support low-powered platforms (piku comes to mind).
ARM64 should be fine, with some caveats:
- Dockerfile/nixpacks support is great! Just make sure your base images and your Dockerfile support ARM64 builds (see the snippet after this list for a quick way to check a base image).
- Herokuish _works_, but only barely. Most Heroku v2 buildpacks target AMD64. This is slowly changing, but out of the box it probably won't build as you expect.
- CNB buildpacks largely don't support ARM64 yet. Heroku _just_ added ARM64 support in heroku-24 (our next release switches to this), but again, there is still work to do on the buildpacks themselves to get things running.
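A quick way to check whether a given base image even publishes an ARM64 variant (the image name here is just an example):

```shell
# list the platforms an image manifest covers; look for an arm64 entry
docker manifest inspect node:20 | grep architecture
# or, with buildx installed:
docker buildx imagetools inspect node:20
```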
I run Dokku on ARM64 locally (a few Raspberry Pis running things under k3s) and develop Dokku on my M1 MacBook, so if there are any issues, I'd love to hear about them.
We have ansible modules (https://github.com/dokku/ansible-dokku) that cover the majority of app management, if that's what you want. The reason I am hesitant to do it in something like `app.json` is purely because one might expose Dokku to users who only have push access, and some of those commands can be fairly destructive.
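A minimal playbook with those modules looks something like this; the module and parameter names are from memory of that repo's README, so verify before use:

```shell
# install the role and modules from Ansible Galaxy
ansible-galaxy install dokku_bot.ansible_dokku

# a minimal playbook: create an app and set an env var
cat > site.yml <<'EOF'
- hosts: dokku_servers
  roles:
    - dokku_bot.ansible_dokku
  tasks:
    - name: Create the app
      dokku_app:
        app: myapp
    - name: Set environment variables
      dokku_config:
        app: myapp
        config:
          SOME_KEY: some-value
EOF

ansible-playbook -i inventory site.yml
```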
I’d love this feature too. Why not add it as an optional thing to enable and let users decide? Maybe just put a big warning in the docs and make it opt-in?
I really hate adding knobs - it increases the amount of work I need to do to maintain and support the project.
Long term, I'd like to port the ansible modules over to being maintained internally by the dokku/omakase project, and then maybe that could be a plugin that folks could run from within their deploy.
Yeah, the k3s scheduler is basically "we integrate with k3s or BYO Kubernetes and then deploy to that". It was actually sponsored by a user migrating away from Heroku. If you've used k3s/k8s, you basically get the same workflow Dokku has always provided, but now with added resilience.
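Per-app opt-in is just a property change; again, from memory of the schedulers docs:

```shell
# deploys for this app now land on the k3s/k8s cluster
dokku scheduler:set myapp selected k3s
```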