Hacker News
Dokku: My favorite personal serverless platform (hamel.dev)
922 points by tosh 22 days ago | hide | past | favorite | 270 comments



I've been enjoying using Dokploy recently.

https://github.com/Dokploy/dokploy

It's similar to Dokku but has a nice web UI, makes it easier to deploy Docker/Compose solutions, and automatic Let's Encrypt functionality is built in by design (not as a separate plugin).

I've also built a GitHub Actions workflow to trigger deploys to apps hosted on it (a basic cURL command, but it works well). https://github.com/benbristow/dokploy-deploy-action

And put together some pre-configured Compose files you can deploy for various apps. https://github.com/benbristow/dokploy-compose-templates


Apologies for the very off-topic reply, but I can't help but find it a little funny that on a thread exalting a particular tool, the top comment at the time of this writing is a link to another, newer tool. Not that there's anything wrong with sharing the link, but it does seem like here at HN we have a bit of a grass-is-greener thing going on. I would understand it more if the discussion was around how bad a tool is and someone chimed in with an alternative. And it's not like I don't want people to share these other projects but personally on a thread about a particular topic, the comments I find the most useful are those from people with experience in that topic sharing their opinions, tips, etc. In this case, the comment our community found the most valuable on the topic of Dokku seems to be a link to Dokploy, a project that judging by the commit history is new as of this past April.


I find it helpful to have other tools listed. I already know a decent amount about Dokku and clicked on these comments specifically to find out what other tools might be up and coming or otherwise mentioned in the space.

I'm still waiting for something built on a rootless container solution and with everything defined in git (i.e. no or limited cli commands) so that exactly what is being deployed is, at all times, tracked in git.


> I'm still waiting for something built on a rootless container solution

I'm pretty sure you can run both Dokku and Dokploy using Podman. It's a drop-in replacement for Docker that runs just fine rootless.


If the topic is "favorite personal serverless platform" then discussion of other offerings is absolutely on-topic.

> Apologies for the very off-topic reply ... it does seem like here at HN ...

There's nothing more HN than filling the first page of comments with discussion of everything except the linked article.


> it does seem like here at HN we have a bit of a grass-is-greener thing

Since it's Hacker News, not Old-But-Stable-Project-News, that seems expected? It also happens the other way, and has been happening forever: I published a new OSS project of mine ~10 years ago, it went to the front page, and 8 of 10 comments were recommending other pre-existing tools.


I think I worded my original comment respectfully enough, focusing on the differences between Dokploy and Dokku rather than just saying one is better than the other. I've used both successfully and think both are great products - just wanted to share my experience. There's been a recent wave of self-hosting tools like Coolify/Dokku/Dokploy etc., so I wanted to contribute to the discussion in that way. Dokploy is also an open-source project, so I thought the exposure might be positive on a high-ranking HN post.


My comment came out crankier than I intended. I do think comments like yours are valuable, and I agree that you were respectful and informative. I'm just genuinely amused that the top comment is for a completely different tool. That's more an observation about how we vote as a community, not about your post. I include myself in that group though as I have in the past been drawn to the new and shiny over the already known.


I came here for the comments actually. I know Dokku and was wondering if someone would post a better alternative.


Same here. Actually I regularly revisit threads about Dokku, Coolify and CapRover because I know that there are references to other projects that I might have missed.


I personally appreciate it - I really like going to the comments to see other approaches and alternatives whenever something is on here. I don't think it's an insult nor do I think it's out of place if done correctly. HN is one of the only places left on the internet where I expect good value in the comments section and this is one of the reasons.


My theory for this: at least in this case, when the featured article is about a software program, aka a tool, the thread really becomes a discussion about all the tools that serve the purpose of the one in TFA.

Here's an analogy from the physical realm: "presenting shovel, a customizable tool for removing dirt".

- doesn't work well for rocky terrain, but I'm working on DigBar, which can outperform Shovel in many high performance workloads.

- it's a lot slower than Hoe, if you're only going down 4" of topsoil

- I wrote a custom frontend for Shovel called Flatend; it carries more volume for loose loads

- there's a paid product called Posthole that is worth buying if you build fences; uses Shovel under GPLv3


The linking of other tools/initiatives for me is half the value of the post


After seeing a recent HN post about Dokku, I started going into the nitty gritty of deploying it before finding out there is no multi-node support at all. So if you ever get to the point where you want to scale beyond one server, Dokku can't do it, which seems like most of the point of using a Heroku-ish tool (I've tried k3s in the past but Kubernetes always seemed like overkill for a non-enterprise setup).

I'll check out dokploy now that I see it has multi-node support.


Thanks for the recommendation. I've just given it a try and it looks great. I had tried coolify.io before, but the multi node/swarm support wasn't great, and the registry didn't work. Dokploy seemed to work straight out of the box.

One thing I wish it had is preview deployments, though. Coolify had that. But I can live without it.


One alternative to this alternative is to use a webui for dokku itself. Such as https://github.com/ledokku/ledokku


I see that the last commit was 2 years ago. Do you know any other alternatives?


Looks nice, but this is quite the security issue:

https://github.com/Dokploy/dokploy/releases/tag/v0.7.2


Thank you - that looks significantly less involved than setting up Dokku.


For a while Dokku was selling a Pro version with a web UI and JSON API. I don't really mind the CLI, so while we bought it I don't really use it. I see there hasn't been much activity on Pro; I wonder if it is still a focus.


Nice. Why not use a github merge webhook for triggering deploys?


I have actions on my projects to build & publish container images to GitHub's container registry. The deploy trigger from the workflow makes Dokploy get the latest image from the registry and run it.


I was looking at many of these "self-hosted Heroku" type solutions recently and read many HN discussions about the different options (coolify.io, ploi, ...) as I migrated to a new server and constantly copying and adapting nginx configs got a bit old.

I've landed on Dokku in the end as it's the one with the least amount of "magic" involved and even if I stopped using it I could just uninstall it and have everything still running. Can highly recommend it!

The developer is also super responsive and I even managed to build a custom plugin without knowing too much about it with some assistance. Documented this on my blog too: https://blog.notmyhostna.me/posts/deploying-docker-images-wi...


How long did it take you to go from "making a new server / copying configs is fine" to "this is tedious enough I'd like to abstract it?"

Like, was it a years-long journey or is this the type of thing that becomes immediately obvious once you start working w/ N servers or something?

I'm trying to learn the space between "physical machines in my apartment" and "cloud-native everything" and that's led me to the point where I'm happily using cloud-init to configure servers and running fun little docker compose systems on them.


For homelab (but not only) you can install Proxmox Virtual Environment on your physical machine. You end up with a way to create VMs and containers with a web UI. It supports cloud-init too. If you have a spare machine it's excellent for experimenting and learning.

https://www.proxmox.com/en/proxmox-virtual-environment/overv...

https://proxmox-helper-scripts.vercel.app/


I wanted to self-host more of my Rails projects, and Dokku comes with nice Buildpack support, so I can just push a generic Rails app and it'll run out of the box. That, plus the fact that I had to set up a new server after many years, made me look into it more.


That's where I am too right now for personal projects, and I ended up reimplementing parts of Dokploy for that, but I don't feel much of a need to move from "fun little docker compose" for some reason.


cloud-init is good, but it assumes that you treat your VMs like containers and that means you will need a lot more VMs that you constantly create and destroy and you will have to deal with block storage for persistence.

If all you do is ssh into a system with docker compose installed, you will hardly benefit from cloud-init beyond the first boot.


What you are actually searching for is called Ansible


That's pretty much the opposite of what I'm searching for. Getting a static site running with https on Dokku on a fresh server is done in under 2 minutes if you type quickly.

1) Run curl command to install Dokku

2) Set up domain to point to my server

3) Run 3 Dokku commands (https://news.ycombinator.com/item?id=41358578)

4) Add remote git url to my repository

5) Git push to that remote

6) Done
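The steps above in shell form, as a sketch - "myapp" and "example.com" are hypothetical placeholders, and the exact installer command lives in Dokku's installation docs:

```shell
# 1) Install Dokku on the server - see the one-line installer at
#    https://dokku.com/docs/getting-started/installation/

# 2) Point your domain's A record at the server, then on the server:
dokku apps:create myapp
dokku domains:set myapp example.com
dokku letsencrypt:enable myapp   # requires the letsencrypt plugin

# 3) From your local repository, add the remote and push to deploy:
git remote add dokku dokku@example.com:myapp
git push dokku main
```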


1) Install Ansible

2) Create a playbook which pulls from your Git repo, sets the DNS and installs Caddy (or Apache+certbot or whatever) (~5min)

3) Run Ansible

You now don't need Docker, can change to any other cheaper host any time you want, and you don't have the limitations of "serverless" services


That reads like "draw a circle; draw the rest of the owl".

Look, I used Ansible for years. And chef. And puppet, so I've been around that particular block a bunch of times. There's no way you legitimately think that someone can, with no previous experience, create a playbook that does all you need in "~5min".

Is Ansible a good tool? Absolutely. Does it do what a tool like Dokku (or some of the others mentioned) does? Absolutely not. They aren't meant to compete, either.


That’s orthogonal; you can use Ansible with Dokku.


I downvoted because you didn't qualify your statement.


I have found over the years that trying new software risks immediately running into a roadblock in real use. There will be some detail, complexity, or bug in a semi-basic requirement that leads straight to an open GitHub issue.

Dokku is not one of those; it does what it does well, and aside from a couple of CLI argument ordering quirks it's been great for my light usage. If I were using it more I'd probably want to configure entire architectures with declarative config files; I have no idea if it can do that, though.


And then those GitHub issues get closed by stalebot a month later and the repo looks like it doesn't have many issues.


Whenever a new tool / library / plugin / whatever is evaluated by myself or my team, I spend some time gathering GitHub issue / PR stats. I think this is now part of my "software engineering toolset" / best practices.

If there are too many open PRs, or unresolved tickets, OR there are too many _new_ ones, I would rather start searching for something else


Same. I do a lot of research and try to get a sense of the person/team behind it and their values, vision, dedication, if they accept outside contributions, etc.

A few years ago it became popular to document the project processes, but it all turned into generic code of conduct garbage and abstract governance pillars, as if they were writing a constitution. So looking at GitHub issues and commit history is still the best way. And very well invested time.


Dokku is actually the opposite of that. Super responsive, helpful and nice maintainer (Already mentioned by a few other people in this thread).

I was so positively surprised that I got help so quickly after asking that I started to sponsor it via GitHub immediately.


Dokku is really neat! I used it before moving to building my own Docker images and deploying with Swarm. It was also (partly) the motivation behind my own take on self-hosted PaaS, Lunni (shameless plug): https://lunni.dev/

In general, I really love the idea of running all your stuff on a server you own as opposed to e.g. Heroku or AWS. Simple predictable monthly bill really gives you peace of mind.


> In general, I really love the idea of running all your stuff on a server you own as opposed to e.g. Heroku or AWS. Simple predictable monthly bill really gives you peace of mind.

Have you found hosting you like with bandwidth expense caps? I'm looking for something like this but I don't want surprise network bills if I misconfigure something.


> Have you found hosting you like with bandwidth expense caps?

Not exactly what you're looking for, but solves the same problem in a different way:

I've been quite happy with using Hetzner's dedicated servers which come with 1 GBit unmetered connection (unlimited bandwidth), so no surprise network charges :)


Note that if you saturate that 1Gbps link they will almost certainly ask you to stop. Lots of VPS offer "unlimited" but it's really not. It's only unlimited within their "fair use" restrictions, ie only as long as they think it's reasonable.

Would love to be shown a counterexample provider.


That's fine. If you're at that level, either you're doing something wrong and should stop, or you're making lots of money and should upgrade.

AWS will just charge you regardless.


This happened once (?) in the history of LowEndTalk and Hetzner, and that was someone who was using several servers, 24/7, over several months at 100% link utilization to shovel raw footage around.

You are very unlikely to replicate this running any type of personal infrastructure. Or anything that's not specifically this.


It’s not really unlimited as they will charge you $1/TB for what they call “overusage”.


That's cloud, non-standard NICs, load balancers or vSwitches. Their standard 1 Gbps link carries no usage fees. The one person who reported it did it on several servers, over 3+ months - and they weren't charged, just asked to slow down or upgrade to a dedicated link.


5-10x cheaper than AWS still.


Yeah, I think plenty of VPS providers do unmetered traffic too. Mine has a limit but it's something like 8 TB/mo, so I'm not particularly worried either.


Perhaps Hetzner or OVH?


I'm curious as to your thoughts around Swarm.

My concern around Swarm is around the Docker corporation, which appears to be struggling.

As a competitor, we have Nomad, but with the recent IBM acquisition, I'm concerned about Nomad's future.


I do have some concerns about Docker Inc. and Mirantis (which now owns Docker Swarm, I believe), yeah. Swarm is pretty mature though, and while I don't think it's going away anytime soon, I also don't expect any more core features.

For Lunni, my plan is to add support for another orchestrator while keeping the developer experience of just working with docker-compose.yml. I really didn't want to do K8s, but given it's essentially an open standard now, it should be a safer bet than Nomad. I guess we'll see when I can get to it!


IBM bought Nomad? That's disappointing to hear.

Nomad was always much better than k8s, sad that it never got the same kind of traction or mindshare.


My gripe with nomad was that it didn't have init containers. You were basically forced to either have an endlessly growing job specification file or put consul-template in every single container image.


Do you mind if I ask why you chose Docker Swarm? I don't know that much about Swarm and I'd love to know what you think about it compared to K8s (in terms of ease, nice things, things missing, etc.)


Not lunni's dev, but a Swarm fan :-)

I'm a swarm user, but using single node swarms. It's the best solution I found for deploying apps. A lot of projects publish docker compose files, and those are easily usable with Swarm after some small modifications. I'm using the setup described at dockerswarm.rocks [1] and it's smooth sailing.

It's a real pity, and it still surprises me, that Swarm is not more popular. It's still maintained [2] but few people recommend it anymore (even dockerswarm.rocks doesn't). I switched to it in 2022 [2] thinking I wasn't taking much risk, as starting with it is a really low investment, and I'm still satisfied with it. I've deployed a new server with it recently.

1: https://dockerswarm.rocks/traefik/ 2: https://www.yvesdennels.com/posts/docker-swarm-in-2022/


The main reason probably was the fact that I was already familiar with Docker and Docker Compose. Kubernetes introduces a whole lot of concepts that I didn't feel like studying up, plus there was a 3-node minimum requirement. I just wanted to be able to start with a single node and be able to scale up if needed, so Swarm just felt like a natural match here.

I'm looking into K8s and other orchestrators like Nomad and perhaps will add support in Lunni at some point, but for now I believe Swarm is the sweet spot for smaller deployments (from single server up to maybe a couple hundred nodes).


There are several k8s implementations that are fine with a single node: k3s in particular is worth a look. But Swarm is still quite legit in my book.


I'll look into it, thank you so much! Way back then there wasn't a lot of choice though. I think I've played with Minikube but that was not recommended for production, and all the other distributions were huge (or at least I thought so!).


There isn't actually (nor was there ever) a 3-node requirement for k8s.

Etcd requires 3 boxes for HA, but nothing stops you running a single node etcd.

I personally run single master clusters, because if the master goes down, you lose management as opposed to actual service availability, so mostly I don't care.

Not that there's anything wrong with your preference.


I might be misremembering it, huh! Yeah, it's pretty much the same as Swarm then (any odd number of manager nodes is valid, and if more than a half go down you only lose the management ability and everything else stays up).


How's Lunni going? Is swarm working well? I remember an announcement of it some time ago :-)


Not too bad, except I have no idea how many users we have :')

Swarm still works pretty smoothly for me, although I'm worried about the Mirantis situation, too. I'm currently working on a new backend, which will also enable us to plug in other orchestrators if need arises.


Related discussion on the front page today: "Coolify’s rise to fame, and why it could be a big deal" https://news.ycombinator.com/item?id=41356239

> Coolify can enable organizations of any size to host an arbitrary number of free, self-hosted software easier than ever.

https://github.com/coollabsio/coolify

> An open-source & self-hostable Heroku / Netlify / Vercel alternative.


> It’s often desirable to have HTTPS for your site. Dokku makes this easy with the Let’s Encrypt Plugin, which will even auto-renew for you. I don’t use this, because I’m letting Cloudflare handle this with its proxy.

Hopefully you do use TLS between Cloudflare and your Dokku (even with a self-signed cert or something), otherwise your personal sites (which are apparently sensitive enough to put behind basic auth) are being transited over the internet in plaintext.


From my understanding Cloudflare can generate origin certs for exactly this purpose and you can add certs to dokku with `dokku certs:add myapp`
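A rough sketch of that, assuming you've already generated the origin certificate and key in Cloudflare's dashboard (SSL/TLS -> Origin Server); the app and file names are placeholders:

```shell
# Install the Cloudflare origin cert + key for the app
dokku certs:add myapp origin.crt origin.key

# Check what's installed
dokku certs:report myapp
```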


Agreed. It also can't hurt to set up a firewall or EC2 Security Group that only allows ingress from Cloudflare IPs: https://www.cloudflare.com/ips/

Alternatively, you can use Cloudflare Tunnel, and then block all incoming connections.
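A minimal sketch of the allowlist approach, assuming ufw is your firewall (Cloudflare publishes its current ranges at the /ips-v4 and /ips-v6 endpoints, and they change occasionally, so this wants re-running periodically):

```shell
# Allow HTTPS only from Cloudflare's published IPv4 ranges...
for ip in $(curl -s https://www.cloudflare.com/ips-v4); do
    sudo ufw allow proto tcp from "$ip" to any port 443
done
# ...and deny everyone else (earlier allow rules match first).
sudo ufw deny proto tcp from any to any port 443
```

As noted downthread, this alone isn't sufficient, since other Cloudflare customers share those egress IPs - you still want Host-header checks, mTLS, or a tunnel on top.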


You have to limit the traffic to that pool to prevent people accessing your server directly. But that's not enough on its own, because other people can use CloudFlare's IPs to scan you too, so you need some kind of auth on top or use the tunnel.


Yes, this is correct. If you're using IP address allowlists then you also have to check the Host HTTP header (Cloudflare won't allow their other customers to forge that header). Or, you can use mTLS (as another commenter pointed out), or tunnels (as I pointed out): https://news.ycombinator.com/item?id=26690388


Typically my server is behind NAT and has no public address; the service can only be reached through the CF tunnel, and my own access is through a VPN. This should be safe, right?


they also provide certs for mTLS between cloudflare and your origin, which you can layer in along with IP restrictions

(the term they use is “authenticated origin pull”)


Can you issue wildcard certificates with Dokku? It seems like you need to have a proxy domain to register the TXT records, since you do not know the domain of the user in advance.


Genuinely curious what the threat model is here?


One might be avoiding mass traffic interception due to malicious or corrupt BGP rules, whether by accident or on purpose by a nation-state or telco. Another might be avoiding interception by your own ISP for various purposes.


You can avoid both of those easily using Cloudflare Tunnels, which seamlessly works with their proxying CDN.


I've been using Dokku for many years. It's remarkably stable and easy to use. I wrote an extensive tutorial on how to deploy various apps and websites with Dokku in 2018 [1] and I'm sure that following the same steps still works 6 years later.

1: https://maxschmitt.me/posts/tutorial-deploy-apps-websites-do...


Delighted to see dokku on here. It's an amazing product and the founder is super humble and helpful. I can't afford to throw much money at it now but it would be great if more people supported it financially


My experience with Dokku was pretty poor. It was quick to start with, but when my VPS crashed and restarted, my apps would not relaunch. I'd have to re-run the Dokku commands again. Perhaps I did something wrong, but I eventually switched to a single-node k8s setup as it ended up being more reliable.


Dokku maintainer here. If you have more detailed feedback, I'd love to hear it! Happy you've found something that works for you though :)


This comment to me is another upvote to use dokku. Been a happy user for years myself. If you do need help, the discord is pretty responsive and always helpful.


systemctl enable foo


Curious: for this type of infra, what do people use for file/object storage? Using something like AWS would negate all the savings with egress costs.


If you want hosted S3-compatible storage, you should be able to combine Backblaze B2, Wasabi, or Cloudflare R2 with any VPS provider from the bandwidth alliance:

https://www.cloudflare.com/bandwidth-alliance/

That should alleviate egress costs. Bonus that storage is also way cheaper.


If your capacity needs aren't very high you can just store data on your web server in a directory

https://dokku.com/docs/advanced-usage/persistent-storage/
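Per the persistent-storage docs linked above, that looks roughly like this (app name and paths are placeholders):

```shell
# Create a host directory with the ownership Dokku containers expect,
# then mount it into the app at /app/storage
dokku storage:ensure-directory myapp-data
dokku storage:mount myapp /var/lib/dokku/data/storage/myapp-data:/app/storage

# Mounts take effect on the next (re)start
dokku ps:restart myapp
```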


Pocketbase (also in a Dokku-powered container) on a Hetzner Cloud VPS with attached storage. Stupidly cheap, very reliable.


minIO can also be self hosted as an s3 alternative. Or host a database. Likely just depends on the type of storage you are looking for


> Total Annual Price: $48,000 ($20.00/TiB/month)

That doesn't sound cheap...


I would guess GP was referring to self-hosting minio.


Good ol local disk


Dokku is great, but historically it didn't really handle resilience. It looks like there's now a K3s scheduler (added earlier this year), which means I could use a Kubernetes operator for a replicated database as well as have the app running on multiple boxes (in case one fails). It looks like it'll even set up K3s for you. The docs don't seem to go into it, but hopefully the ingress can also be set up on multiple boxes (I wonder if it uses a NodePort or the host network).

I was sad when Flynn died (https://github.com/flynn/flynn), but it's great to see Dokku doing well.


> Dokku is great, but historically it didn't really handle resilience.

Would you mind elaborating a bit on this? I'm exploring some serverless options right now and this would be useful info. Do you mean it's not really designed out of the box for resilience, or that it fails certain assumptions?


I'm not the person you're responding to, but I believe I can answer that question as well.

Dokku essentially just started a container. If your server goes down, so does that container, because it's just a single process, basically.

Other PaaS providers usually combine it with some sort of clustering like k3s or Docker Swarm, which provides them with failover and scaling capabilities (which Dokku historically lacked). I haven't tried this k3s integration myself either, so I can't speak to how it is nowadays.


Yea, this. Dokku was basically a single-server thing. If that box dies, your site goes down until you launch it on a new box. That might not be a huge deal for smaller sites. If my blog is down for a day, it's not a big deal.

With a cluster, if a server goes down, it can reschedule your apps on one of the other servers in the cluster (assuming that there's RAM/CPU available on another server). If you have a cluster of 3 or 5 boxes, maybe you lose one and your capacity is slightly diminished, but your apps still run. If your database is replicated between servers, another box in the cluster can be promoted to the primary and another box can spin up a new replica instance.

Dokku without a cluster makes deploys easy, but it doesn't help you handle the failure of a box.


Yeah the k3s scheduler is basically "we integrate with k3s or BYO kubernetes and then deploy to that". It was sponsored by a user that was migrating away from Heroku actually. If you've used k3s/k8s, you basically get the same workflow as Dokku has always provided but now with added resilience.

Note: I am the Dokku maintainer.


Ah gotcha, thanks for the insight!


My major gripe with dokku is that there is no way to define the configuration in a file rather than executing the commands manually.

Otherwise: totally agree, great tool for self hosting.


We have Ansible modules (https://github.com/dokku/ansible-dokku) that cover the majority of app management, if that's what you want. The reason I am hesitant to do it in something like `app.json` is purely because one might expose Dokku to users who only have push access, and some of those commands can be fairly destructive.

Disclaimer: I am the Dokku maintainer.


Thank you! I was hoping for something less intimidating than going full ansible/terraform.

Essentially something that captures all dokku invocations and could be transferred to another machine. Is app.json this?


I’d love this feature too. Why not add it as an optional thing to enable and let users decide? Maybe just put a big warning in the docs and make it opt-in?


I really hate adding knobs - it increases the amount of work I need to do to maintain and support the project.

Long term, I'd like to port the ansible modules over to being maintained internally by the dokku/omakase project, and then maybe that could be a plugin that folks could run from within their deploy.


It doesn't cover everything - but I've had great success with terraform and this module. https://github.com/aaronstillwell/terraform-provider-dokku


You can configure almost everything using an app.json file.

https://dokku.com/docs/deployment/deployment-tasks/


I believe they are talking about the Dokku commands that are needed to set up a new Dokku app.

For example for a static site that would be the following:

    dokku apps:create dewey.at
    dokku domains:set dewey.at dewey.at www.dewey.at
    dokku letsencrypt:enable dewey.at
That's also one of my wishes to get improved, currently I just have a long text file where I store them so that if I move servers I can just re-run them if needed.


Could you put them in a .sh file and then just run `sh setup_dewey.sh`? Maybe put `&&` between them so that if one fails, it won't keep running through the script?


> Maybe put `&&` between them so that if one fails, it won't keep running through the script?

Or just add `set -o errexit` at the top of the script. Or use make.


Yep, in theory I think that should work nicely. So the recovery procedure after a server died would be to restore the dokku data directory from backup and then re-run all the commands. I haven't tested that but I think that should do the job.

Right now I keep the list of commands more as a reference to look up things like how I mounted a volume or my naming scheme for data directories.


Exactly - I was really surprised that Dokku isn't all based on storing these commands in a config/script which gets executed every time you change something.


I've been using a different tool that provides great developer UX for managing containerized web apps on your own servers. Its dead simple and does things like zero-downtime deploys and remote builds.

https://kamal-deploy.org/

I use it with rails but it works with any containerized web apps.


I've looked into this too, but it always felt like it's best suited for a "one app per server" model, and not really like Dokku which makes it easy to run many workloads on a single server.

Did I misunderstand something there?


I've never tried the many-apps-per-server use case and I don't think it's supported. We use it in production, where it's more common to have many-servers-per-app.


I’ve got two apps running on two servers (ie both on both) using Kamal and Traefik and it works great. I didn’t set it up but I deploy with it almost every day.


I’m pretty sure multiple apps is on their public roadmap, I’m sure I read it somewhere.


Recently DHH posted on Twitter that Kamal v2 will have support for multiple apps on a server.


I already run multiple apps on a single server with Kamal and it works fairly well. Sometimes there are some issues, especially if you use shared resources across the system such as Redis. But overall it is stable and works well.


Looks neat, but what exactly makes it "serverless"? It's literally an application that you have to run on your server.

Edit: turns out (thankfully) that it's only the author of the article using that term. The project site (https://dokku.com/) is very descriptive.


Same question

> Dokku is an open-source Platform as a Service (PaaS) that runs on a single server of your choice

This is the first paragraph in the article


What makes anything "serverless", when it has to run on a server?


I always understood "serverless" as: your main application isn't running all the time, but once you make a request to it, another process starts your main application and your request then gets forwarded to it.

But I never really got it, so I may be completely wrong.


Kinda, but you are describing an implementation detail. More broadly, here's how it works:

Infrastructure as a Service (IaaS) - you rent a VM with a publicly accessible IP address. Everything else – patching/updating the OS, deploying your application code or binaries, process lifecycle management, logs, TLS certs, load balancing multiple servers and more – is your responsibility. Example: EC2.

Platform as a Service (PaaS) - the provider also manages the OS for your VM, including deploying and running your code on it, restarts, a logging pipeline, providing a HTTPS URL, scaling to multiple servers and more. All you have to do is write your application code to start a web server and listen for web requests on a particular port. Example: Heroku.

Functions as a Service (FaaS) - this goes one step further, and the concepts of web servers, ports and HTTP requests/responses are also abstracted out from your application code (hence "serverless"). You write a function with a set of inputs and outputs, and it's up to the platform to execute this function whenever demanded. The request can be sent via HTTP or a message queue or something else entirely. Your code itself doesn't have to care. Example: AWS Lambda
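The FaaS contract described above can be reduced to a tiny sketch: your code is just a function from input to output, and how the event arrives or where the result goes is the platform's job. This is purely illustrative and not any real provider's API:

```shell
# Hypothetical FaaS-style handler: the platform invokes it with an
# event and delivers whatever it returns. No server loop, no port,
# no state carried between invocations.
handle() {
  event=$1
  echo "processed: ${event}"
}
```

A platform like Lambda wraps roughly this shape in per-language runtimes; the signature varies, but the input-in/output-out contract is the same.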


Practically, it's always running, as otherwise you'll get a cold start delay, but that's close enough. It doesn't mean there's no server, so the "lol how is it serverless if you have a server" meme is tiring.


It means it doesn't run on specific servers that you manage.


But the article starts with "I run a Dokku server on a $7/month VPS", so the author meant something else (probably just loose usage). Maybe Dokku has 2 modes, where you can launch it on a classic server (fixed price per month) or deploy on a pay-as-you-use platform (https://www.cloudflare.com/learning/serverless/what-is-serve...). Calling something serverless when it isn't is misleading.


So every website I do not own is serverless?


Yeah something like GitHub pages is serverless.


One more upvote for Dokku. Been using it for as long as I can remember hosting things on servers. It is such an incredible piece of software. And open source to boot. If any of my projects ever make money, Dokku will be the first project I'm funding.


How does it compare to Coolify? https://coolify.io


I started out with Dokku and it was fine, but ultimately switched to Coolify solely because it has a web UI. Dokku Pro has one as well, but my use case was primarily just for hosting a demo and spinning up instances from GitHub PRs so didn’t feel worth spending money on.


Why does the title say “serverless” though? AFAIK Dokku is very much a “server platform”.


Here's a better question. For people that roll their eyes at the mere mention of "serverless" (like me), what is the value proposition of Dokku over VMs and your own dockers?

Don't convince me it is like AWS serverless. Convince me to give up VMs and docker images.


Exactly.

Dokku looks great, but what is the value of using it over "run your container" platforms like Google Cloud Run, Digital Ocean App Platform, or Fargate?


One benefit for me was that a VPS with 2 vCPUs and 2-4 GB of RAM is about the same price as one of these app engine services. So I can run 3 or 4 Dokku apps, with Postgres, Redis, and Memcached connected to them, on my own VPS and still have headroom when inspiration strikes. I moved a production app from Heroku to Dokku and saved hundreds a month, and still got tons more compute.


In terms of money it might end up cheaper for the infra itself (especially if it runs in your garage)

In terms of your personal time, well, you won't grow a business on it without hiring people to maintain it or spending a lot of time doing devops.


> Convince me to give up VMs and docker images.

I am in the same boat. Using VMs and docker images and not sure how this would benefit me.

I have looked at the AWS serverless stuff. It appears to solve problems I don't have.


Well its problem is that you still have to... install the entire thing. Whatever serverless means, it should allow programmers to focus on their code, not on the platform. Like, at all.


It’s the backend that implements serverless architecture. A serverless server, I guess. Roll your eyes if you like, but “serverless” is still a snappier term than “declarative on-demand server provisioning, configuration, and scaling” and most people are into that whole brevity thing.


Except Dokku doesn't do those things. Dokku doesn't scale your app automatically and it doesn't shut it off when it's not being used. It runs your web server process continuously, handles some 12 factor config, and does some nginx stuff for you. Until this year it didn't support managing a cluster at all and was entirely focused on single box deploys. The scale command just runs more processes on the same machine which if you're not using Node is probably not even a good idea to do.


> snappier term than “declarative on-demand server provisioning, configuration, and scaling”

Quoth the raven, “servermore”…


Can someone explain what this is for someone who doesn't know what Heroku is, and why it's called serverless when it clearly requires a server?


"Serverless" means there's no stateful application server in your architecture. No instance, just callable code. Shell script, not daemon.

You write narrow business-level functions that take inputs, do their business, and give you the output. They're called, they run, they terminate and carry nothing forward. You deploy them to a hosting platform which will handle making them available, routing requests, and all the other stuff that's incidental to doing the work. The only thing that's your concern is the logic that turns inputs into outputs. At least that's the pitch.

Technically a PHP script that you run as CGI through Nginx is "serverless" that way, though of course it's hosted with a software server running on a hardware server. It's serverless because you write a PHP script and it doesn't care what runs it or what's going on around it. It doesn't need to work within the context of an application server like an endpoint in a Django or a Rails app would.

Someone else could own a bunch of servers running Nginx, you'd give them prettyprint.php, and it would pretty-print your JSON or whatever at the URL they'd give you.

Services that do this are called serverless platforms. The hosting model is called "functions as a service" (FaaS). If you like the architecture but you don't want FaaS from Amazon, Heroku, or some other third party, you can host yourself a mini FaaS with something like Dokku.
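To make the CGI comparison above concrete, here is a sketch of that contract, written as a shell function so it's easy to exercise; a real CGI program would be the body of this function in its own executable file:

```shell
# Illustrative CGI-style handler: the web server execs the program once
# per request; it writes headers, a blank line, then the body, and
# exits. Nothing persists between requests, which is the "serverless"
# property being described.
cgi_handler() {
  printf 'Content-Type: application/json\r\n\r\n'
  printf '{"query": "%s"}\n' "${QUERY_STRING:-none}"
}
```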


heroku is (was?) a platform for hosting web apps with a lot of nice ergonomics, to let you run commands like "heroku up" to get your app running on the web with minimal fuss.

this appears to be a project to give you the niceties of a PaaS, without a third party actually providing the platform.


Heroku still exists, it just sucks. Owned by Salesforce.


Heroku is still good for tons of stuff. After Salesforce bought it, it stopped getting much better: they removed part of the free quota, some add-ons disappeared, and the like. For example, HTTP/2 support only went into beta in May. It's a shame.


Or simply use traefik + app containers via standard podman + systemd integration:

https://docs.podman.io/en/latest/markdown/podman-systemd.uni...

https://traefik.io/traefik/
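For reference, a minimal sketch of what the podman + systemd (quadlet) side of that setup can look like, assuming podman 4.4+; the unit name, image, and ports are illustrative:

```ini
# /etc/containers/systemd/myapp.container (illustrative)
# systemd's quadlet generator turns this file into a service unit
# that runs the container under systemd supervision.
[Unit]
Description=Example web app

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=127.0.0.1:8080:80

[Install]
WantedBy=multi-user.target
```

Traefik (or another reverse proxy) then routes to the published port.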


The word "simply" is doing a bit too much heavy lifting there - I'd love to see an article about that solution that's as detailed as Hamel's Dokku one https://hamel.dev/blog/posts/dokku/


I love this idea, simple reliable pieces.

edit: looking into traefik docs and perhaps not what I would call simple, probably would use caddy as reverse proxy instead.


That's the opposite of simple.


that's what I do, and it's easier to set up and understand than dokku, that's for sure. tried using dokku multiple times and ran into so many issues with it. with traefik you literally just have to copy and paste a few lines in a compose file and push to docker hub, and traefik will pick it up.
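For anyone curious, those "few lines" typically look something like this; a hedged sketch assuming a Traefik instance with the Docker provider and a certificate resolver named "le" is already running (all names are examples):

```yaml
# Illustrative compose service with Traefik routing labels.
services:
  myapp:
    image: myuser/myapp:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)
      - traefik.http.routers.myapp.tls.certresolver=le
      - traefik.http.services.myapp.loadbalancer.server.port=8080
```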


Dokku handles a significantly larger scope than the setup you describe, including a plugin ecosystem: https://dokku.com/docs/community/plugins/


I'm going to confess something: I still do it oldschool. A single box with a SQL server and a webserver running on it. I've taken courses in Docker and whatnot but never applied them.

When you're hosting a single-node cluster, what value do these docker-based tools offer? Is it the fact that you can use a Dockerfile to declare your OS-level dependencies consistently?


Yes, cattle not pets, and portability.


But does the underlying server itself that's hosting the docker node not become a "pet", even if it hosts the "cow"?


There are specialized container OS hosts or ansible type updaters for wrangling traditional servers.


Is there any advantage of Dokku over using Kubernetes? (I already have a personal single-node cluster.)

I initially set up Dokku on K8s, but since it would just deploy to that same server, it makes more sense IMO to just use K8s.


If you've already got k8s set up and you're comfortable with it, I'm not sure that Dokku offers much?

But... wow. For my single non-k8s server at home, Dokku makes getting stuff running behind HTTPS about as simple as I could hope for!


Some took my comment to mean "why would anyone use Dokku"?

This project looks great, and is obviously far less complex than Kubernetes. I was asking for advice about my particular situation. I wasn't saying that someone should choose Kubernetes over Dokku.

Though, I do see many advantages of Kubernetes for a homelab environment, mainly that you get experience using K8s which can transfer to a professional context.


It’s like two orders of magnitude easier to setup and use.


I mean "not having to learn Kubernetes" is surely a factor?


I liked Dokku when I was still happy using docker, but since I started working on https://www.dbos.dev/, I value microVMs way more.

The problem with Dokku is that, while it's easy to use if you have experience in devops, well... you still need to know devops! That's not what I call serverless...


Will it ever be possible to use DBOS with other providers asides from aws? Some part of me concerned about the vendor lock-in


The SDKs -- which do most of the heavy lifting of managing state for you -- are open source. For example the TS SDK (https://github.com/dbos-inc/dbos-transact) runs on node and you can run node anywhere (k8s, VMs, etc.)

With respect to AWS, where we currently host DBOS Cloud, we rent baremetal machines and built the entire hosting layer using Firecracker, Postgres, and other well known frameworks that we can bring to other metal providers (https://www.dbos.dev/blog/anatomy-of-dbos-cloud).


Don't you need to run everything in bare metal to effectively leverage microVMs? AFAIK, unlike containers, you can't efficiently run microVMs on cloud VM instances.


You need nested virtualization, which many VMs support -- it is architecture dependent. But, yes, to maximize the benefits you'll want to run on baremetal.

From the standpoint of a cloud user, the kind that likes Dokku, the experience is cheaper/faster/more secure if the infra uses uVMs vs containers.


Dokku looks great, I have not tried it yet myself. I've been using CapRover (similar thing) for years and have no complaints! LetsEncrypt support built in, has a web interface and CLI utility to deploy (can also use github triggers).


Striking while the iron is hot: I love dokku and I've used it for many years. That said, I thought I'd do something similar but with the goal of supporting k8s manifests

https://github.com/skateco/skate

It's daemonless, multi-host with multi-host networking, service discovery and like I said, allows you to create deployments, cronjobs, and route traffic via ingress resources.

Comes with letsencrypt out of the box.


Does anyone have experience using dokku-postgres?

The GitHub readme is well documented but hard to know how that translates into the dev exp, like with scaling or upgrades and if its features are comparable to managed Postgres providers (I'd assume no but happy to be proven wrong!)

[0] https://github.com/dokku/dokku-postgres


I used to use it but what got me was letting my Dokku install get stale and then upgrading a whole bunch of versions in a row. The old plugin broke, the new one wasn’t compatible, there were version issues.

Nowadays I just run Postgres directly on my Debian box and create a new user/DB for every application, then set an env variable for the Dokku app to connect. Postgres is so solid to begin with that it requires no babysitting unless you have very intense workloads (at which point either use a hosted solution or start thinking about how you'll do your own DBA).
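A sketch of that setup, with illustrative names (the host's Postgres must be configured to accept connections from the Docker bridge network for the last step to work):

```shell
# One role and one database per app, on the host's Postgres.
sudo -u postgres createuser --pwprompt myapp
sudo -u postgres createdb --owner myapp myapp_production

# Point the Dokku app at it via an env variable. The hostname depends
# on your Docker networking; 172.17.0.1 is the usual docker0 bridge IP.
dokku config:set myapp \
  DATABASE_URL="postgres://myapp:secret@172.17.0.1:5432/myapp_production"
```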


Is anyone running TrueNAS Scale for this kind of purpose? I haven't used it, but its architecture around k8s seems extremely promising. For most use cases a simple docker container is all you need, but sometimes running other apps like Grafana with a k8s manifest is easier to manage in one VPS and gives you the flexibility of a cluster. Just curious.


IMHO Dokku still outperforms all other open-source alternatives for deploying Rails apps. There are a few proprietary alternatives that manage the job with far more simplicity, but those are paid... I have tried to deploy with Kamal, DHH's preferred way, but it is still not better than Dokku in its management and simplicity, and on top of that, it follows the framework's latest trend of poor-to-no documentation.


I've been loving Kamal in production. What problems did you run into?


The documentation is a major turn-off. I haven't revisited the situation in a few months, maybe things have changed, but I could not deploy a single app to my traditional cloud VMs... I never struggled that much with Dokku, hence the comparison...


Ruby. If only it were a self-contained binary in Go (or Zig), it could go places.


Yes, this is what's holding Ruby back from the broad adoption that Zig enjoys.


to be honest, i just heard of it and i'm thinking twice because of ruby.

I really don't long for a gem breaking when I go to update my system. Self-contained binaries ftw.


I am wondering why PHP isn't more popular. Think of PHP as a serverless platform:

1. 10 minutes to learn, simple syntax, rich built-in functions

2. deploy with standard tools like ftp

3. easy scaling with php-fpm.

4. not exactly docker, but vhost works in most cases. And vhost can be containerized.

5. insta hot-reload

6. dirt cheap cost. Hi dreamhost

7. no vendor lock-in bullshit, supported by everyone everywhere

well, at least that's how it used to be


Does PHP run your database? Your caching layer? Handle SSL? Can you run a queue with PHP? Would you? If your answer to any of those things is "no", then PHP isn't a serverless platform. It is a language that you use to build web applications. That makes it very different from Dokku, Kubernetes, etc.

But yes, you're right that PHP is still a great choice for building the "web application" portion of your infrastructure.


To be fair, serverless platforms don't always provide a queue or database. Usually they come along with some SQLite or KV store for convenience.

SSL was handled in front of php-fpm, by either Apache or nginx, like most "platforms" do. Modern platforms may choose less-popular fronts like Caddy, and Caddy does support php_fastcgi.


The purpose of serverless is to easily deploy applications without worrying about / managing infra at all. The only reason you don't need to run your own infra with PHP is because it doesn't do anything except the web application. As you noted, it doesn't even do the web serving. Most serverless application platforms are then paired with hosted databases, hosted queues, etc., so that you don't need to manage those either. But yes, only some of those additional pieces can be described as "serverless", e.g. AWS Aurora is an actual serverless database, vs DigitalOcean's hosted Postgres, which does not scale in a "serverless" way. Regardless, minimal management of the database is required on these platforms.

Dokku in turn is a replacement for those platforms and makes it easy to host your own [insert some part of your platform stack here]. That is the purpose of a PaaS, which is what Dokku is and what PHP certainly is not. Yes, the OP maybe should've said "My favorite personal PaaS", but that is less catchy, and I think calling it a "serverless platform" is close enough to the truth and gets the point across.


> Most serverless application platforms are then paired with hosted databases

> Dokku in turn is a replacement for those platforms and makes it easy to host your own

I tried to search

database site:dokku.com/docs/

or

queue site:dokku.com/docs/

But failed to find anything relevant. Am I doing this wrong? Where does Dokku as a "replacement platform" provide the database or queue capability?

> purpose of serverless is to provide scaleable infra which you don't need to manage

> Dokku is an extensible, open source Platform as a Service that runs on a single server (from https://dokku.com/docs/)

and how does Dokku on a single server solve the "scaling" problem exactly?

> As you noted, it doesn't even do the web-serving

Technically speaking, php can. https://github.com/swoole/swoole-docs/blob/master/modules/sw...

it's just not widely used.

Dont get me wrong, Dokku is a fine project of its own, but it doesn't seem to tackle the problems you mentioned either. On the other hand, if php is not enough, the LAMP can do everything.


I'd recommend reading more thoroughly about Dokku (or PaaS in general) since it is somewhat clear you don't understand the value prop. Just grepping for certain terms isn't gonna get you very far.

Dokku is Docker + Buildpacks + Orchestration. You give it a buildpack (developed/popularized by Heroku) to specify your application, which is the main "serverless" part. Then you tell Dokku that you want datastores or queues or whatever, via plugins[1]. Dokku handles pulling the relevant docker images, running them, and linking it all together. In other words, Dokku is open-source Heroku, where you can add databases and such with a single command, specify your app with a buildpack, and you're off to the races.

If Dokku only did the application via buildpack part, people would not find it nearly as useful and PHP on a webhost somewhere would indeed be a good alternative.

But Dokku is not that, it is a PaaS, so it can also run your PHP application just as easily. And if you're self-hosting Postgres and RabbitMQ via Dokku, why would you run your PHP on Dreamhost? You probably wouldn't, you'd throw a PHP buildpack up there and call it a day.

[1] https://dokku.com/docs/community/plugins/?h=plugins#official...
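Concretely, the "single command" datastore flow being described looks roughly like this; service and app names are illustrative, see the plugin docs for specifics:

```shell
# Install the postgres plugin once on the Dokku host, then create a
# database service and link it to the app, which injects DATABASE_URL.
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create mydb
dokku postgres:link mydb myapp
```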


ok, but "pulling the relevant docker images" does not scale, bro. There are a gazillion parameters to fine-tune for Postgres alone, and trust me, running a db inside a container is bad, as stateful containers are terrible in general.

And I really do wish scaling were as easy as "pulling docker images".


So, how is PHP + a queue library different from Dokku (or PaaS, as you call it) + a queue plugin? Other than the fact that you write it all in yaml/json/gui instead of code?

> "I'd recommend reading more thoroughly about Dokku (or PaaS in general) since it is somewhat clear you don't understand the value prop."

My cynical and uncharitable translation of this and the rest of your post for the GP: We've invented platforms or PaaS or DevOps so that we can carve away work from devs who traditionally did this kind of stuff in partnership with infrastructure, and you're just not understanding it because you're coming from a dev perspective.

It's all just evangelism/marketing-push to gather the online webosphere mindshare and capture the development process so that all that's left for the downstream individuals (developers) is nothing but config/yaml/plugin-ecosystem and even that stuff only the ordained priests of DevOps understand (or are allowed to touch), again leaving developers with even less, making them glorified code-monkeys that need to spit out CRUD screens. It's either that, or put "DevOps" on your resume or title, almost like a mafia-style shake-down.

It works, sure, provides immense value sometimes and gets you to market quickly sometimes, sure, etc. But damn is it toxic and detrimental for our community in the long run. In an alternate universe or timeline we could have gone some other direction, but this is what we have for now, so we have to live with it. DevOps and UX will many years from now be known as two very bad paths that we allowed our industry to be dragged down into.


To answer your question with another question: what exactly is the PHP queue library interacting with? Where is the queue being persisted? How is that persistence layer being deployed/run?

I am a dev, so your translation is indeed cynical and uncharitable (and inaccurate).

I also don't understand what you have against DevOps or UX. DevOps is just how you actually run the applications/services/etc (that you've written with code) in production. DevOps can be done by a developer or by people dedicated solely to that task. UX is just caring about how end users actually interact with the application/services/platform. This can also be done by developers, or by designers, or by UX specialists, or whomever.

These concepts exist regardless. Sure, you can choose not to care about how your application and its required dependencies are run. Sure, you can choose not to care about how your users interact with the things you are building. But not caring about those things doesn't mean they go away.


But that's the reason why serverless has mostly failed so far -- because it is not stateful. And that is only because most existing offerings don't have the correct primitives.

By that I mean that if the programming model has a way to automatically capture state, an implementation framework can manage it automatically on behalf of the application, while the user (person writing the app logic) doesn't have to care about the state.


All of those are easier in Python or Ruby or Javascript. And all of those languages have done things properly to make you write less code, with fewer errors and better performance. That's why PHP became a dinosaur.

> not exactly docker, but vhost works in most cases

You are understanding Docker and Vhost wrong then..


> easier in Python or Ruby or Javascript

Have you tried writing anything with those on serverless™ platforms? They don't do things properly; serverless makes coding harder, yields more errors and poorer performance.

> better performance

PHP8 JIT offers outstanding performance.

> You are understanding Docker and Vhost wrong then..

chroot/jail'd multi-workers with inetd port forwarding solve the same problems as cgroup'd processes and netns do. I'd argue it's a sidecar before it was cool.

If you take extra steps like wrapping LXC for your web workers, you may even call your platform, wait for it, Heroku. https://devcenter.heroku.com/articles/dyno-isolation


I mean, it’s not like PHP is unpopular.

According to the first thing I googled, it’s more popular than Go, and even higher up the list if you remove languages like C/C++ that aren’t typically used for web.

https://www.hackerrank.com/blog/most-popular-languages-2023/


For the web, PHP remains #1. It's super fast, especially when using FrankenPHP or Swoole. Laravel has the fastest development velocity out of any framework available.


I really like Dokku. I recently wrote a plugin to automatically expose the apps I add on it on my local network as subdomains of the host via MDNS (https://github.com/calyhre/dokku-mdns), perfect for hobbyists


I love this! Did you add this to our plugins page by any chance? I can't recall and not at my personal laptop to check.

Disclaimer: I am the Dokku maintainer.


It is yes. Thanks for such a nice tool


I have a heroku app that I'd love to try migrating. It's a pretty simple express.js single page app running on heroku's lowest level, uses firebase and has no database or other backend dependencies. The domain is on godaddy and it uses Cloudflare for DNS. Heroku's "Automated Certificate Management" takes care of the SSL cert.

The main issue is that it's for playing a game, and the game is held in-memory, and once a day heroku restarts their servers, so everyone gets kicked out of the game they're in when it restarts with cleared process memory. I need to fix this by migrating and I don't have time.

If anyone feels like this migration would be something that they have relevant experience for and which they could do confidently, please get in touch. Email in profile.


Your server will likely have to restart occasionally regardless of whether you migrate or not. Maybe you could look at using Redis to store your game state. I think Heroku has a free add-on.


Great writeup -- I have a gist floating around somewhere with a similar workflow, but for bitbucket pipelines.

Good solve!


I'm outing myself as a bit ignorant here, but the author:

- used a VPS - made a docker file

So what does doku actually do?


Same here.

All I do on top of what you said is use Traefik for reverse proxying and Let's Encrypt.


If you're after a simpler solution, I suggest

https://github.com/daitangio/misterio

I created it for managing my homelab; it works great and is a thin layer over docker compose.


Sometimes the slightly more complicated option (Dokku is still a very thin bash wrapper around running git, ssh and docker commands) is simpler, just because it has great documentation and other people using it.


Self-hosted solutions are the way.

No one will be stealing your data the way the big corps do.

Less chance of overrunning your budget, given how cloud platforms conveniently have no brakes on resource utilisation.


> No one will be stealing your data as the big corps do.

We take care of that ourselves by self-hosting and having faulty or unmonitored backups :)


Or you can use something with straight-forward pricing, like Digital Ocean. I don't understand why AWS is default for everything. People need to snap out of that racket.


Because provisioning/configuring/monitoring/scanning/patching/grooming servers sucks.. and people are lazy.


Digital Ocean is even on the expensive side these days. There are a lot of vps providers that offer decent pricing and include bundled bandwidth. Some of the hosts may not be good for business-use, but should be fine for personal stuff.


I must have missed the incident where a major cloud provider was stealing their customers' data. Do you have a link?


Any suggestion for a simple FaaS platform that isn't OpenFaaS? Fn Project looked promising, but their repo looks abandoned (more than one year without commits).


just get good with docker and write a shell script to instantly create/update pre-existing docker images. you won't regret it.


Damn, who wrote this amazing piece of software


What a genius the original maintainer of this software is


I do have a hard time understanding how this is different from setting up a VM with Docker running on it.

I don't see any added benefit in terms of the extent of configuration I need to deploy. What is the new thing Dokku and other similar services bring to the table? What is the extra configuration I don't have to do if I go with it?


Looking at the examples in the post above, and at the Dokku site and documentation:

Is it the case that there is no UI in the free version? Just hacking around in some files? That is not very user friendly and certainly does not really remind me of Heroku.

The GUI you get with the Pro version looks good. and only a bit more than $800 for life.


I love dokku. And josegonzalez is always a huge help.

I pay a monthly support to dokku. You should too. Jose will help you either way, but I feel slightly less guilty when I ask questions and he immediately resolves them for me in the slack channel. Don't you want to use this incredible piece of software guilt free?


Has anyone made, or does anyone have a link to, a recent, detailed comparison of all these self-host-platform projects?


I have a video on how to set up Dokku via Docker, so you can launch it on any server, like AWS or DigitalOcean, without paying extra for other services.

https://youtu.be/O_-KxC9FjuA


Last time i tried to use Dokku as a Docker container it was unusable. Nginx, Traefik, Mongo, https: all plugins with bugs, 2 weeks wasted on debugging without result.


Are there plugins or anything that will allow Dokku to run queues of batch jobs? Skimming the docs, there is clear support for Heroku-style apps and one-offs that can be launched via cron, but not so much for batch, although it would be quite useful.


Dokku maintainer here.

You may wish to look into a message processing system in your language of choice and run that as a daemon in Dokku. We have plugins for various datastores, and commonly I see folks just connect their worker processes to that. It's much lighter weight than spawning new containers for each workload.


In my case spawn overhead is negligible relative to the jobs themselves, my existing experience with AWS Batch has been pretty good. I took another look at the Dokku documentation and extending `run:detached` would work well enough, though maybe it's time for me to revisit Airflow or something in that direction.


A linode vm + dokku has supported my personal projects (for profit and otherwise) for years.


For personal stuff I always come back to Ansible and, if I want it containerized, Kubernetes


Dokku is great, but have you tried https://ptah.sh ? ;)

(sorry)

This is the service I have been working on for the last few months alongside my 9-5. Heavily inspired by Coolify, but it is based solely on Docker Swarm to save development effort for other features.

Also, it is a bit opinionated, to adjust the UX to what I need myself, so there are slight deviations from the way others work with Swarm.

I have a short vid which I have recorded today on how one could easily deploy WordPress to any VPS: https://youtu.be/k34Zdwcsm6I

It covers usage of the 1-Click apps templates which speed up everything "a little bit".


What’s funny is before I read this I thought Dokku was just used as a convenient driver for kitchen testing and stuff like that. I never really thought of it as a PaaS!


Dokku is AMAZING. I’ve been using it for about 6 years, never had a single problem with it and I host a lot of apps on a single instance server. Can’t recommend it enough


Can I not do all of these things with docker-compose already?


dokku does a lot of things that docker-compose does not. One of the bigger ones is zero downtime deploys:

https://dokku.com/docs/deployment/zero-downtime-deploys/

This has the added benefit of warming up the app before traffic ever hits it, something I was always surprised that even heroku didn't do (at least, the last time I used it ~6 years ago)


I see compose in production all the time, especially from folks who want compose support _in_ Dokku. I brought this up with the compose project manager a few months back. It seemed like an interesting use case, but it didn't seem like the Docker folks were... aware that this was how folks used docker compose? There is a project out there, Wowu/docker-rollout, that sort of provides this, but it has some rough edges.

Disclaimer: I am the Dokku maintainer.


Interesting, how is compose meant to be used then? Just for building images and running local dev environments?


My understanding is that they are focused on the local development loop at the moment, especially with the acquisition of Tilt. That said, I don't work there so take this all with a grain of salt.


No. With Dokku you can just push to git remote and it'll build, deploy the image, set up LE certificates, roll out the app with zero downtime (if you want). To get this running you'd have to do some manual stuff with git commit hooks, but that's just one small part of Dokku.
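For anyone who hasn't seen the workflow, the loop being described is roughly the following, assuming an app named "myapp" already exists on the Dokku host and the letsencrypt plugin is installed (names are illustrative):

```shell
# One-time: add the Dokku host as a git remote.
git remote add dokku dokku@my-server.example.com:myapp

# Every deploy: push; Dokku builds the image, health-checks the new
# container, then swaps traffic over to it.
git push dokku main

# One-time, on the server: issue and auto-renew a certificate.
dokku letsencrypt:enable myapp
```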


I have gitlab-runner on my VPS, all it does is `docker compose up`, that already includes a traefik setup with LE certificates.

Zero-downtime does sound interesting though, and is probably better than `traefik.http.middlewares.test-retry.retry`.


I've been using dokku for years. Constant useful updates over time and the developer was super helpful with a niche case using old documentation.


Currently using Convox a lot, but miss the simplicity of Heroku. Anyone know if there's a good comparison breakdown of all the PaaS options out there?


This is not fully up to date but it's a good start: https://www.herokualternatives.com/


Sadly it's missing Convox. In particular I'm looking for comparisons of what each one excels at relative to the others.


At least Coolify and CapRover are missing. Also, I don't think Kubernetes qualifies as PaaS.


We wrote up this guide (withcoherence.com, we're somewhere in the PaaS space and I'm a cofounder) https://www.withcoherence.com/post/the-2024-web-hosting-repo.... Hope it's helpful!


I feel like this is similar to trying out Docker for small side projects for the first time: lots of hidden gotchas.


I love dokku! I've been running my SaaS seamlessly with it for 5+ years now. It's awesome to see it actively maintained.


I feel like a caveman doing this with a bare git repository and a hook; one of these days I am going to give Dokku a try!
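
For anyone curious, a minimal version of that caveman setup looks like this (ROOT, the branch name, and the restart command are all illustrative; adjust for your server):

```shell
# Create a bare repo on the server plus a post-receive hook that checks
# out each push into a web root.
ROOT=${ROOT:-$PWD/deploy-demo}
mkdir -p "$ROOT/www"
git init -q --bare "$ROOT/myapp.git"

cat > "$ROOT/myapp.git/hooks/post-receive" <<EOF
#!/bin/sh
# Check out the pushed tree into the web root, then reload the app.
GIT_WORK_TREE=$ROOT/www git --git-dir=$ROOT/myapp.git checkout -f main
# systemctl restart myapp   # or however you restart your service
EOF
chmod +x "$ROOT/myapp.git/hooks/post-receive"

# Developers then deploy with: git push <server-remote> main
```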


Looks promising. I have been self-hosting a lot of apps; this could make things easier.


How well does Dokku support running on ARM?


I dropped armhf (32-bit ARM) a few releases ago. It was painful to maintain, and the few users of it were on older Raspberry Pi installs. I think there are other tools out there that better support low-powered platforms (piku comes to mind).

ARM64 should be fine, with some caveats:

- Dockerfile/nixpacks support is great! Just make sure your base images and your Dockerfile support ARM64 building.

- Herokuish _works_, but not really. Most Heroku v2a buildpacks target AMD64. This is slowly changing, but out of the box it probably won't build as you expect.

- CNB buildpacks largely don't support ARM64 yet. Heroku _just_ added ARM64 support in heroku-24 (our next release switches to this), but again, there is work to do on the buildpacks to get things running.
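
One quick way to sanity-check the Dockerfile route before deploying (this assumes Docker with buildx available, plus QEMU emulation if you're on an AMD64 host; the tag is arbitrary):

```shell
# Build the image for ARM64 from the directory containing your Dockerfile.
docker buildx build --platform linux/arm64 -t myapp:arm64-test .
```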

I run Dokku on ARM64 locally (a few Raspberry Pis running things under k3s) and develop Dokku on my M1 MacBook, so if there are any issues, I'd love to hear about them.

Disclaimer: I am the Dokku maintainer.


A while ago I couldn't get Let's Encrypt working properly; it was very frustrating. I think I just gave up at some point.


Thank you for taking the time to provide this summary, and thanks for all your work on Dokku!


Obligatory mention of Piku as a Dockerless alternative: http://piku.github.io/


"Serverless platform" is an oxymoron.

But it worked for Salesforce, which is a software company whose slogan is "no software".


Salesforce's "no software" slogan dates back to a time when software was sold in boxes on store shelves. They were one of the first (possibly the first) cloud-based business app. The slogan only looks weird in hindsight because the term "software" now includes SaaS apps.


Let me guess ... it's called serverless but it still has a server somewhere in the equation?


Most of the time I see "serverless" used these days, it refers to the fact that server specifics are abstracted away from the application's deployment artifacts. Instead, the app just runs on a platform of some kind without having to be built for a particular version of a particular OS, etc.

While it is being used a bit recklessly here, taking it literally is about as insightful and constructive to discussion as pointing out that "cloud" servers are located on the ground.

I would even defend its usage here by pointing out that it's entirely possible to use this at a company where the servers are managed by one person or team, and the developers building applications simply interact with the service and never touch a server themselves. Neither team has to touch the other's scope, making it indistinguishable from conventional "serverless" approaches, where the decoupling occurs across companies rather than across teams within one company.


> without worrying about having to be built for this version of this OS, etc.

Maybe call it "server-agnostic" or "OS-agnostic" then.


I honestly don't see the need to change it. The first use of "serverless" is from this article:

https://readwrite.com/why-the-future-of-software-and-apps-is...

In it, he points out:

>The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them.

I don't know how anyone could interpret "serverless" as meaning there's no server involved at any point in the application's execution, and if they did, I'm not sure what harm it causes? It seems like the only objection here is a pedantic urge to be more correct.


Can we not do this? Everyone knows that “serverless” doesn’t actually mean there are no servers. It’s not productive to do this “haha gotcha!” trope every time someone uses the serverless term.

Serverless refers to the fact that you can launch individual workloads on the platform while abstracting away the underlying infrastructure. Yes, to set up dokku you still need to provision a server. But to deploy an application onto dokku after it’s been set up, you do that without worrying about provisioning new infra for your app. That’s what is “serverless” about it, and it’s a perfectly acceptable use of the term.


"Serverless" means 1- pay for what you use. 2- No infra setup, ever. Dokku requires a server that you have to manage. It is not serverless. You should never hit a scaling wall.


Azure Functions are serverless to us, but for the team developing and deploying that feature they are not serverless. Dokku provides a tool so that once you deploy projects they can be ‘serverless’.


1- That would rule out serverless with (free) unlimited use (à la PocketHost).

2- Then getting a third party to set up Dokku and using that would qualify, because it'd be the same as getting AWS to set up their server abstraction. The platform is serverless; you hosting it is probably not, maybe "server-light", since you set up the abstraction once and use it for many apps.


> Can we not do this? Everyone knows that “serverless” doesn’t actually mean there are no servers.

Then maybe people shouldn't use a term that literally means "there are no servers". One doesn't get to use a word to mean the opposite of its actual meaning and then complain when people don't like it.


The actual meaning of words is defined by how people use them. Serverless has a very specific, well-defined meaning despite its seemingly contradictory etymology.


Are you the sort of person who says "ackshewelly, people in orbit aren't weightless", or complains that wireless headphones technically contain wires, or that motionless rocks are actually moving really fast because the Earth is moving through space...

It's just annoying.


Dokku doesn't mention "serverless" anywhere; it's just this person's blog post that uses the word incorrectly.


It is, but "We provide a programmatic interface for deployment that allows deploying docker containers on a VPS or server that you control" doesn't have a good buzzword.


Feel free to point us to the computing paradigm that doesn't require execution on some host


> Dokku is an open-source Platform as a Service (PaaS) that runs on a single server of your choice

Lol


This is the part that gets me every time. It sounds pretty neat, but... if it only works on one server, what's the point?

What about things like scaling? Or even just: what if your one server runs out of resources to fit more apps on?


It's a tradeoff: a real PaaS is more managed, fault-tolerant, and scalable; Dokku is much less expensive, especially with multiple projects.

One server scales vertically and can serve a good number of projects and users. Huge spikes, e.g. due to attacks, lead to outages instead of runaway bills.


> What about things like scaling

Premature optimization for 99% of people's projects. Once you run into "scaling" issues you can always run it on a more powerful server.

Related: https://twitter.com/dhh/status/1827322640685506895


>what's the point

To get started in a simpler way, and in a way that solves 80% of use cases. Once you need to scale, then you can scale. Why worry about that upfront with all the complexity it entails?

>what if your one server runs out of resources to fit more apps on

There's vertical scaling. Rent a bigger server.


We've supported multiple servers for a few years and have had official k3s support since the beginning of the year, so it's not just one server anymore. We even support managing the servers associated with a k3s-based cluster.

Disclaimer: I am the Dokku maintainer.


Ohhh, I stand corrected. I don't think that was an option the last time I looked at Dokku. I see the schedulers section in the docs now, thanks for pointing it out!

Does the k3s scheduler work with existing non-k3s k8s clusters as well?


Yep, you can set a single property for the kubeconfig and it'll respect that.
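
For anyone searching later, the command is along these lines. I'm writing the property name from memory, so treat it as an assumption and double-check the scheduler-k3s docs:

```shell
# Point the k3s scheduler at an existing cluster's kubeconfig
# (property name from memory; verify against the Dokku docs).
dokku scheduler-k3s:set --global kubeconfig-path /path/to/existing/kubeconfig
```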


You can still solve for that just fine in a multi-Dokku setup. It's just that Dokku won't do the coordination for you. Sometimes you do want something more integrated like k8s/openshift/nomad instead; sometimes not.


Dokku's multi-server offering is based on k3s. We interact with k3s but offload any actual clustering to k3s itself as it does the job better than Dokku could :) You can also just tie Dokku into an existing K8s cluster on your favorite cloud provider instead.

Disclaimer: I am the Dokku maintainer.


This sounds like a great approach! Thank you for keeping it up and on!



