Over the years, I kept tweaking my setup and have now settled on running everything as a Docker container. The orchestrator is docker-compose instead of systemd, and the proxy is Caddy instead of nginx. But same as the author, I also write a deploy script for each project I need to run. Overall I think it's quite similar.
One of the many benefits of using Docker is that I can use the same setup to run 3rd-party software. I've been using this setup for a few years now and it's awesome. It's robust, like the author mentioned, but if you need the flexibility, you can also do whatever you want.
The only pain point I have right now is rolling deployments. As my software scales, a few seconds of downtime on every deployment is becoming an issue. I don't have a simple solution yet, but perhaps Docker Swarm is the way to go.
If I understand you correctly, you do a sort of blue-green deploy? Load balancing between two versions while deploying, but only one most of the time?
How do you orchestrate the spinning up and down? Just a script that starts service B, waits until service B is healthy, waits 10 seconds, stops service A, and Caddy just smooths out the deployment?
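Something like this minimal sketch is what I'd imagine (hypothetical service names, not anyone's actual script; assumes both services are in the same compose file with `container_name` set and a healthcheck defined):

```sh
#!/usr/bin/env sh
# Sketch of the sequence described above: "app-green" is the new version,
# "app-blue" the old one (both hypothetical names).
set -e

docker compose up -d app-green    # start the new version alongside the old one

# wait for Docker's healthcheck to report the new container as healthy
until [ "$(docker inspect -f '{{.State.Health.Status}}' app-green)" = "healthy" ]; do
  sleep 1
done

sleep 10                          # give the proxy a moment to pick up the new upstream
docker compose stop app-blue      # retire the old version
```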
Thanks for that. Didn't know this was a thing in Caddy. Seems low-effort, so I'll probably do that for now. I omitted it, but I'm actually using caddy-docker-proxy. It's awesome; it makes the Caddy config part of each project nicely. Haven't seen docker-rollout though. Seems like it could be promising.
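For anyone else on caddy-docker-proxy, a rough sketch of what the labels can look like with active health checking enabled (hypothetical domain, port, and health endpoint; not necessarily the exact setup discussed above):

```yaml
services:
  app:
    image: myorg/myapp:latest   # hypothetical image
    labels:
      caddy: app.example.com
      caddy.reverse_proxy: "{{upstreams 8080}}"
      # Caddy only routes to upstreams that pass this check
      caddy.reverse_proxy.health_uri: /healthz
      caddy.reverse_proxy.health_interval: 5s
```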
I've built up the software stack of the startup I work for from the beginning, and went straight for Docker to package our application. We started with Compose in production and improved on that with a CD pipeline that upgrades the stack automatically. Over time, the company and user base grew, and we started running into the problems you mention: restarting or deploying would cause downtime. Additionally, a desire to run additional apps came up, and every time this meant preparing a new deployment environment.
I dreaded the day we'd need to start using Kubernetes, as I've seen first-hand the complexity it causes, and was really wary of having to spend most of the day caressing the cluster.
So instead, we went for Swarm mode. Oh, what a journey that is. Sometimes Jekyll, sometimes Hyde. There are bugs that simply nobody cares to fix, parts of the Docker spec that simply don't get implemented (but nobody tells you), implementation choices so dumb you'll rip your hair out in anger, and the nagging feeling that Docker Inc employees are incapable of talking to each other, thinking things through, or staying focused on a single bloody task for once.
But! There is also much beauty to it. Your Compose stacks simply work, while giving you opportunities to grow in the right places. Zero-downtime deployments, upgrades, load balancing, and rollbacks work really well if you care to configure them properly. Raft keeps the cluster as reliably in sync as it does everywhere else. And if you put in some work, you'll get a flexible, secure, automatically distributed, self-service platform for every workload you want to run, for a fraction of the maintenance budget of K8s.
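The "configure them properly" part is mostly a few keys in the stack file. A sketch of the kind of thing I mean (hypothetical service and endpoint names):

```yaml
services:
  app:
    image: myorg/myapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        order: start-first        # bring the new task up before stopping the old one
        parallelism: 1
        delay: 10s
        failure_action: rollback  # roll back automatically if the new task never becomes healthy
      rollback_config:
        order: start-first
```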
Prepare, however, to spend some time getting your deployment scripts right. I've spent quite a while building something in Python to convert valid Docker-spec compose files to valid Swarm specs, update and clean up secrets, and expand environment variables.
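The environment-variable part at least has a lightweight workaround (a sketch, not the tooling described above): `docker stack deploy` doesn't do variable substitution on its own, so you can pre-render the file with `docker compose config` first:

```sh
# Render the compose file with env vars expanded, then deploy it as a stack.
# Sketch only: `docker compose config` may emit keys that `stack deploy`
# warns about or ignores, so review the rendered output.
docker compose -f docker-compose.yml config > stack.rendered.yml
docker stack deploy --compose-file stack.rendered.yml --with-registry-auth mystack
```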
Also, depending on your VPS provider, make sure you configure the network MTU correctly (this has shortened my life considerably, I'm sure of it).
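For reference, pinning the MTU on an overlay network looks roughly like this (1450 here assumes ~50 bytes of VXLAN overhead on a 1500-byte link; adjust for your provider):

```yaml
networks:
  backend:
    driver: overlay
    driver_opts:
      com.docker.network.driver.mtu: "1450"  # 1500 minus VXLAN encapsulation overhead
```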
There is no one-size-fits-all answer to that. The standard is 1500, but the MTU gets lower with each level of encapsulation, since you need those bytes for the encapsulation overhead (VXLAN, for example, eats roughly 50 bytes, leaving about 1450 for the payload).
There's also "Jumbo Frames" though you're not likely to encounter that day to day in a VPS.
I do the same. Swarm is the natural next step since you already have compose files, but I've decided it's not worth it until you hit scaling issues (as in many customers/users).
Nowadays I use GitHub's package registry. I used to run my own registry, and before that the docker save method, but both are annoying to deal with. I have GitHub Pro, so it's pretty much free. Even if I need to pay for it in the future, I'll probably do so; it's just not worth the headache.
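For anyone who hasn't tried it, the workflow is just a login/tag/push against ghcr.io (hypothetical user and image names; the token needs the write:packages scope):

```sh
# Authenticate with a personal access token that has write:packages scope
echo "$GITHUB_TOKEN" | docker login ghcr.io -u my-github-user --password-stdin

# Tag and push; the image then shows up under the repo/org's Packages tab
docker tag myapp:latest ghcr.io/my-github-user/myapp:latest
docker push ghcr.io/my-github-user/myapp:latest
```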
Whenever I have anything to deploy, so it depends on the project. On actively developed ones, it could be once or twice a day; on slower days, maybe once every two or three days.