I've always understood docker-compose to be a development or personal tool, not for production. Is that not usually the case?
Also aside, "docker compose" (V2) is different from "docker-compose" (V1) [0], it was rewritten and integrated into docker as a plugin. It should still be able to handle most old compose files, but there were some changes.
For many people self-hosting implies a personal server; it's not really "development" but it's not "production" either. In that context many people find k8s or other PaaS to be too heavyweight, so Docker Compose is pretty popular. For more production-oriented self-hosting there are various newer tools like Kamal, but it will take a while for them to catch up.
I've managed to keep my personal server to just docker, no compose. None of the services I run (Jellyfin, a Minecraft server, prowlarr, a Samba server [this is so much easier to configure for basic use cases via Docker than the usual way], pihole) need to talk to one another, so I initialize each separately with shell scripts. Run the script for a new service once and Docker takes care of restarts on system reboot or if the container crashes; I don't even have to interact with my base OS's init system or really care about anything on it. When I want to upgrade a service, I destroy the container, edit the script to specify the newer version I want, and run the script again. Easy.
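A sketch of what one of those scripts can look like (image tag, ports, and paths are illustrative):

    #!/bin/sh
    # Recreate the pihole container; --restart unless-stopped makes Docker
    # bring it back after crashes or reboots without touching the host init.
    docker rm -f pihole 2>/dev/null || true
    docker run -d \
      --name pihole \
      --restart unless-stopped \
      -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
      -v /srv/pihole/etc-pihole:/etc/pihole \
      pihole/pihole:2024.07.0

Upgrading is then exactly as described: bump the tag and rerun the script.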
> For many people self-hosting implies a personal server; it's not really "development" but it's not "production" either.
There's Docker swarm mode for that. It supports clustering too.
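Turning it on is a couple of commands (a sketch; extra nodes are optional):

    docker swarm init                # make this engine a single-node swarm manager
    docker swarm join-token worker   # prints the join command for additional machines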
It's nuts how people look at a developer tool designed to quickly launch a preconfigured set of containers and think it's reasonable to use it to launch production services.
It's even more baffling how anyone looks at a container orchestration tool and complains it doesn't backup the database they just rolled out.
> In that context many people find k8s or other PaaS to be too heavyweight so Docker Compose is pretty popular.
...and proceed to put pressure on it to be enshittified by arguing it should do database backups, something that even Kubernetes steers clear of.
The blogger doesn't even seem to have done any research whatsoever on reverse proxies. If he had, at the very least he would have eventually stumbled upon Traefik, which in Docker solves absolutely everything he's complaining about. He would also have researched what it means to support TLS and how this is not a container orchestration responsibility.
Quite bluntly, this blog post reads as if it was written by someone who researched nothing on the topic and decided instead to jump to
I'm curious how that last sentence was going to end.
Let's say I agree with you and that TLS termination is not a container orchestration responsibility. Where does the responsibility of container orchestration start and TLS termination end? Many applications need to create URLs that point to themselves, so they have to have a notion of the domain they are being served under. There has to be a mapping between whatever load balancer or reverse proxy you're using and the internal address of the application container. You'll likely need service discovery inside the orchestration system, so you could put TLS termination inside it as well and leverage the same mechanisms for routing traffic. It seems like any distinction you make is going to be arbitrary and basically boil down to "no true container orchestration system should care about..."
In the end we all build systems to do things to make people's lives better. I happen to think that leaving backups and port management as an exercise for the deployment team raises the barrier for people who could be hosting their own services.
I could be totally wrong. This may be a terrible idea. But I think it'll be interesting to try.
> If he had, at the very least he would have eventually stumbled upon Traefik, which in Docker solves absolutely everything he's complaining about
I'm aware of Traefik, I ran it for a little while in a home lab Kubernetes cluster, and later on a stack of Odroids using k3s. This was years ago, so it may have changed a lot since then, but it seemed at the time that I needed an advanced degree in container orchestration studies to properly configure it. It felt like Kubernetes was designed to solve problems you only get above 100 nodes, then k3s tried to bang that into a shape small enough to fit in a home lab, but couldn't reduce the cognitive load on the operator because it was using the same conceptual primitives and APIs. Traefik, reasonably, can't hide that level of complexity, and so was extremely hard to configure.
I'm impressed at both what Kubernetes and k3s have done. I think no home lab should run it unless you have an express goal to learn how to run Kubernetes. If Traefik is as it was years ago, deeply tied to that level of complexity, then I think small deployments can do better. Maybe Caddy is a superior solution, but I haven't tried to deploy it myself.
If you want an HTTPS ingress controller that's simple, opinionated, but still flexible enough to handle most use cases, I've enjoyed this one:
https://github.com/SteveLTN/https-portal
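Going by its README, the whole setup is roughly this (domain, upstream, and ports are placeholders):

    services:
      https-portal:
        image: steveltn/https-portal:1
        ports:
          - '80:80'
          - '443:443'
        environment:
          # route a domain to an internal service; certs are fetched and renewed
          DOMAINS: 'example.com -> http://my-app:3000'
          STAGE: 'production'   # Let's Encrypt production CA; the default is staging
        volumes:
          - https-portal-data:/var/lib/https-portal   # persist certs across restarts
    volumes:
      https-portal-data: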
> Let's say I agree with you and that TLS termination is not a container orchestration responsibility.
It isn't. It's not a problem, either. That's my point: your comments were in the "not even wrong" field.
> (...) It seems like any distinction you make is going to be arbitrary and basically boil down to "no true container orchestration system should care about..."
No. My point is that you should invest some time into learning the basics of deploying a service, review your requirements, and then take a moment to realize that they are all solved problems, especially in containerized applications.
> I'm aware of Traefik, I ran it for a little while in a home lab Kubernetes (...)
I recommend you read up on Traefik. None of the scenarios you mentioned are relevant to the discussion.
The whole point of bringing up Traefik is that its main selling point is support for route configuration through container labels. It's the flagship feature of Traefik, and it's the main reason people use it.
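The whole pattern is a couple of labels on the container, something like this (hostnames, versions, and the demo service are placeholders):

    services:
      traefik:
        image: traefik:v3.1
        command:
          - --providers.docker=true
          - --entrypoints.websecure.address=:443
          - --certificatesresolvers.le.acme.tlschallenge=true
          - --certificatesresolvers.le.acme.email=you@example.com
          - --certificatesresolvers.le.acme.storage=/acme.json
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro  # lets Traefik watch labels

      whoami:
        image: traefik/whoami   # tiny demo service
        labels:
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls.certresolver=le

(In a real deployment you'd also persist /acme.json with a volume.)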
Your non sequitur on Traefik and Kubernetes also suggests you're talking about things that haven't really clicked with you. Traefik can indeed be used as an ingress controller in Kubernetes, but once deployed you do not interact with it. You just define Kubernetes services, and that's it. You do interact directly with Traefik if you use it as an ingress controller in Docker swarm mode or even docker-compose, which makes your remark even more baffling.
> I'm impressed at both what Kubernetes and k3s have done. (...) If Traefik is as it was years ago,(...)
Kubernetes represents the interface, as well as the reference implementation. k3s is just another Kubernetes distribution. Traefik is a reverse proxy/load balancer used as an ingress controller in container orchestration systems such as Kubernetes or Docker swarm. The "level of complexity" is labeling a container.
Frankly, your comment sounds like you tried to play buzzword bingo without having a clue whether the buzzwords would fit together. If anything, you just validated my previous comment.
My advice: invest some time reading up on the topic to get through the basics before you feel you need to write a blog post about it.
On my personal home server I abuse Docker Compose so I don't have to type a huge command line to spin containers up and down; I can just say "docker compose up -d" and be done with it.
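For example, a file along these lines (service, tag, and paths are illustrative) replaces the whole docker run incantation:

    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        restart: unless-stopped
        ports:
          - "8096:8096"
        volumes:
          - ./config:/config
          - ./media:/media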
it's perfectly fine if everything can run on a single server instance, which is probably the majority of things.
i've run production instances with 2PB of data being scraped per month by 100+ CPUs and 256GB of RAM using docker compose. some of the machines (smaller instances) have run flawlessly with zero reboots for years on end. both on cloud and on-prem.
+1 to this. Smaller datapoint from my side I guess, but anyway, docker is the core of my self hosting setup and it is one of the things that I don't have to fiddle with, it just works.
That's fine. Some people also manually launch containers individually with Docker as their production system.
They'd be wasting their time and making their lives needlessly harder, though.
They already have tools that do all the legwork for them. Why skip the research and opt to force a square peg into a round hole?
Research Docker swarm mode, reverse proxies like Traefik, and Let's Encrypt. Kubernetes can be even easier with implementations like microk8s, which are a snap away from any Ubuntu installation. Look at what you're doing and figure out the right tool for the job. Don't just whine about how you're doing something wrong and it's the tool that needs to change to fix your own mistakes.
Not sure what axe you have to grind, but the person you replied to didn't whine about anything as far as I can see. I've also been hosting a lot of stuff on docker-compose and it is perfectly fine for a lot of things. No one said it's perfect or does everything.
>They'd be wasting their time and making their lives needlessly harder, though.
Using Kubernetes where it's not needed is just that - wasting your time and making your life harder.
Before you say that I just need to research more: I know Docker swarm mode, I run my personal server on Kubernetes using Traefik and Let's Encrypt, I professionally work with Kubernetes (both as an admin and working on Kubernetes security, which is tough to get right), most services in my dayjob run on Kubernetes, and I was the person who introduced CI/CD pipelines there some years ago.
I still claim that there are production usecases that are better served by docker-compose.
> Using Kubernetes where it's not needed is just that - wasting your time and making your life harder.
I think this is a very ignorant and misguided take.
Kubernetes is a container orchestration system, just like Docker swarm mode or even Docker compose. If you need to deploy sets of containerized apps into your own VMs, you can pick up any Kubernetes implementation. You don't even need Helm or anything. Plain old kustomize scripts will do. Some aren't even longer than a docker-compose.yml.
More to the point, Kubernetes is an interface. One that you can use in local deployment and in major cloud providers.
You should really check your notes because your comments contrast with the realities of actually running a service.
> Before you say that I just need to research more: (...)
All your appeals to authority are falsified by your claims.
I, on the other hand, actually use Kubernetes both professionally and in personal projects, as well as Docker swarm mode, and can tell you in no uncertain terms that none of your points have any traction in reality.
> I still claim that there are production usecases that are better served by docker-compose.
I'm sorry, but your comments simply sound deeply uninformed and misguided.
I mean, it makes absolutely no sense to complain about using docker compose in production when Docker swarm mode is far more capable and swarm stacks already share most of their schema with docker compose. You have virtually nothing to change to get a docker-compose file to launch a stack.
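Assuming the engine is already in swarm mode (docker swarm init), it's one command:

    docker stack deploy -c docker-compose.yml myapp
    docker service ls    # the stack's services and their replica counts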
Most people don't need rolling releases, 24/7 availability, auto scaling, etc. on their home server, so managing k8s just adds way more complexity. My main reason not to use it is that I would need to host an artifact registry somewhere else, which is a PITA. Some k8s runtimes support local building, but it's not as easy as compose in my experience.
> Most people don't need rolling releases, 24/7 availability, auto scaling, etc.
That's perfectly fine. That's not the reason why you are better off running your apps in Kubernetes though.
You are better off running your apps in Kubernetes because it handles everything you ever need to effortlessly run containerized apps. You don't even need to install tooling or deployment tools or anything at all. You can have a single kustomize script that defines your services and deployments, have in place an ingress and Let's Encrypt, and you're done. You don't need to bother with anything else. Just run kubectl apply and go grab a coffee.
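A sketch of what that can look like; file names are placeholders, and cert-manager is one common way to wire up Let's Encrypt:

    # kustomization.yaml
    resources:
      - deployment.yaml   # the app's Deployment
      - service.yaml      # ClusterIP service for the ingress to route to
      - ingress.yaml      # host rules; cert-manager can automate Let's Encrypt

Then kubectl apply -k . is the entire deploy step.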
Hey, that is not true. At least the last time I tried, I spent considerable time trying to integrate a bare-metal load balancer from a 9-star GitHub repo plugin, because apparently exposing the port is not recommended. Also, having the master and the node on the same server can be problematic, because by design it shouldn't be like that.
One last point: the runtime RAM and CPU overhead is far from minimal.
> Hey, that is not true. At least the last time I tried, I spent considerable time trying to integrate a bare-metal load balancer from a 9-star GitHub repo plugin, because apparently exposing the port is not recommended.
You sound like you tried to put together an ad-hoc ingress controller without knowing what an ingress controller is and why abusing NodePort is a mistake in the context of a cluster.
> Also, having the master and the node on the same server can be problematic, because by design it shouldn't be like that.
You should look at the problem to see if an approach makes sense. If you're using vanilla Kubernetes on a one-box deployment then you're abusing a highly scalable, large-scale cluster management system by shoving it into a single box. It can work, but most of the resources will be wasted on managing your cluster of one node.
There are plenty of Kubernetes distributions that are designed to handle well small clusters, and even one-box systems. Minikube, microk8s, and even k3s come to mind. I'm partial towards microk8s because installing it is just a matter of installing a single package from the distro's official repository.
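For example, on a snap-enabled system:

    sudo snap install microk8s --classic   # single-node Kubernetes in one command
    microk8s enable ingress                # bundled NGINX ingress controller add-on
    microk8s kubectl get nodes             # kubectl ships with it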
I've used it for both.... but for production only in extremely constrained/limited environments. It's every bit as stable as anything else... you can version the compose you deploy so rollbacks are easy, etc etc.
far more useful for development IMO, but when push comes to shove and you need a few things running together on a machine in the middle of nowhere with limited/unreliable internet access.... you can't do much better... a few volume mounts for durability and you're really in decent shape.
logs, networking, system health etc etc.... just all a few docker and docker-compose commands away
It's not going to be amazingly professional and 100% best practice, but you can set up docker-composes and/or design containers to pull everything they need on first run.
That plus a decent backup system would work for a small array of servers with fail-safes.
Though I would die inside if a production user-focused app under any level of proper load was set up like that.
You'll need more than a backup system. At least some sort of load balancer to switch between different groups of running docker containers (so that upgrades, backups, etc. can happen without service being interrupted).
not every single system requires 100% uptime... some folks still use release windows for specific applications/hardware. I'd argue that most systems out there can support a release window.... and the overlap between "docker-compose works for us" and "we have release windows" is probably quite high.
We needed to do inference on remote machines stationed at random points across North America.... the machines had work to do depending on external factors that were rigidly scheduled. really easy to make upgrades, and the machines were _very_ beefy so docker-compose gave us everything we needed.... and since we shipped the data off to a central point regularly (and the data had a short shelf life of usability) we could wipe the machines and do it all again with almost no consequences.
I needed to coordinate a few small services and it did the trick beautifully
My experience has been that the vast majority of systems could tolerate a few minutes offline per month for upgrades. Many could tolerate a couple hours per month. No or negligible actual business harm done, enormous cost savings and higher development velocity from not adding the complexity needed for ultra-high uptime and zero-downtime upgrades.
What's vital is being able to roll back a recent update, recover from backups, and deploy from scratch, all quickly. Those are usually (not always) far easier to achieve than ultra-high-uptime architecture (which also needs those things, but makes them all more complicated) and can be simple enough that they can be operated purely over ssh with a handful of ordinary shell commands documented in runbooks, but skipping them is how you cheap out in a bad way and end up with a system down for multiple days, or one that you're afraid to modify or update.
Early this year I consulted for a company that used docker-compose as their software deployment mechanism. They created new compute, but compose gave them the sidecars you’d want from k8s without all the other overhead that comes with it.
To go even further, podman supports actual sidecars (one or more containers forming a pod) and can even run actual Kubernetes deployment manifests without needing a kube-apiserver.
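For instance, on recent podman versions (older ones spell it "podman play kube"):

    podman kube play deployment.yaml   # runs a Kubernetes manifest as a local pod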
According to docker: "Compose has traditionally been focused on development and testing workflows, but with each release we're making progress on more production-oriented features."
Also aside, "docker compose" (V2) is different from "docker-compose" (V1) [0], it was rewritten and integrated into docker as a plugin. It should still be able to handle most old compose files, but there were some changes.
[0] https://docs.docker.com/compose/releases/migrate/