Podman was good when it supported systemd unit files, so I could auto-start and auto-update containers, even entire pods, with systemd.
Then they removed that in favor of Quadlet. Now for a single container I can write a unit file, but for a pod I need a Kubernetes-style YAML definition.
Plus, unlike Docker their containers bow to SELinux definitions, so I have repeatedly struggled with containers unable to access mapped directories.
So what is it, Podman? Should I just use Kubernetes? Should I just make dedicated directories for everything instead of mapping logical places for things?
I was already defining my infrastructure with docker-compose.yml files, and found out that podman-compose has a poorly documented feature that generates systemd units. It doesn't use the now-deprecated podman feature; it writes the unit files itself, and I find the process much smoother than the podman feature anyway.
It's documented. If you just type `podman-compose` at the command line you get:
```
usage: podman-compose [-h] [-v] [--in-pod in_pod] [--pod-args pod_args]
                      [--env-file env_file] [-f file] [-p PROJECT_NAME]
                      [--podman-path PODMAN_PATH] [--podman-args args]
                      [--podman-pull-args args] [--podman-push-args args]
                      [--podman-build-args args] [--podman-inspect-args args]
                      [--podman-run-args args] [--podman-start-args args]
                      [--podman-stop-args args] [--podman-rm-args args]
                      [--podman-volume-args args] [--no-ansi] [--no-cleanup]
                      [--dry-run]
                      {help,version,wait,systemd,pull,push,build,up,down,ps,run,exec,start,stop,restart,logs,config,port,pause,unpause,kill}
                      ...

command:
  {help,version,wait,systemd,pull,push,build,up,down,ps,run,exec,start,stop,restart,logs,config,port,pause,unpause,kill}
    ...
    systemd      create systemd unit file and register its compose stacks
```
When it's first installed, type `sudo podman-compose systemd -a create-unit`.
Later you can add a compose stack by running `podman-compose systemd -a register`.
Then you can start/stop your stack with `systemctl --user start podman-compose@<PROJ>`.
Yeah, I eventually found that, but only after I finally stumbled upon someone referencing the feature in a GitHub issue.
The --help is fine documentation for the people who have already installed the tool, but it doesn't help people like OP who just want a simple way to run multiple containers as a systemd unit and don't yet know that podman-compose has a solution.
That's why I said "poorly documented" instead of "undocumented". It's there once you know where to look.
That’s… not documentation. That’s a CLI help screen. It’s better than nothing, but it’s also emblematic of what’s broken about the “move fast and break things” mindset.
it's documentation enough to not call it an undocumented feature.
the concern with truly undocumented functionality is that it's not included intentionally: either it's a bug or an experimental feature that could be removed or changed with no notice. a poorly-documented feature, on the other hand, will probably at least get a deprecation notice before it disappears
I'm using podman-compose for my homelab, which is obviously fine.
But even for small-scale single-node production use cases, I suspect that podman-compose with systemd doesn't have the same concerns as docker-compose does. Since you're registering the workload with systemd, it'll restart with the node as easily as any other service, and rootless containers are a big win for security.
Where you can't keep using (podman|docker)-compose is when you have to scale up a service beyond a single node.
You can keep using x-compose on several nodes, you just need e.g. Ansible or Salt on top of it. For many things this is still a local maximum compared to a K8s cluster or "just ssh in".
For a lot of the services I'm thinking of for this case the scheduling may be across multiple nodes but it may not be flexible - e.g. this is my preferred way to run things which need guaranteed local iops. So Swarm maybe helps with service discovery etc. but its main purpose is lost.
I have a lot of complaints about Docker Swarm, but they're either about its ownership situation or are relatively minor (though numerous) issues; if you want to use it, it's fine. But you do still need an orchestration layer above it anyway.
> unlike Docker their containers bow to SELinux definitions,
That's a bug in docker. If your system isn't configured for SELinux, disable it.
Also, the systemd files generated by `podman generate systemd` just execute "podman start containername"; you can easily write them yourself, but (unlike with e.g. docker-compose) the containers are pretty much black boxes.
The advantage of quadlet is that the definition of the container is declared in the .container file; before I used to write the podman run command line manually in a handwritten systemd unit, and quadlet is a big improvement in that respect and can be an alternative to docker-compose (with advantages and disadvantages).
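For a concrete sense of what that looks like: a Quadlet `.container` file dropped in `~/.config/containers/systemd/` gets translated into a systemd service at boot. The image name, port, and path below are illustrative, not from anyone's actual setup:

```ini
# ~/.config/containers/systemd/jellyfin.container
# Quadlet generates a "jellyfin.service" unit from this at daemon-reload.
[Unit]
Description=Jellyfin via Quadlet

[Container]
Image=docker.io/jellyfin/jellyfin:latest
PublishPort=8096:8096
# :Z relabels the host directory for SELinux (example path)
Volume=/srv/jellyfin/config:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the container is managed like any other service (`systemctl --user start jellyfin`).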
I have been doing Linux sysadmin for 20 years and I just stopped trying to understand SELinux. It looks and feels like an abomination borne out of some IBM or other antediluvian corporate UNIX system for programmers wearing suits and ties.
I'll probably get the specifics wrong; the docs certainly do a better job.
So. First it's important to know SELinux runs in one of two modes:
* A targeted mode where well-known/accounted-for things are protected. For example, nginx
* A more draconian mode where *everything* is protected
People often present the first [default] mode as if it were the second.
The protection is based on policies that say 'things with this label/at this path are allowed to do XYZ'.
It's very focused on filesystem paths and what relevant applications try to do.
It's entirely manageable, but admittedly complicated. Without regular practice I struggle to put it into words.
Most people having trouble with SELinux are defying some convention. For example: placing application scratch data in '/etc'.
Policy management is a complicated topic.
The policy can be amended in cases where the standard doesn't apply; I won't cast judgement - sometimes it's a good idea, sometimes not.
Another way to handle this is to copy the label from one path and apply it to the one your application requires/customizes. This is less durable than leaning on the policy.
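Both approaches amount to a couple of commands. These are real tools (`semanage`, `restorecon`, `chcon`), but the path and type below are examples, not a recommendation for any specific setup:

```
# Durable: amend the policy so the label survives a relabel,
# then apply it to what's already on disk
sudo semanage fcontext -a -t container_file_t "/srv/media(/.*)?"
sudo restorecon -Rv /srv/media

# Less durable: copy the label from a path that already has the
# right context; a full system relabel will undo this
sudo chcon -R --reference=/var/lib/containers/storage /srv/media
```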
The policy acts as a sort of central DB: the goal is for it to store all of the contexts, so that files/dirs can have "labels" applied for SELinux consistently.
Only if you want it back in enforcing mode after the next reboot. If you want to make the change permanent, you need to set the following in /etc/selinux/config as well:
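Presumably the setting in question is the `SELINUX=` line in that file:

```
# /etc/selinux/config
SELINUX=permissive    # or "disabled"; "enforcing" is the default
```

`setenforce 0` only lasts until reboot; this file controls the mode at boot.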
I keep forgetting this transition happened until I try to 'podman generate systemd [...]'
This is rare because I wrote an Ansible role to do this in a way that feels nice.
Anyway, it really feels like podman missed the mark. I've already bought into the unit-file maintenance/relationship-planning approach. Just let me use the generator. I don't care about Quadlets or how they might be better.
I recently migrated over to NixOS which treats systemd as the source of truth for everything, including containers. I found this model extremely intuitive, but it was difficult to apply this to Docker Compose without a lot of manual migration. So I ended up writing a tool that handles this for you — it converts your Compose files into a NixOS config that can be interpreted and managed natively.
Arion can wrap docker-compose and run as a project or part of a nixos config. Did you come across Arion before creating this, and have you compared them?
I had a brief look through your examples and it doesn’t look like compose2nix implements docker-compose’s network per compose file. Is this something you want to add?
So, from what I understand, arion provides a Nix frontend for Docker Compose. This allows you to write Nix that runs via Docker Compose. It doesn’t solve the migration problem: if you have an existing Docker Compose project, you still need to manually convert it into Nix for arion to consume.
My tool does the opposite: it takes a Compose file and converts it into OCI containers in Nix. The idea is that your Compose file is the source of truth, and you simply generate Nix to run on NixOS. One benefit here is that you can easily migrate an existing Compose project into native Docker/Podman containers running on NixOS. This removes Docker Compose from the equation entirely - essentially a “reimplementation” of Compose.
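If I understand correctly, the generated Nix targets the stock `virtualisation.oci-containers` NixOS module, which wraps each container in a systemd unit. Roughly along these lines (service name, image, and paths are illustrative):

```nix
{
  virtualisation.oci-containers = {
    backend = "podman";  # or "docker"
    containers.jellyfin = {
      image = "docker.io/jellyfin/jellyfin:latest";
      ports = [ "8096:8096" ];
      volumes = [ "/srv/jellyfin/config:/config" ];
    };
  };
}
```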
That was a cool feature (I didn't realize it was gone, that's unfortunate), although I felt the generated code wasn't super great, and if the container is stateless (excepting what's stored in volumes of course) then it's so simple to write your own systemd unit file that I just do that now. I wrote it once and pretty much just copy/paste it when needed, changing the podman run command for image names, port numbers, volumes, etc. For example, here's what I use for Jellyfin. Just drop at `/etc/systemd/system/jellyfin.service`:
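Something along these lines (the ports, paths, and image tag here are illustrative stand-ins, not the author's exact file):

```ini
[Unit]
Description=Jellyfin media server (podman)
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
# Clean up any leftover container from an unclean shutdown
ExecStartPre=-/usr/bin/podman rm -f jellyfin
# --rm works because the container is stateless; all state lives in the volumes
ExecStart=/usr/bin/podman run --rm --name jellyfin \
    -p 8096:8096 \
    -v /srv/jellyfin/config:/config:Z \
    -v /srv/media:/media:ro \
    docker.io/jellyfin/jellyfin:latest
ExecStop=/usr/bin/podman stop -t 10 jellyfin

[Install]
WantedBy=multi-user.target
```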
Note: You can also just `s/podman/docker/g` and reuse the same service file with docker, which is really convenient for systems where you have no choice
> So what is it, Podman? Should I just use Kubernetes?
If you're talking about a production system for any business larger than a 10-person tech startup, then yeah, probably. Alternatively there's Docker Swarm and HashiCorp Nomad, though Swarm is not nearly as flexible; it's just easy to use. And Nomad... well, let's just say I've been paying closer attention to HashiCorp's build processes in their open source repos like Packer and Vault as of late, and they do some stuff that seems shady to me, so use at your own risk.
`podman generate systemd` is still there, and I see no reason you couldn't use it. It's just a bunch of podman commands wrapped in a unit file, no magic.
feels like a lot more cruft than quadlets to me though.
Mapping directories from the host requires that you change selinux labels on those files so that the container process can access the files.
That's just how selinux works.
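Podman can do the relabeling for you with the `:z`/`:Z` volume suffixes, which avoids touching labels by hand (the path and image here are examples):

```
# :Z labels the directory for exclusive use by this container;
# :z labels it as shared between containers
podman run -d --name web \
    -v /srv/site:/usr/share/nginx/html:ro,Z \
    docker.io/library/nginx
```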