One of the goals of containers is to unify the development and deployment environments. I hate developing and testing code in containers, so I develop and test code outside them and then package and test it again in a container.
Containerized apps need a lot of special boilerplate to determine how much CPU and memory they are allowed to use. It’s a lot easier to control resource limits with virtual machines, because the system’s resources are all dedicated to the application.
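The kind of boilerplate I mean, assuming cgroup v2 is mounted at the usual path inside the container (the file names differ under cgroup v1):

```sh
# Read the limits the orchestrator actually imposed on this container.
cat /sys/fs/cgroup/memory.max   # "max" (unlimited) or a byte count
cat /sys/fs/cgroup/cpu.max      # "max <period>" or "<quota> <period>"; quota/period is roughly the allowed CPUs
```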
Orchestration of multiple containers for dev environments is just short of feature complete. With Compose, it’s hard to bring down specific services and their dependencies so you can then rebuild and rerun. I end up writing Ansible playbooks to start and stop components that are designed to be executed in particular sequences. Ansible makes it hard to detach a container, wait a specified time, and see if it’s still running. Compose just needs to support managing the shutdown and restart of individual containers, so I can move away from Ansible.
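For the record, this is the closest I’ve found in Compose (the service name `web` is a placeholder); it rebuilds and restarts one service but still doesn’t cascade to its dependents:

```sh
# Rebuild and restart a single service without touching its dependencies.
docker compose up -d --build --no-deps web
# Stop and start one service; dependent services are left alone.
docker compose stop web
docker compose start web
```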
Services like Kafka that query the host name and broadcast it are difficult to containerize, since the host name inside the container doesn’t match the external host name. This requires manual overrides, which are hard to specify at run time because the orchestrators don’t make it easy to pass the external host name into the container. (This is more of a Kafka issue, though.)
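A sketch of the kind of override I mean, as a compose fragment; the variable names follow the common `KAFKA_ADVERTISED_LISTENERS` convention but differ between Kafka images, `kafka.example.com` stands in for the external host name, and the rest of the broker configuration is omitted:

```yaml
services:
  kafka:
    image: docker.io/apache/kafka:latest   # image choice is just an example
    environment:
      # Broadcast the external name to clients instead of the container's host name.
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka.example.com:9092"
      KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092"
    ports:
      - "9092:9092"
```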
Systemd, k8s, Helm, and Terraform model service dependencies.
Quadlet is the podman-recommended way to run podman containers under systemd instead of k8s.
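A minimal Quadlet sketch, assuming a rootless setup; the unit goes in `~/.config/containers/systemd/` (or `/etc/containers/systemd/` for system services), and the image and port are placeholders:

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
# Start at login for rootless; use multi-user.target for a system service.
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Quadlet generates a `web.service` unit that can be started and stopped like any other service.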
Podman supports pods of containers and k8s-style kube YAML; see:
man podman-container
man podman-generate-kube
man podman-kube
man podman-pod
`podman generate kube` generates YAML for `podman kube play` and for k8s `kubectl`.
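E.g. (the pod name `mypod` is a placeholder):

```sh
# Capture a running pod's definition as k8s YAML...
podman generate kube mypod > mypod.yaml
# ...then replay it with podman itself,
podman kube play mypod.yaml
# ...or hand it to a real cluster.
kubectl apply -f mypod.yaml
```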
Podman Desktop can create a local k8s (kubernetes) cluster with any of kind, minikube, or OpenShift Local. k3d and Rancher also support creating one-node k8s clusters with minimal RAM requirements for cluster services.
kubectl is the utility for interacting with k8s clusters.
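E.g. a throwaway local cluster with kind (the cluster name `dev` is arbitrary):

```sh
# One-node cluster running inside a container, then point kubectl at it.
kind create cluster --name dev
kubectl cluster-info --context kind-dev
```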
k8s Ingress API configures DNS and Load Balancing (and SSL certs) for the configured pods of containers.
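A minimal Ingress sketch, assuming an ingress controller is installed and a `web` Service already exists (names and host are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: web.example.com          # which DNS name routes here
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # existing Service to load-balance to
                port:
                  number: 80
  tls:
    - hosts: [web.example.com]       # TLS cert, e.g. provisioned by cert-manager
      secretName: web-tls
```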
E.g., Traefik and Caddy can also configure the load-balancing web server(s) and request or generate certs: given access to the Docker socket, they read the labels on the running containers to determine which DNS domains point to which containers.
Container labels can be specified in the Dockerfile/Containerfile, and/or a docker-compose.yml/compose.yml, and/or in k8s yaml.
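E.g. the label approach with Traefik’s Docker provider, sketched in a compose file (the router name `web` and the host are placeholders, and the `letsencrypt` cert resolver is assumed to be configured elsewhere; Caddy’s Docker proxy uses its own label names):

```yaml
services:
  web:
    image: docker.io/library/nginx:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=Host(`web.example.com`)"
      - "traefik.http.routers.web.tls.certresolver=letsencrypt"
```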
Compose supports specifying a number of replicas per service: `docker compose up --scale web=3`.
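The replica count can also be declared in the YAML; a sketch assuming a recent Compose that honors `deploy.replicas` outside Swarm (the service name is a placeholder):

```yaml
services:
  web:
    image: docker.io/library/nginx:latest
    deploy:
      replicas: 3   # same effect as --scale web=3
```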
Terraform makes the deployed state consistent with the declared configuration.
Compose does not support rolling or red/green deployment strategies. Does Compose support high-availability (HA) deployments? If not, it’s hard to justify investing in a Compose YAML based setup instead of k8s YAML.
Quadlet is the way to run podman containers without k8s, with just systemd, for now.
I find that I tend to package one-off tasks as containers as well, for example creating database tables and users. Compose supports these sorts of things. Ansible actually makes it easy to run and block on container tasks that you don’t detach.
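E.g. a one-shot table-creation service, sketched with placeholder image names; `service_completed_successfully` is what makes dependents wait for it to finish, and credentials and the SQL volume mount are omitted:

```yaml
services:
  db:
    image: docker.io/library/postgres:16
  init-db:
    image: docker.io/library/postgres:16    # reused just for its psql client
    command: ["psql", "-h", "db", "-U", "postgres", "-f", "/sql/create_tables.sql"]
    restart: "no"                            # one-off: run to completion, don't respawn
    depends_on:
      - db
  app:
    image: example/app:latest
    depends_on:
      init-db:
        condition: service_completed_successfully
```

It can also be run ad hoc with `docker compose run --rm init-db`.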
I’m not interested in running kubernetes, even locally.
Ok, one more to add that is kind of an abuse of containers: some compute cluster solutions (like those used for HPC) are using containers to manage software installations on the clusters. They are trying to unify containers with the standard Unix environment, however, so that users still see their home directory (mounted in the container) and other paths, and running applications in the container is the same experience as running them directly on the host OS. This is just a TERRIBLE solution. I much prefer Environment Modules, or something like Python's virtual environments (if that worked for arbitrary software installs), as a solution.