I wish I had read this article a decade ago. For many years I have been wondering "why the heck would I use containers when I have chroot, cgroups and namespaces?"
Turns out that's exactly what containers are a packaging of! And I only found out about two years ago.
Although this article doesn't go into it, the benefit I've found of using containers rather than rolling isolation by hand is that a lot of semi-standardised monitoring, deployment, and workload management tooling expects things to come packaged as containers.
> Turns out that's exactly what containers are a packaging of!
Well, no. When people say "containers", they always mean "Docker".
And Docker also comes with a daemon with full root permissions and ridiculous security policies. (Like, for example, forcefully turning off your machine's firewall, #yolo. WTF!)
P.S. I actually run systemd-nspawn in production, but I am probably the only person on earth to do so.
Those in the know are familiar with OCI, etc. but (without hard data to back me up) I think it's still fair to say that the majority of people (lay people, if you will) consider them the same thing by virtue of ignorance.
>By default, all external source IPs are allowed to connect to the Docker host. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER-USER filter chain.
Yikes. Should people read the docs? Yes. Should Docker not do this? Also yes.
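For anyone who hasn't dug into that part of the docs: the fix is a single rule in the DOCKER-USER chain, which Docker promises not to touch. A rough sketch (the interface name and subnet are placeholders, adjust to your setup):

```shell
# Drop traffic to published container ports unless it comes from a trusted
# network. DOCKER-USER is evaluated before Docker's own forwarding rules.
iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP
```

Note the rule matches on the external interface (`-i eth0`), so container-to-container and host traffic is unaffected.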
Perhaps I belong to the minority, but I really don't think of containers as Docker. Actually, I can't remember the last time I used Docker for anything. For the past several years I've been using either Podman or systemd-nspawn, like you.
> P.S. I actually run systemd-nspawn in production, but I am probably the only person on earth to do so.
Mind sharing a good practical introduction article (or set of articles) for running VEs (virtual environments) with it? I'm tied to LXD at the moment, which provides both operational ease and easy fine-tuning of configuration when needed. I.e., for the projects I take care of, I understand and have tested how to set up network bridges, resource limits, snapshot/rollback/creating new images from VEs, storage profiles (say, some I want on BTRFS, some on ZFS, some ...), and the simple `lxc ls` and `lxc shell <VE-name>` interfaces. Maybe systemd has all this kind of stuff as well, or maybe it shines in a different area?
> P.S. I actually run systemd-nspawn in production, but I am probably the only person on earth to do so.
You're not alone, systemd-nspawn is very much underrated. I have used it a lot for machine containers, though I'm using podman+quadlet+systemd more right now.
systemd-nspawn with mkosi for generating workload images is still a nice & powerful ecosystem.
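For the curious, a minimal workflow looks roughly like this (container name, distro, and paths are illustrative, not prescriptive):

```shell
# Build a minimal Debian tree and boot it as a container
debootstrap stable /var/lib/machines/mycontainer
systemd-nspawn -D /var/lib/machines/mycontainer -b   # boot interactively

# Or manage it like any other unit, via the systemd-nspawn@.service template
machinectl start mycontainer
machinectl shell mycontainer    # roughly `lxc shell <name>`
machinectl list                 # roughly `lxc ls`

# Resource limits are plain systemd properties on the unit
systemctl set-property systemd-nspawn@mycontainer.service MemoryMax=2G
```

The nice part is that everything after provisioning is just standard systemd tooling, so journald, cgroup limits, and unit dependencies all work the way you already know.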
I can't speak for anyone else, but I definitely feel like there's no time to actually learn about all of these tools before being thrown into them by management/other well-meaning ICs. The end result is everyone is using a tool they know nothing about, with predictable results.
Totally agree. There are many, many pulls for attention - I don't really fault these people I mention. It's just notable that with all of this noise, the smallest bit of specialization can go a long way.
Honestly, habits and being adaptable are most of it. For example, don't waste time searching the web if you know `ansible-doc` or `man 5 something.conf` has the answer.