Another way of putting this, which largely amounts to the same thing, is that containerization was developed by and for very large organizations. I have seen it used at much smaller companies, most of which had zero need for it; in fact, it put them in a situation where they could no longer control their own infrastructure, because they had increased the complexity past the point where they could maintain it. Containerization makes deploying your first server harder, but your nth server becomes easier, for sufficiently large n, and that makes total sense when your organization is large enough to have a large number of servers.
I think containers are great even for really small companies.
You boiled it down to `n` servers, but it's really `n` servers times `m` services times `k` software updates. That gets easier as soon as n * m * k > 2!
First of all, you can run containers on Cloud Run and plenty of other services without managing servers at all!
(though if you can use services like Netlify and Heroku to handle all your needs cost-effectively, you probably should).
Setting up a server with Docker swarm is pretty easy, because there's basically one piece of software to install. From there on, everything you install and update ships as containers.
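A minimal sketch of what that looks like on a fresh Debian/Ubuntu box (the package name is an assumption; it varies by distro):

    sudo apt-get update && sudo apt-get install -y docker.io
    sudo docker swarm init    # this node becomes the sole manager
    sudo docker node ls       # sanity check: one node, status Ready

That's the whole host footprint; everything else arrives as containers.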
If your software gets more complex, your server setup stays simple. Even if it doesn't get complex, being able to update the app's software independently of the host is great. E.g., I can go from Python 3.7 to Python 3.8 with absolutely zero fuss.
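To make that concrete: the upgrade is a one-line change to the app's Dockerfile (FROM python:3.7-slim to python:3.8-slim), then a rebuild and a rolling service update. The registry, image, and service names below are made up:

    docker build -t registry.example.com/myapp:py38 .
    docker push registry.example.com/myapp:py38
    docker service update --image registry.example.com/myapp:py38 myapp

The host itself never hears about Python.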
Deploying servers doesn't get more complicated with a few more containers. At some point that stops being true, but if you want to run, say, Grafana as well, the install/maintenance burden on the server itself stays constant.
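E.g., adding Grafana is one more swarm service, not more host administration (a sketch; the published port and volume name are assumptions):

    docker service create --name grafana \
      --publish 3000:3000 \
      --mount type=volume,source=grafana-data,target=/var/lib/grafana \
      grafana/grafana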
Imagine what you would do without containers... editing an Ansible script and having to set up a new server just to test the setup, or, more horribly likely, SSHing in and running one-off commands with no testing, staging, or reproducibility.
I vastly prefer Dockerfiles and docker-compose.yml and swarm to ansible and vagrant. There are more pre-built containers than there are ansible recipes as well. So your install/configure time for any off-the-shelf stuff can go down too.
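For a sense of scale, a small app-plus-database stack is one short file (a sketch; the images, ports, and stack name are assumptions):

    cat > docker-compose.yml <<'EOF'
    version: "3.8"
    services:
      app:
        image: registry.example.com/myapp:latest
        ports:
          - "8000:8000"
      db:
        image: postgres:12
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:
    EOF
    docker stack deploy -c docker-compose.yml mystack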
Setting up developer laptops is also improved with Docker, though experiences vary... Run your Ruby or Python or Node service locally if you prefer, set up a testing DB and cache in Docker, and run any extra stuff in containers.
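A sketch of that hybrid setup, with versions and ports as assumptions (the app running on the host just points at localhost):

    docker run -d --name dev-postgres -p 5432:5432 \
      -e POSTGRES_PASSWORD=devonly postgres:12
    docker run -d --name dev-redis -p 6379:6379 redis:6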
Lastly, I think CI is also incredibly worthwhile even for the smallest of companies, and containers help keep the effort constant here too. The recipe is always the same.
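Roughly the constant-effort recipe (the registry, tag scheme via a GIT_SHA variable, and the test entrypoint are all assumptions):

    docker build -t registry.example.com/myservice:"$GIT_SHA" .
    docker run --rm registry.example.com/myservice:"$GIT_SHA" ./run-tests.sh
    docker push registry.example.com/myservice:"$GIT_SHA"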
Having used Docker and Kubernetes, and also spun up new VMs, I can say that Docker and Kubernetes are _not_ easier if you're new at it. Spinning up a new VM on Linode or the like is easier, by far.
Now, this may sound incredible to you, because if you're accustomed to it, Docker and Kubernetes can be way easier. But, and here's the main point, there are tons of organizations for whom spinning up a new server is a once every year or two activity. That is not often enough to ever become adept at any of these tools. Plus, you probably don't even want to reproduce what you did last time, because you're replacing a server that was spun up several years ago, and things have changed.
For a typical devops engineer, this state of affairs is hard to imagine, but it is what most of the internet is like. This isn't to say, by any means, that FAANG and anybody else who spins up servers on a regular basis shouldn't be using the best tools for their needs. I'm just saying that how you experience the relative difficulty of these tasks is not at all representative of what it's like for most organizations.
But, since these organizations are unlikely to ever hire a full-time sysadmin, you may not ever see them.
Some of us have notes that we can mostly copy-paste to set up a server, and that works fine without magic and n·m·k images.
Last time I checked, Docker swarm was accepting connections from anywhere (publish really publishes) and messing with the firewall, making a least-privilege setup a PITA; Docker was building, and possibly even running, containers as root; and most importantly, the developers thought Docker was magically secure and there was nothing to handle.
An nginx container handles redirects to HTTPS and SSL termination, and talks to the other services over unpublished ports. Only 22 (sshd running on the server), 80, and 443 (published ports) are open to the world. Swarm ports are open only between the swarm servers, enforced by AWS security groups.
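Roughly, that layout looks like this (a sketch; the network and service names are made up, and only nginx gets --publish):

    docker network create --driver overlay web
    docker service create --name nginx --network web \
      --publish 80:80 --publish 443:443 nginx
    docker service create --name api --network web myapi
    # no --publish on api: nginx reaches it as http://api over the
    # overlay network's DNS, but nothing outside the swarm can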
I don't build on my servers. A service (in a container) makes an outgoing connection to listen to an event bus (Google PubSub) to deploy new containers from CI (Google Cloud Builder).
Config changes (e.g., adding a service) are committed; then I SSH in, git pull, and run a script that does the necessary docker stack stuff. I don't mount anything writable into the containers.
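That script is essentially this (a sketch; the stack name is an assumption):

    #!/bin/sh
    set -e
    git pull --ff-only
    docker stack deploy --compose-file docker-compose.yml mystack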
I cannot agree that "containerization universally makes first server deployment harder". Even at single-person scale, tools like Docker Compose make my dev life massively simpler.
In 2020, I'd argue the opposite for most people, most of the time!
Also, if your container runtime comes preinstalled in your OS, as is often the case, the first-run experience can be as little as a single command.
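E.g., with the runtime already present, a first web service can literally be one line:

    docker run -d --restart unless-stopped -p 80:80 nginx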
One of my favorite things is how it forces config and artifact locations to be explicit and consistent. No more "where the hell does this distro's package for this daemon store its data?" Don't care, it puts it wherever I mapped the output dir in the container, which is prominently documented because it pretty much has to be for the container to be usable.
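E.g., with the stock nginx image, content lives exactly where you choose to map it (the host path here is an assumption):

    docker run -d -p 8080:80 -v /srv/www:/usr/share/nginx/html:ro nginx
    # /srv/www on the host is now the docroot; no distro archaeology required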
Hell, it makes managing my lame, personal basement-housed server easier, let alone anything serious. What would have been a bunch of config files plus several shell scripts and/or Ansible is instead just a one-command shell script per service I run, plus the presence of any directories mapped therein (I didn't bother to script creation of those; I only have like 4 or 5 services running, though some include other services as deps that I'd have had to manage manually without Docker).
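One of those per-service scripts, sketched (the service, ports, and paths are assumptions; Jellyfin is just a stand-in for any self-hosted app):

    #!/bin/sh
    exec docker run -d --name jellyfin --restart unless-stopped \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      jellyfin/jellyfin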
Example: Dockerized Samba is the only way I will configure a Samba server now, period. Dozens of lines of voodoo-magic horsecrap are replaced with a single arcane-but-compact-and-highly-copy-pastable argument per share. And it actually works the first time you try it. It's so much better.
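For the flavor of it, a sketch using one popular community image (dperson/samba; the share path is an assumption, and if memory serves, the -s argument packs name;path;browseable;read-only into one string):

    docker run -d --name samba -p 139:139 -p 445:445 \
      -v /srv/media:/media \
      dperson/samba -s "media;/media;yes;no"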