You still have to decide on a single OS to reduce maintenance problems. He could just have installed all the services (which are all available as packages) and handled the configuration files, instead of configuration files + Dockerfiles + the S3 costs of hosting Docker images that contain nothing but a base OS, one package, and a configuration file.
As for the "S3 costs of Docker images": it's a few cents per month.
> OP can now deploy all of his websites onto another server instantly
He could have just written a setup script that installs the needed services on any machine.
Without adding all the Docker-specific complexity described in the post:
- Moving to Alpine
- Building his own Alpine image
- Building 9 more Docker images
- Orchestrating all the Docker images
- Signing up for Amazon's container registry
Seems totally insane to me.
My guess is that you have never used containers at all, let alone Docker.
A Dockerfile is just a setup script that installs the needed services on an image, which you can run on any machine. That's it. There is no added complexity.
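For the sake of illustration, here is a minimal sketch (my own example, not the author's actual files) of what such a "setup script" amounts to for one service:

    # Dockerfile
    FROM alpine:3.9
    RUN apk add --no-cache nginx
    COPY nginx.conf /etc/nginx/nginx.conf
    CMD ["nginx", "-g", "daemon off;"]

Build it with `docker build -t mysite .` and run it on any machine with `docker run -d -p 80:80 mysite`.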
This is not about a single Docker file vs a setup script. If you read my post you will see that I describe the steps the author took. And they are plenty.
My guess is that you did not read the article at all.
He was "building Docker images for each of the services". So not a single one. 10 of them. And he signed up for a commercial registry to host them. An additional service he depends on now.
Yet even a single Docker file would not be as simple as a setup script. A setup script on the host OS would install some packages that the host OS will keep up to date. Using a Docker image instead puts the burden on you to keep it up to date.
> A setup script on the host OS would install some packages that the host OS will keep up to date.
This is simply not as easy as you make it out to be. Installing dozens of services from the OS inherently creates a nest of dependencies which is hard to reproduce explicitly on other systems.
Whereas Docker provides explicit isolated environments for each service so it's far easier to reproduce on other systems. This appeals to me for cloud environments but Docker on the desktop might be a bit too far for me...
It also removes attack vectors and the weirdness that happens when a package sees optional dependencies on the system. I.e., if I need LDAP for one thing, I don't have services in other containers trying to work with LDAP.
Yes, most docker enthusiasts don't do this. They run a bunch of containers full of security holes.
I expect this to become a hot topic as soon as we start witnessing data breaches that have outdated containers as their source.
That's pretty much the baseline when dealing with any software system, whether it's a bare metal install of a distro, a distro running on a VM, or software running in a container.
> Now every time a package in Alpine gets an update you have to update all 10 containers.
All it takes is inheriting the latest version of an image and running docker build prior to redeploying.
I mean, this stuff is handled automatically by any CI/CD pipeline.
If you don't care about running reproducible containers you can also sh into a container and upgrade it yourself.
Do you also complain about package formats such as deb or rpm because most Debian and Red Hat users run a bunch of software full of security holes?
Software updates are not a container issue; they are a software deployment issue. I mean, when you complain about keeping packages updated you are in fact complaining about the OS running on the base image.
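Concretely, the whole update path is a handful of commands (a sketch; the registry and service names are placeholders):

    docker pull alpine:3.9                            # refresh the base image
    docker build --pull -t registry.example.com/myservice .
    docker push registry.example.com/myservice
    # then on the server:
    docker pull registry.example.com/myservice
    docker stop myservice && docker rm myservice
    docker run -d --name myservice registry.example.com/myservice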
> That's pretty much the baseline when dealing with any software system
> All it takes is inheriting the latest version of an image
> I mean, this stuff is handled automatically by any CI/CD pipeline.
> you can also sh into a container and upgrade it yourself
> nest of dependencies which is hard to explicitly reproduce on other systems
A good way is to call your setup script "setup_debian_9.sh" for example, so it is clear which OS it was tested on.
10 services, 10 installers, 10 installations.
Where exactly do you see any problem or issue?
> even a single Docker file would not be as simple as a setup script. A setup script on the host OS would install some packages that the host OS will keep up to date. Using a Docker image instead puts the burden on you to keep it up to date.
That's simply wrong on many levels. Yes, a single Dockerfile is as simple as (if not simpler than) a setup script. A Dockerfile is a setup script.
And yes, you can update individual containers or even build updated images.
Again, you seem to be commenting on stuff you know nothing about.
a) you have 10 setup scripts rather than 1. This would make sense if you actually wanted to have different dependencies/OS setup/whatever for your 10 services. But if you've decided to standardise on a common baseline set of dependencies for the sake of consistency (which is a valid choice) then why repeat them 10 times over?
b) You have the extra intermediate artifacts of the images which just give you one more thing to get out of date, go wrong, or slow down your process. Rather than run script -> get updated things, it's run script -> generate images and then deploy those images. Sure, it's all automatable, but what's it gaining you for this use case?
If you have a single setup script to build, package and deploy all 10 services, and you can't build and/or deploy each service independently, then you have more important things to worry about than figuring how containers are used in the real world.
What other solution for easy portability do you know? Or how would you propose to handle this?
If it is easier than `docker build && docker push` on one side and `docker pull` on the other, I'm all ears!
If upstream supplies a decent Docker image, chances are that means the package is more amenable to scripting and running in a chroot/jail/container - and documents its dependencies somewhat.
That said, snapshotting the "state" of your container/jail can be nice. Recently I used the official ruby images, and could build a custom image to use with our self-hosted gitlab (with built-in docker registry) that a) got a standard Debian-based ruby image and applied updates, and b) built freetds for connecting to mssql.
Now I can quickly update that base image as needed, while the CI jobs testing our rails projects "only" need a "bundle install" before running tests.
And the scripts are largely reusable across heterogeneous ruby versions (yes I hope we can get all projects up on a recent ruby..).
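Roughly what such a Dockerfile looks like (a reconstruction; the FreeTDS version, URL and configure flags are my assumptions, not the actual file):

    FROM ruby:2.6
    # a) apply Debian updates on top of the standard image
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
    # b) build FreeTDS from source for mssql connectivity
    RUN curl -fsSL https://www.freetds.org/files/stable/freetds-1.1.6.tar.gz | tar xz \
     && cd freetds-1.1.6 && ./configure && make && make install \
     && cd .. && rm -rf freetds-1.1.6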
Puppet for the server-automation part. Languages that make it easy to produce a "fat binary" for the isolation part.
Docker solves a real problem for languages where deployment is a mess, like Python. It just grates on me when the same people who spent the last 10 years mocking Java (which does essentially the useful parts of docker) are suddenly enthusiastic about the same kind of solution to the same kind of problem now that it has a trendy name.
You're arguing a point never made. That containers make things portable is not saying that's the ONLY thing that makes things portable.
I find using containers a lot easier to be portable when I have multiple apps that bizarrely require different versions of, say, python, or python libs and the same version of python.
Although, it's not exactly the same thing, because with Docker you have everything already installed in the image. I've only used Ansible and I was never happy with its dynamic nature.
Docker (all container solutions really) aren't a panacea, but they solve a very real problem.
If it looks waaaay different to puppet, ansible and chef, there's a reason for that :) Doing provisioning "properly" means managing every file on the drive...
For example, there's no concept of `Maybe this request failed, you should handle it`. So when you run the deployment script, the request fails, and so does the rest of your deployment process.
Defining the possibility of failure with a type system would force you to handle it in your deployment code and provide a backup solution.
Why? Personal projects aren't more stable or bound to a single provider. If anything, personal projects may benefit more from a deployment strategy that makes it quite trivial to move everything around and automatically restart a service in a way that automatically takes dependencies into account.
> Considering that it's likely we might change our personal infrastructure less than one every year
In my experience, personal projects tend to be more susceptible to infrastructure changes as they are used to experiment with stuff.
> and I've never got a case when an unmaintained docker setup can run 6 months later,
The relevant point is that the system is easier to maintain when things go wrong and no one is looking or able to react at a moment's notice. It doesn't matter if you decide to shut down a service 3 or 4 months after you launch it, because that's not the use case.
> I'm not sure if the value for portable is that high.
That assertion is only valid if you compare Docker and docker-compose with an alternative, which you didn't. When compared with manual deployment there is absolutely no question that Docker is by far a better deployment solution, even if we don't take into account the orchestration functionalities.
I look at this from a different perspective: I have plenty of actual things to do, personal infra should be the least of my concerns, and I should be able to get them up and running in the least amount of time.
> I've never got a case when an unmaintained docker setup can run 6 months later
It really depends on the well-being of the host and the containerized application. I have plenty of containers running for more than a year without a single hiccup.
I’ll let you know my kids personally disagree with you on this one if Plex on the TV or iPad suddenly doesn’t work.
Being able to easily migrate apps is super nice too when changing hardware/servers.
That's technically right, but not what you'd expect. Docker runs root in a few restricted namespaces and behind seccomp by default. The syscall exploits people are worried about are often simply not available. Even then it's easy to take it another step and create new users. You could even have root mapped to a different user on the outside if you prefer that.
That shouldn't be an issue if you're coming from FreeBSD - https://www.freebsd.org/doc/en/articles/new-users/adding-a-u...
> If you did not create any users when you installed the system and are thus logged in as root, you should probably create a user now with ...
I have not used docker. However, I have been putting things in containers with FreeBSD 'jail' since 2001 ...
If I jail my httpd and there is a vuln in the httpd, the attacker gets the jail (and the httpd, etc.) but not the underlying system.
That's a huge win, in my mind - is that not how it works in dockerland?
jail is pretty tremendous - a real win in many areas.
The biggest advantage of Docker in my opinion is that it makes it much easier to make conflicting software coexist on the same machine (for example two websites requiring two different versions of node.js or php). Also it is nice that you can build the image once and then deploy it many times; Ansible's equivalent is rebuilding the image every time you want to deploy it.
Also I find it a bit easier to accomplish what I want with a Dockerfile than with an Ansible script and if you make some mistakes it is easier to rebuild an image than "cleanup" a VM instance.
So, Docker smooths many edges compared to Ansible, but I wouldn't consider that a _huge_ advantage, especially in the context of a personal infrastructure.
The downside is that you should rebuild the containers daily to be sure to have all the security patches. Not as convenient as apt-get. Maybe it's more cost effective to run another VPS or two.
There are a lot of use cases for containers; this isn't one (although over-engineering personal projects is always fun).
Also, `docker commit` is pretty easy, and you can also just back up individual volumes.
Portability with LXD is even cleaner, as all the data is in the LXC container. It's not immutable, and the initial setup is a little more involved: you have to set up services on each container (i.e. no Dockerfiles), and you need to figure out ingress on the host, often less declaratively - normally routing 80/443 via iptables to an nginx or haproxy container, which then reverse-proxies to the relevant container per domain-based ACLs... etc.
But, I still prefer it to Docker. I rather don't mind initially setting up and configuring the needed services the first time on each container... And for me that's a good way to get familiar with unfamiliar packages and services/applications that I might want to try/play with, rather than just go find a dockerfile for Application X or Y and not actually learn that much about the service or application I am deploying. Speaking for myself only-- obviously there are the gurus who know common services and applications inside and out already, and can configure them blindfolded, so Dockerfile everything would make sense for them.
To each his/her own.
It's fun, easy to backup, easy to migrate, easy to just test something and cleanly throw it away. And in practice the containers are pretty much like VMs (talking about personal projects here, corporate is more complicated of course).
And the upfront work is not that much. Do the quick start guide and one or two things. Maybe you don't even need to configure iptables manually, "lxc config device add haproxy myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:localhost:80" does a lot for you.
Can also only recommend LXD/LXC.
Dockerfiles are a part of the configuration
> s3 costs of docker image
Just like how most programs are available as packages in your favourite distribution's package manager, most programs are available as Docker images on Docker hub, which is as gratis as packages built for distros. And for the ones that don't have an image on Docker hub, you can include the Dockerfile to build it on the spot in the production environment.
The real benefit comes from having the _ability_ to use a different base distribution if required. While alpine is suitable for most applications, sometimes you might be required to run a different distro; perhaps you're packaging a proprietary application compiled for Glibc.
Also, networking isolation is straightforward in Docker. Let's say there's a serious security bug in Postgres that allows you to log in without any password. If someone could perform a RCE in a public-facing website, this would be a catastrophe as they'd be able to exploit the database server as well. With Docker, you can easily put each service in its own network and only give access to the database network to those services that need it. Or you can be even more paranoid and have a separate database and a separate network for each service.
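In docker-compose terms that isolation is a few lines (a sketch; service and network names are made up):

    # docker-compose.yml (fragment)
    version: "3"
    services:
      website:
        image: mysite:latest
        networks: [web, db]
      postgres:
        image: postgres:10
        networks: [db]        # unreachable from the 'web' network
    networks:
      web:
      db:
        internal: true        # containers on 'db' also get no outbound access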
One more feature I'm fan of is the ability to run multiple versions of a software. Maybe some applications require Postgres 9.x and some require Postgres 10.x. No problem, run both in separate containers. You can't do that with regular distributions and package managers (at least in any I know of).
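For example (container names and host ports are arbitrary):

    docker run -d --name pg96 -p 5432:5432 postgres:9.6
    docker run -d --name pg10 -p 5433:5432 postgres:10
    # two Postgres majors side by side, each with its own data and config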
Nix would be your best shot at that outside Docker.
Guix is like Nix, but I don't think Nix has an equivalent command to bundle a bunch of packages into a self-contained file for running on arbitrary distros (without Nix/Guix installed).
cgroups and network namespaces.
> run multiple versions of a software
Wrt "bug ridden"
> if I were having trouble with something Docker-related I would honestly feel like there was a 50/50 chance between it being my fault or a Docker bug/limitation
> Still have to decide on a single OS to reduce maintenance problems.
No you don't, you can run various distros in docker containers. We're using a mix of Debian (to run legacy services developed to run on old Debian LAMP servers) and Alpine (for our sexy new microservices) at my current job.
> Could just have installed all the services (which are all available as packages) and handled the configuration files
Then you would have a system dependent on the volatile state you configured by hand, meaning the system configuration is not declarative or reproducible.
I think you are missing the point. For each distro base you introduce via docker you must track and update the security releases. Standardizing on one base distro absolutely reduces the ongoing maintenance work.
If you're using Ansible as the author already was, you essentially have this already. I can do a one command deploy to cloud servers, dedicated servers and colo boxes with just Ansible. Docker gets you a slightly more guaranteed environment and an additional layer of abstraction (which has its own set of pros and cons), but that's about it.
Pin specific versions of your packages, couple that with caching, and you're sitting pretty.
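In Dockerfile terms that looks something like this (version numbers illustrative, not current):

    FROM alpine:3.9
    RUN apk add --no-cache nginx=1.14.2-r0   # pinned, so rebuilds stay reproducible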
Some software, like an RDBMS, is heavy to set up. You can do it once and it is not that hard, but then when the day comes to reinstall, upgrade or move it to another version:
- You need to remember how to do that
- If you upgrade the OS, the steps have changed
- If the install (of the OS) goes kaput, you can lose the work of setting everything up so far - the steps (i.e. I have a few times managed to ruin an ubuntu install; instead of wasting time fixing it, I spin up another server and redo)
- If the install (of the app) goes kaput, you are left with an inconsistent state (I managed to break a PG database in an upgrade because I was looking at the wrong tutorial; instead of wasting time fixing it, I spin up another docker container and redo from backup - see the sketch below)
- I try to upgrade everything fast. A new ubuntu version? Upgrade. A new major PG version? Upgrade. Not always to production, but I don't like to wake up 2 years later and face a BIG upgrade cycle; I prefer to spread the pain over the year. If something wants to break, I want to see it coming. So I reinstall a lot of times.
And it works across OSes. So all the problems above were on my dev machine!
P.S.: I want something like docker; I don't care about orchestration (so far), only something that allows packaging apps/dependencies, works on my small servers, and allows (re)building as fast as possible. What other option can work? (P.S.: My main deps are postgres, nginx, redis, python3, .net core, rust (coming). Wish to set up android/ios toolchains too.)
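The "spin another container and redo from backup" step mentioned above can be as small as this (a sketch; names, tags and the dump file are placeholders):

    docker rm -f pg && docker volume rm pgdata        # throw away the broken state
    docker run -d --name pg -v pgdata:/var/lib/postgresql/data postgres:11
    # wait a few seconds for postgres to accept connections, then:
    docker exec -i pg psql -U postgres < backup.sql   # restore the dump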
An alpine container is in the middle double digit MB range and will be reused in all other images using it. The space costs are trivial. Furthermore the base image is hosted on Dockerhub and last I checked you can host up to 3 images privately on Dockerhub and unlimited in public.
Agree. Especially in a native cloud, like AWS, where you have already taken care of host deaths, scalability, and making data persistent - aka good automation.
You could just install the services and handle the configuration files, but then you suffer from replicability and deterministic build problems:
- The configuration files are all over the place, and while you can remember most of them, you can't be 100% sure that you got everything in /var/lib/blahblah/.config/something.conf, /etc/defaults/blahblah, /etc/someotherthingd/blahblah.conf, etc.
- Rebuilding the server becomes a problem because you don't remember a year and a half later everything you did to set the damn thing up. Even worse, you've been tweaking your config over time, installing other packages that you can't remember, putting in custom scripts for various things...
- Recovering from a catastrophic failure is even worse, because now you don't even have the old configuration to puzzle through.
- If you set up another failover system, you can't be sure it's configured 100% the same as the old one. And even if you did get them 100% the same, they WILL drift over time as you forget to migrate changes from one to the other.
You can mitigate this by using ansible or chef or docker or the like.
The other handy thing with container tech is that you can tear down and rebuild without having to wipe your hard drive and spend 30 minutes reinstalling the OS from the iso. Iterating your builds until you have the perfect setup becomes a lot more appealing when your turnaround time is 2-5 minutes.
Damage control becomes a lot easier. A bug in one program doesn't damage something in another container.
After this level, you can move up to the orchestration layer, where each container is just a component, and you tell your orchestration layer to "launch two of these with failover and the following address, link them to instances of postgresql, do an nginx frontend with this cache and config, pointing to this NAS for data", and it just works.
It's a lot like programming languages. You COULD do it all in assembly, but C is nicer. You COULD do it all in C, but for most people, a gc functional language with lambdas and coroutines etc gives you the sweet spot between cognitive load and performance.
With a shared language for doing these powerful things, you now gain the ability to easily share with others (like docker hub), increasing everyone's power exponentially. Yes, you need to vet whose components you are using, but that's far less work than building the world yourself.
Yes, there's a learning curve, but the payoff over time from the resulting power of expression is huge.
Also, I'm not sure what the benefits are of going `FROM alpine` and installing nginx, rather than just starting `FROM nginx:alpine`. The latter has the benefit of more straightforward update logic when a new nginx version is released - `docker build` will "just" detect it. It won't notice that the Alpine repos have an upgrade, though, and will reuse cached layers for `RUN apk add nginx`.
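If you want a rebuild to pick up both a newer base image and newer Alpine packages, you have to defeat both caches (sketch):

    docker build --pull --no-cache -t mysite .
    # --pull re-fetches nginx:alpine (or alpine); --no-cache re-runs `RUN apk add nginx`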
I came to pretty much the same conclusion too.
For years I was using Alpine but as of about a year ago I've been going with Debian Slim and haven't looked back.
I'd much rather have the confidence of using Debian inside of my images than save 100MB on a base image (which is about what it is in a real web project with lots of dependencies).
Either way, it's not 1.12GiB I was getting with a fat `FROM python:3` base image.
They mention trust of who’s building the images, which is valid, but then using `apk add nginx` means you still have to trust that package maintainer. It’s really just moving trust from Docker to Alpine.
It’s fair that it’s a reduction in the number of entities that need to be trusted, since they are using Alpine as their operating system already; however, they are still running Docker binaries...
It’s pretty neat to be able to say they don’t use Docker Hub for anything, but it doesn’t seem to offer any advantage.
Nah, that would require going completely `FROM scratch`. Otherwise that won't be true as `alpine` base image is still hosted on Docker Hub: https://hub.docker.com/_/alpine
> I'm not sure what the benefits are of going `FROM alpine` and installing nginx, rather than just starting `FROM nginx:alpine`.
The latter hands your balls over to Docker Inc, the "Alpine Linux Development Team" and a guy or girl called "jkilbride".
Edit: No, it does not! See toong's comment below.
This totally voids my comment!
Everything else matches patterns we use. WebPageTest is probably the most hilariously janky application that’s the best at what it does that we use. Standing it up locally so you can test internal stuff is a revolting experience I’ve had several times.
TBH I found kubernetes easier to use than docker compose. Mainly because I saw little reason to learn the syntax when I was already using kubernetes yamls and kubeadm makes it so easy to stand up. What you have looks pretty simple though so I may take another look at it.
You can actually ship images as tarballs and reimport them. That’s what I do on personal stuff instead of standing up an ECR/registry. As long as you version tag your containers it should be fine.
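The tarball workflow is only a couple of commands (tags and hostnames are placeholders):

    docker save myapp:1.2.3 | gzip > myapp-1.2.3.tar.gz
    scp myapp-1.2.3.tar.gz server:
    ssh server 'gunzip -c myapp-1.2.3.tar.gz | docker load'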
HTTP/2 is pretty much a nonissue for everyone except some janky java clients at this point. We turned it on a couple of years ago with only minimal issues.
Maxmind dbs are... awkward to get access to. They are expensive as hell if you buy them directly but usually built into edge service products. F5 had them as well I think.
I would like to switch to a dockerized setup, but running everything on Debian/stable has the advantage of unattended-upgrades (which has worked absolutely flawlessly for me for years across dozens of snowflake VMs). Not going back to manual upgrades.
I tried a Registry/Gitea/Drone.io/Watchtower (all on same host, rebuilding themselves too) pipeline and it worked, but felt patched together. Doing it wrong?
I use dockerhub/github/drone/watchtower to automatically publish images and update services. I use dockerhub and github to avoid self-hosting the registry and version control. I also have about a dozen servers (as opposed to single-server in your example). This works really well for me and does not feel patchy. Yes there are moving parts, but fewer moving parts than a full-blown orchestration system.
From what I've seen, there is no consensus on the "right" way to do it.
You could run the upgrades in the container and lose them when it's re-upped, or, you need to continuously deploy the containers.
This alone is one major reason I'm in favour of statically linked binaries in from-scratch containers, where possible.
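For example, with a pure-Go service the whole image is the binary (a sketch; names are placeholders):

    CGO_ENABLED=0 go build -o app .

    # Dockerfile
    FROM scratch
    COPY app /app
    # add CA certificates and tzdata if the binary needs them
    ENTRYPOINT ["/app"]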
PSA: everyone should turn on the userns option in docker daemon settings. It messes with volume mounts but you can turn it off on a per container basis (userns=host) or arrange a manual uid mapping for the mounts.
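Concretely (assumes the subordinate uid/gid ranges exist in /etc/subuid and /etc/subgid):

    # /etc/docker/daemon.json
    { "userns-remap": "default" }

    # per-container opt-out where volume ownership gets awkward:
    docker run --userns=host -v /data:/data myimage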
> The cornerstone of Docker is in its ability to use Linux control groups, namespace isolation, and images to create isolated execution environments in the form of Docker containers.
If you want to run Docker at home, I suggest you give it a try. All you need is an old computer and a USB stick (it runs in RAM). Unraid is basically Linux with a (happy little) web interface for NAS shares + apps.
 - https://github.com/wemake-services/caddy-gen
I've been meaning to start blogging again and use this as a first topic.
Now my data volumes are separate from the system. I use RAID for redundancy on the physical machine, with another drive for differential filesystem-level backups (a snapshot every day for 60 days with only differential storage cost; see rdiff-backup). When I upgrade I just add new drives, assign them into the RAID, wait for them to sync, and remove the old drives. By generational I mean my storage and how I do things aren't going to change when I reset things like they used to. All my data is outside the system I'm using and must be attached.
I can for example, spin up an AWS machine, run the package install commands for the apps I normally use, VPN to my fileserver and mount my home directory and project volumes, and in quite literally about 15-30 minutes have the exact same environment as I do at my house. CFEngine et al would speed that up quite a bit.
How does systemd, or any init for that matter, come into the picture if you're running everything inside docker? Containers don't use any init, right? They just execute the binary in the host environment (but containerised). Or am I missing something?
Edit: nevermind, OP is using Alpine as the host OS as well.
Secondarily, if you want to be neat and save some pids and kernel memory, you need an init system to wait(3) on orphaned zombie processes.
These are the only two use cases AFAIK, which a small init system such as tini satisfies, without the complexity and size of systemd.
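Docker can inject such an init for you; `--init` runs the bundled tini as PID 1:

    docker run --init -d myimage
    # tini reaps orphaned zombies and forwards signals to the child process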
I have multiple network devices. I want some to be controlled by processes running in a container; effectively I want some processes to run under a user account but still provide root (root-like?) access to the specified network device(s). I want to be able to give a specific (containerized) user full control over one or more specific network devices. My (naive?) understanding is that the init daemon takes care of bringing the network online and then subsequent management of it. For systemd, that would be Network Manager? Or do I misunderstand?
Plus the package manager and system structure was a bit of a learning curve.
But otherwise I was very happy with it. The lack of portability with my Linux ArchLinux/Debian desktop/servers can be overcome with time and experience like anything.
I used some basic Docker images as well but they always needed some config work, as they mostly installed the base software but didn’t get your apps running. Other people may be using more sophisticated images though.
It depends on the container runtime. LXC/LXD run an actual init system and can be treated like lightweight VMs running multiple processes.
Edit: Ah, you meant the host OS. I can't reply to wezm down below for whatever reason (there's no "reply" button), so I'll just edit this to say I didn't realise that he was using Alpine as his host OS as well. I haven't seen many people running it outside Docker, so it's quite interesting.
Suppose the author updates one of his rails apps and there are some database schema modifications.
Is that handled by docker?
How long does a deployment take? (Minutes, seconds ... basically is the tool able to figure out what is changed and only apply the changes or does it remove the old installation and build the new from scratch?)
If you have a more complex setup, e.g. by using Kubernetes, you can do things like run both versions at the same time, perform A/B testing, or have canary deployments to ensure the new version works.
Time for deployment would be most likely in seconds unless the setup is complex/convoluted.
Schema modifications are another beast. For small use cases, you could run a specialized one time container that performs the modifications, but once you need high availability, you'd have to consider a more complex approach. See https://queue.acm.org/detail.cfm?id=3300018
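For the small case, the one-time container can be as simple as this (a sketch; assumes the Rails app is a docker-compose service named `app`):

    docker-compose run --rm app rake db:migrate   # one-off migration container
    docker-compose up -d app                      # then roll the new version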
The author mentions the Docker port for FreeBSD. According to the FreeBSD Wiki, it's meant to run Linux Docker images and relies on FreeBSD's Linux ABI layer to do so. To me, this is the wrong approach.
FreeBSD already has good container technology; what it really needs is good tooling around it. Since the author ended up building his own Docker images, I suspect that he'd be happy with a FreeBSD-equivalent way to declaratively build and manage jails.
Otherwise it's a great distro and I use it for non-Pythonic stuff or for Python with no dependencies.
I sympathize, but rather sounds like the problem is that you're not using a purely interpreted language anymore.
K8S is a platform to build things. Because of this most of the amazing features you have access to are built by the community (service mesh for example).
Docker Compose is a mess once you have 10+ services with each having different containers backing it.
Given that he's using docker-compose, I wonder why he's chosen to host his images in a repository at all, instead of just specifying the Dockerfile in the yaml and having them locally.
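That is, something like (fragment; names are made up):

    # docker-compose.yml
    services:
      website:
        build: ./website    # compose builds ./website/Dockerfile locally
        # instead of: image: registry.example.com/website:latest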
I was a big fan of slim images until unsolvable bugs started popping up. Like others have said, not much benefit shaving off a few hundred mbs in the age of fiber.
I also looked at docker and gave up. I like bhyve, and have considered a low-pain migration to bhyve instances to package things into Linux, and then (re)migrate into Docker - a way to avoid the pain and cost of a duplicate site to build out and integrate.
I wish something as logistically simple as docker compose existed in a BSD-compatible model to build packages against. I'd like the functional isolation of moving parts, and the redirection stuff.
Nice write up. I wonder how many other people are in this 'BSD isn't working for me as well' model?
For personal it’s probably fine but I wouldn’t use it in prod again.
Debian ZFS is not easy to install as root FS which is .. disappointing. It would be nice if it was integrated into the net install .iso as a legit disk install option.
I’ve run it on Ubuntu for at least 5 years without issue.
I need to re-spin a set of standard utilities on local hardware from time to time so am looking for the best way to manage the config files (bind, Apache, and the like)
Ansible always win: no agent, works over plain ssh, gazillion of modules ...
It just takes a little while to learn to be efficient with it apparently.
The people I found "hating" ansible just didn't know the few options they needed to understand what's going on.
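For a flavour of it, a minimal playbook for the Apache case above (group name, paths and files are made up):

    # site.yml -- run with: ansible-playbook -i inventory site.yml
    - hosts: webservers
      become: yes
      tasks:
        - name: install apache
          apt: { name: apache2, state: present }
        - name: deploy config
          template: { src: apache2.conf.j2, dest: /etc/apache2/apache2.conf }
          notify: restart apache
      handlers:
        - name: restart apache
          service: { name: apache2, state: restarted }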
I've seen examples of how Ansible will keep recreating instances but that's only if you don't define what your infrastructure should look like in the inventory.
Edit: I have not tried it
I reckon a good benefit would be static typing and intellisense.
I use a FreeNAS server to manage storage pools and the bare metal box, run Ubuntu vms on top of it, and then manage top level applications in Docker in vms via Portainer.
This is nice because the vms get their own ip leases, but can still be controlled and very locked down (or not) depending on their use.
Docker volumes are mounted over NFS from the underlying NAS, and the docker level data is backed up with the rest of the NAS.
What would you use for doing the same with jails?
The preferred alternative to docker is rkt.
Kubernetes will take anything that implements CRI.
However, I still stand with rkt rather than docker...
Kubelet actually has a translation layer baked into it that it starts in-process when detecting docker, which provides the gRPC CRI interface on a real filesystem socket.
Being on HN front page has pushed it up from a baseline of 7.5% utilisation to about 12.5%.
Thanks for the article, very informative.
16,323 bytes of text + 6,933 bytes of vector graphics = 23,256 bytes. 22KB of content, 171KB total; 87% of the transfer is potential bloat.
It could be worse, but there's almost certainly room for improvement.
Mind that you don't need to use Docker to use containers, there's always LXD and others.