
Dissatisfied with Docker - khuey
https://robert.ocallahan.org/2019/09/dissatisfied-with-docker.html
======
aloknnikhil
> In fact, a global system daemon should not be needed. Either users should be
> able to run their own daemons or container management should avoid having a
> daemon at all, by storing container state in a shared database.

Absolutely love Podman. You can even define registries to work with docker
hub, easily.

[https://podman.io](https://podman.io)

~~~
nikisweeting
Yeah, but no docker-compose support :/ That's a dealbreaker for us. There's
an alpha podman-compose project, but we couldn't get it running.

~~~
likeclockwork
You really don't need it, since Podman has pods of containers that
communicate with each other. You can export and play back the pods as
Kubernetes YAML, but I just use a bash script plus an environment file to
configure, create, and launch pods. What else is docker-compose really doing
for you?

~~~
nikisweeting
Our entire company runs on roughly 20 docker-compose files. It's a super elegant
way to describe collections of containers without the management overhead of
Kubernetes. We've had nothing but good experiences with compose, and lots of
headaches with Docker swarm and K8s.
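
For readers who haven't used it, a minimal docker-compose.yml for the kind of
web-plus-database stack being described might look like this (service names
and images are illustrative, not from the thread):

```yaml
version: "3"
services:
  web:
    build: .              # hypothetical app built from a local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

A single `docker-compose up -d` then brings up the whole stack; that
one-command workflow is what the comments above are reluctant to give up.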

------
neckardt
I'm surprised nobody is mentioning LXC[1]. I'm by no means a container
expert, but they claim to be more secure since containers default to running
as non-root. I had no trouble installing LXC with apt, whereas with Docker I
often got an outdated version. I'm now using LXC for all of my basic
container applications (self-hosting a wiki and a few other sites).

[1]: [https://linuxcontainers.org/](https://linuxcontainers.org/)

~~~
judge2020
Docker isn't up to date in the official Ubuntu (and Debian?) repos for some
odd reason. A better way is either snap (Snapcraft) or adding Docker's own
apt repository.

~~~
FreeHugs
Aren't those packages provided by Ubuntu and Debian? Then which versions get
shipped would not be under Docker's control.

Debian 10 provides docker.io, as far as I can tell.

------
KaiserPro
A few things that I would add to that list:

o No primitives to deal with secrets.

o Terrible disk handling (aufs was just horrid; overlay2, I think, misses the
point; device mapper is just silly).

o Poor speed when downloading and uncompressing images.

Of all of them, the most serious is the lack of secrets handling. Basically
you have to use environment variables. Yes, with docker-compose secrets do
appear, but that's only useful if you use Compose/Swarm.

In this day and age, to have secrets as a very distant afterthought is pretty
unforgivable.
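
For context, the Compose/Swarm path alluded to above looks roughly like this
(the file name and image are illustrative); the secret is mounted at
`/run/secrets/db_password` inside the container rather than passed as an
environment variable:

```yaml
version: "3.1"
services:
  app:
    image: myapp:latest        # hypothetical image
    secrets:
      - db_password            # readable at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt    # with Swarm you would use an external secret
```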

~~~
xorcist
The most basic problem with Docker is the use of a daemon that is not init.

Access control is at best problematic.

Upgrading the daemon without losing state is tricky.

Requiring daemon access to build images is insane.

Building this functionality into something like systemd would be more robust
but it's way harder to sell as a product.

~~~
Smithalicious
Yeah Docker should get with the times and be assimilated into systemd like
everything else

~~~
oblio
You could argue about many of the things bundled with systemd, but since
containers are just souped-up processes, this is actually a use case that
makes sense for an init system.

~~~
CameronNemo
I disagree. I would prefer to have a model where the supervisor needs no
knowledge of the LSM profile, namespaces, seccomp profile, or cgroups applied
to a service. Additionally, I would prefer a non-init service supervisor.

------
nickjj
I've been using Docker since 2015ish and the container start up / stop speed
is really the only thing that bugs me.

Everything else is fine for day to day usage IMO (on Windows and Linux at
least) and very much worth the trade offs, but having to wait multiple seconds
for your app to start is tedious since it plays such a heavy role in both
development and even in production. Each second your app is not running is
downtime, assuming you're not load balanced.

I opened an issue for this almost a year ago, but not much has come of it
other than identifying a possible regression in recent-ish versions:
[https://github.com/moby/moby/issues/38077](https://github.com/moby/moby/issues/38077)

We're talking multiple seconds in Docker vs ~150ms as a regular process to
start up a typical web app process like gunicorn in a decently sized Flask app
(many thousands of lines of code, lots of dependencies -- a real world app).
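
A quick way to measure the bare-process baseline for such a comparison; the
command to time is configurable, and Docker itself is not required to run the
sketch:

```python
# Measure how long a bare interpreter process takes to start and exit;
# swap the command for ["docker", "run", "--rm", "alpine", "true"] to see
# the multi-second container overhead described above.
import subprocess
import sys
import time

start = time.monotonic()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed = time.monotonic() - start
print(f"bare process start/stop: {elapsed * 1000:.0f} ms")
```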

~~~
TheWizardofOdds
The long startup time is somehow what keeps me from really liking Docker. In
the end (if I understood correctly), it is supposed to be the go-to tool for
serverless architecture. If my serverless function needs more than a second
to start up, it's not usable for me.

Even the hello-world container, which is only a few kB in size, needs roughly
a second to start up.

~~~
utopian3
> In the end (if I understood correctly), it is supposed to be the go-to tool
> for serverless architecture.

You misunderstood. No one calls Docker a “serverless” architecture.

~~~
TheWizardofOdds
And I didn't either; I called it a tool for serverless architecture. In fact
some tools for serverless architecture like AWS Fargate require you to use
Docker.

~~~
VonGallifrey
I know that AWS Fargate has the tagline of "Run containers without managing
servers or clusters", but that is not what "serverless architecture" means.
Fargate is a container service.

Serverless would be, for example, AWS Lambda, Azure Functions or Google Cloud
Functions.

~~~
onefuncman
Fargate is serverless because the compute is abstracted away completely. A
lambda runtime is just a specialized container and they've added similar
customizability to it lately with Layers/Runtime configuration.

~~~
VonGallifrey
I know that the definitions of these kinds of buzzwords can be fuzzy
sometimes, but I have never heard a definition of serverless that would
include Fargate.

Here is what Cloudflare uses to describe serverless:

> Serverless computing is a method of providing backend services on an as-used
> basis. Servers are still used, but a company that gets backend services from
> a serverless vendor is charged based on usage, not a fixed amount of
> bandwidth or number of servers.

With Fargate you are still charged for the instances your containers are
running on, even while the containers themselves are idle. This is a
container service, not a serverless architecture.

~~~
chatmasta
How do you come to this conclusion from this pricing page? [0]

I might be missing something but that seems like serverless pricing. You might
be thinking of the pricing scheme when Fargate first launched? Or maybe you’re
thinking of ECS, which does in fact charge as you described.

[0]
[https://aws.amazon.com/fargate/pricing/](https://aws.amazon.com/fargate/pricing/)

~~~
VonGallifrey
Everything I am reading there screams container service and not serverless.
Some Quotes:

> You pay for the amount of vCPU and memory resources consumed by your
> containerized applications.

How does vCPU fit into serverless architecture?

> Pricing is based on requested vCPU and memory resources for the Task.

Tasks being collections of containers. This is simply a container service
like ECS or EKS.

> Pricing is calculated based on the vCPU and memory resources used from the
> time you start to download your container image (docker pull) until the
> Amazon ECS Task terminates.

This means you pay for the resources provisioned for your containers until
the container ends, including all idle time and any overprovisioning you did
because you have to tell it which instance size you want. Compare that to the
pricing of Lambda, where you only pay for the time your functions take to
execute when they are called by external events.
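
To make the billing difference concrete, here is a back-of-the-envelope
sketch; all rates and workload numbers are made-up example figures, not
current AWS pricing:

```python
# A provisioned task is billed for every hour it exists, idle or not;
# a pay-per-call function is billed only for actual execution time.
hours = 24 * 30                    # task kept running for a month
vcpu_rate = 0.04048                # example $ per vCPU-hour
mem_rate = 0.004445                # example $ per GB-hour
fargate_cost = hours * (0.25 * vcpu_rate + 0.5 * mem_rate)

invocations = 100_000              # actual calls over the same month
avg_seconds = 0.2                  # average execution time per call
gb_seconds = invocations * avg_seconds * 0.5   # 512 MB function
lambda_cost = gb_seconds * 0.0000166667        # example $ per GB-second

print(f"always-on task: ${fargate_cost:.2f}, pay-per-call: ${lambda_cost:.2f}")
```

The exact numbers don't matter; the point is that an idle-but-provisioned
task keeps accruing cost the way a container service does, while
per-invocation billing does not.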

To bring this back to the beginning of the discussion: complaining about
Docker because it is not a good tool for serverless architectures makes
little sense, because it is not used in serverless architecture offerings.
Fargate uses containers, but it is not a serverless service. Fargate is a
container service that tries to simplify the setup of compute clusters
compared to ECS, EKS and EC2.

~~~
TheWizardofOdds
From [https://www.learnaws.org/2019/09/14/deep-dive-aws-fargate/](https://www.learnaws.org/2019/09/14/deep-dive-aws-fargate/)

> Fargate and Lambda are both serverless technologies from AWS.

------
chucky_z
I am currently on this train.

Having used rkt in the past, I went to revisit it recently only to find this:
[https://www.cncf.io/blog/2019/08/16/cncf-archives-the-rkt-project/](https://www.cncf.io/blog/2019/08/16/cncf-archives-the-rkt-project/)

I am so extremely disappointed in the CNCF as rkt (at the time, at least)
seemed to be more "production ready" than Docker.

Are there any real alternatives? Is the answer "find something else that uses
containerd in a more friendly way?" Is the answer "try to use podman/buildah,
which are weird in their own way?"

~~~
roca
Podman looks interesting at a quick glance, though not supporting
docker-compose makes migrating nontrivial for us.

~~~
3131s
It was discussed recently in the podman Github issues that this functionality
will be covered by the separately maintained podman-compose, which was
recently transferred here:

[https://github.com/containers/podman-compose](https://github.com/containers/podman-compose)

------
ses1984
There are two kinds of software: software no one uses, and software people
complain about.

~~~
notyourday
There's also software that was created to solve a specific problem that got
misappropriated to do something else.

There used to be an ISP called pilot.net. It was a crappy ISP but to solve its
ISP billing problem it wrote a billing system for telcos.

There was a company that tried competing with AWS by selling hosting based on
CPU hyperthreads. It wrote its own provisioning system because it did not
want to pay for Virtuozzo. That company's name used to be dotCloud.

~~~
otabdeveloper4
> There's also software that was created to solve a specific problem that got
> misappropriated to do something else

Docker is a classic case. Docker must be the craziest, most over-engineered
solution for packaging developer artifacts in the Universe.

~~~
majewsky
In the same way that Dropbox is the most over-engineered solution for file
sharing, when all you need is an SVN-backed directory with curlftpfs. ;)

------
bob1029
What's wrong with booting a VM off a standardized base image (e.g. an AMI),
and then applying simple deployment scripts for each application you need to
run? You could probably replicate 90% of the justification for using docker
with some basic scripting.

    
    
      git clone https://github.com/myprofile/my-cool-app
      cd my-cool-app
      chmod +x deploy.sh
      ./deploy.sh
    

That's it. The script above would be responsible for getting your
application's runtime environment up and then getting the application running
as a persistent service with reasonable defaults. Most cloud vendors let you
put something like that in your VM startup configuration. All you need beyond
this point is to ensure that any desired management functions are built into
the app itself. Perhaps a central management service it communicates with
across the public internet would be a good place to start. You don't need a
lot of tooling to make a huge impact here. If you own the codebase behind it,
and have a fundamental understanding of your deployment techniques, you can
easily pivot and embrace radical new approaches. If you are stuck on Docker,
pivoting doesn't look as easy from where I am standing.
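
As a sketch of what such a deploy.sh might do (assuming a systemd host, with
all names and paths invented for illustration), it could generate a unit file
that keeps the app running as a persistent service:

```shell
#!/bin/sh
# Hypothetical deploy.sh sketch: register the app as a persistent service.
# A real script would first install the language runtime and dependencies.
APP_NAME=my-cool-app
APP_DIR="$(pwd)"

UNIT="[Unit]
Description=${APP_NAME}
After=network.target

[Service]
WorkingDirectory=${APP_DIR}
ExecStart=${APP_DIR}/run.sh
Restart=always

[Install]
WantedBy=multi-user.target"

printf '%s\n' "$UNIT"
# With root access you would then install and start it:
#   printf '%s\n' "$UNIT" > /etc/systemd/system/${APP_NAME}.service
#   systemctl daemon-reload && systemctl enable --now "${APP_NAME}"
```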

~~~
jordanthoms
That approach works, but if you are going to go into production with it you
might need to support:

\- Rollbacks

\- Automatic deployment of new releases from CI, rolling releases

\- Healthchecks, detecting if/when the server exits and making sure that VM
gets killed

\- Canary deployments

\- Autoscaling (could use an autoscaling instance group for it, but what if
you need to scale based on other metrics)

\- Log aggregation

\- Monitoring

\- Service discovery, load balancing between services

\- Service mesh

\- Fast startup of instances (starting a fresh VM and waiting for it to set
up everything from scratch could take ~3-4 minutes; docker is in the ~30s or
less range).

\- Bin-packing (run a high-cpu low-memory workload colocated with a low-cpu
high-memory workload for maximum efficiency)

etc, etc, all of which you get either out of the box or without a huge amount
of work if you adopt something like Google Kubernetes Engine (which takes a
few clicks to spin up a cluster).

If you don't need any of that stuff and don't want to learn Kubernetes, it's
totally justifiable to go that way, but personally I would take Google
Kubernetes Engine over something like that any time. There's some up-front
cost learning how it works which then pays off very quickly.

~~~
KaiserPro
None of these features is a function of Docker; they are a function of the
orchestration layer.

Bin-packing isn't what you describe; it's shoving as much stuff onto a
machine as possible. Proper resource-dependency management allows what you
describe, something k8s is weak on compared to other orchestration systems.

------
zests
I'm newer to the Docker scene but haven't really found any of the complaints
in this article realized in my work. Faster speed would be nice but I don't
really mind it now.

I see a lot of complaints about the docker daemon and root privileges on HN
and I've tried to understand where they are coming from but I can't get
anywhere. For instance, I understand the reasoning behind "if there is no need
for a daemon there shouldn't be a daemon" but I don't really understand what
the actual/realized/tangible benefits of not having a daemon would be.

~~~
wokwokwok
It would be faster.

I also don't really care if a container takes 2 seconds or 100 ms to start...
but building Docker images is painfully slow.

I've also ended up (numerous times) in the "docker daemon is borked"
situation, which requires a restart to fix... and you can imagine how that
sucks on prod or multi-tenant systems.

~~~
roca
One reason we care about containers taking seconds to start is that our CI
tests have to start and stop a lot of containers, and it all adds up. And
running them in parallel wouldn't help because they would just bottleneck on
the global docker daemon... or break entirely by interfering with each other,
unless we run multiple Docker-in-Docker setups, which would be even more
painful and add its own overhead.

------
syrusakbary
The article is completely on point. For all the reasons laid out there (and a
few more), I started Wasmer, a new container system based on WebAssembly:
[https://wasmer.io/](https://wasmer.io/)

Here are some advantages of Wasmer vs Docker:

* Much faster startup time

* Smaller containers

* OS independent containers (they can run in Linux, macOS and Windows)

* Chipset independent containers (so they can run anywhere: x86_64, Aarch64/ARM, RISC-V)

~~~
anaphor
One thing I also find sorely lacking in Docker is the ability to run your
containers with the appropriate seccomp privileges (in order to enforce
Principle of Least Authority). I know this is _possible_ with Docker, but it's
not really done much in practice because of various difficulties. I wonder how
difficult it would be to do that with your tool?
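
For illustration, Docker does accept a custom profile at run time via
`docker run --security-opt seccomp=profile.json`; a deliberately tiny
allow-list (far too small for any real workload, shown only for shape) looks
like:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

The practical difficulty mentioned above is mostly in discovering which
syscalls an application actually needs, which is why few people go beyond the
default profile.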

~~~
syrusakbary
Since Wasmer is in control of all the syscalls, it's actually quite easy to
manage privileges in a more fine-grained way (think CloudABI-style
permissions on top of your containers).

~~~
anaphor
That sounds really interesting. I'm going to check it out. Thanks for working
on this!

------
aledalgrande
These are real problems with docker, but do we wanna talk about docker for
Mac? A total performance disaster.

[https://github.com/docker/for-mac/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc](https://github.com/docker/for-mac/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc)

~~~
djsumdog
I hate how the Docker team called it native. Docker for Mac/Windows still
runs in a hypervisor, because so much of Docker is specific to Linux and
cgroups. There was a FreeBSD port of Docker that attempted to implement much
of the Docker API using ZFS + jails, but it went unmaintained and was never
ported to the newer modular Docker implementation.

You're always going to get that performance hit with the hypervisor layer
there.

~~~
charles_f
Docker for Windows is moving to WSL 2; the beta is already out.

~~~
lojack
WSL 2 itself uses a hypervisor though, so this is sort of a moot point.

~~~
addicted
Is this correct? I thought the headline feature of WSL2 was that unlike WSL it
didn't use the hypervisor.

Edit: I looked it up. WSL2 continues to use a VM.

[https://devblogs.microsoft.com/commandline/announcing-wsl-2/](https://devblogs.microsoft.com/commandline/announcing-wsl-2/)

~~~
skissane
> Is this correct? I thought the headline feature of WSL2 was that unlike WSL
> it didn't use the hypervisor.

No, the headline feature of WSL2 is it uses a real Linux kernel under a
hypervisor, as opposed to WSL1's approach of a kernel driver running inside
the NT kernel which partially emulates the Linux syscall API.

> Edit: I looked it up. WSL2 continues to use a VM.

More accurately, WSL2 starts using a VM, whereas WSL1 didn't. WSL2 runs the
Linux kernel inside a VM. WSL1 runs Linux processes under the NT kernel
(albeit as a special type of lightweight process called a picoprocess), with
an emulation layer to translate Linux syscalls into NT kernel calls.

------
djsumdog
These are all pretty good points. I can understand why Docker allows any
base-layer OS, but if they had made their own packages or limited it to a
single distro, it would be easier to check for outdated packages and security
issues in containers.

The cgroups and Linux specific hooks keep Docker from being implemented
natively anywhere else. The fact you have to share the entire Docker socket
for containers to be able to control other containers, or that it's not
trivial to run Docker-in-Docker, is terrible.

I did a similar writeup about the things I hate about container orchestration
systems:

[https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/](https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/)

~~~
imiric
> The fact you have to share the entire Docker socket for containers to be
> able to control other containers, or that it's not trivial to run Docker-in-
> Docker, is terrible.

FWIW, if you enable the remote API, which, granted, isn't as trivial to do
securely as it should be[1], then you can connect from any Docker client by
simply setting the `DOCKER_HOST` env var and using the right TLS certs. This
makes Docker-in-Docker much easier to manage, avoids the security issues of
sharing the Unix socket, and works remotely of course.
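
Assuming a daemon already listening on TLS (the host name and cert path below
are made up), the client side is just three environment variables:

```shell
# Point any docker CLI in this shell at a remote daemon over TLS.
export DOCKER_HOST=tcp://docker.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/remote-certs"  # ca.pem, cert.pem, key.pem

echo "talking to: $DOCKER_HOST"
# e.g. `docker ps` would now list containers on docker.example.com
```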

[1]:
[https://docs.docker.com/engine/security/https/](https://docs.docker.com/engine/security/https/)

~~~
djsumdog
I created an ansible role that does this for me:

[https://github.com/sumdog/bee2/blob/master/ansible/roles/doc...](https://github.com/sumdog/bee2/blob/master/ansible/roles/docker/tasks/servertls.yml)

It creates client certs and copies them locally too, so I can connect to
Docker remotely over a VPN. Still this doesn't solve the original problem I
talked about. It's not about securely connecting to the daemon. Even if you
connect securely, you still essentially have root access on the host machine.

I've considered writing a proxy that restricts which commands can be
forwarded to the Docker host socket (e.g. allowing containers at IPs x, y and
z to restart containers, but not to create new ones or pull images). There
doesn't seem to be fine-grained security or roles built into the Docker
daemon itself.
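
The core of such a proxy would just be an allow-list over Docker API requests
before forwarding them to the real socket. A hypothetical sketch of the
decision function (the method/action pairs are invented for illustration):

```python
# Decide whether a Docker API request may be forwarded to the daemon.
# Paths look like /v1.40/containers/<id>/restart; the last path segment
# names the action. The allow-list here is purely illustrative.
ALLOWED = {
    ("POST", "restart"),   # callers may restart containers...
    ("GET", "json"),       # ...and inspect/list them
}

def may_forward(method: str, path: str) -> bool:
    action = path.rstrip("/").rsplit("/", 1)[-1]
    return (method, action) in ALLOWED

print(may_forward("POST", "/v1.40/containers/abc123/restart"))  # True
print(may_forward("POST", "/v1.40/images/create"))              # False
```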

Running Docker in a Docker container would give you a throwaway Docker to use
for things like Jenkins, GitLab CI and other build tools without giving them
direct root access on the host.

~~~
imiric
> Even if you connect securely, you still essentially have root access on the
> host machine.

Your original point was about the pain of sharing the Unix socket to control
other containers, so that's why I brought up the API approach.

It's been a while since I used Docker, but have you tried enabling user
namespace remapping[1]? I remember it working as documented, and I don't see
why it wouldn't work remotely or with Docker-in-Docker. There's also
experimental rootless support since 19.03[2]; maybe give that a try. Other
than that, make sure you trust
the images you run, or preferably, inspect every Dockerfile, ensure that the
process runs as an unprivileged user, and build everything from scratch
yourself.
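
For anyone wanting to try the remapping route, it is switched on in the
daemon's configuration (typically `/etc/docker/daemon.json`); a minimal
sketch:

```json
{
  "userns-remap": "default"
}
```

With `default`, the daemon creates and uses a `dockremap` user, so root
inside a container maps to an unprivileged subordinate UID on the host.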

I agree with you that this is a major security issue, but we've known that
since its introduction, and things seem to be improving, albeit slowly.

Thankfully, nowadays there is other OCI-compatible tooling you can use and
sidestep Docker altogether. Podman[3] is growing on me, mostly because of
rootless support, though it's not without its issues and limitations.

[1]: [https://docs.docker.com/engine/security/userns-remap/](https://docs.docker.com/engine/security/userns-remap/)

[2]:
[https://github.com/moby/moby/blob/master/docs/rootless.md](https://github.com/moby/moby/blob/master/docs/rootless.md)

[3]: [https://podman.io/](https://podman.io/)

------
alpb
Nearly all the points the author mentions (daemon-less runtimes, daemon-less
builds) have been addressed by the open-source ecosystem in smaller
standalone projects that are already available.

Docker provides a nice high-level layer for the end user on the developer
machine. Its CLI is efficient, so are the builds when they're local, with a
cache. I still use it on my dev machine despite being aware of the
alternatives.

Any security-aware company using containers in production will most likely go
with non-Docker approaches. Some notable examples include runc, Podman,
Buildah, img, ko, and Bazel/distroless.

~~~
solipsism
Distroless is not non-Docker; it's very much Docker.

~~~
alpb
Please explain.

You can build images using Bazel with distroless as the base image. Similarly
I think Jib/ko use distroless images without needing a docker engine.

~~~
solipsism
The thing you build is a Docker image.

~~~
alpb
Technically it's an OCI image. Your argument has nothing to do with anything
mentioned in the original article, and you can run the resulting image
without Docker, so at this point it sounds like random blabber, sorry.

------
jbergknoff
It's interesting to see the complaint about containers starting too slowly. I
haven't seen much discussion about it before this article and this comment
thread, but it's one of my biggest pain points with Docker. I know we can save
some time by skipping some namespaces (e.g. `--net host`) but I've still never
been able to get satisfyingly fast container execution in that way. (e.g.
`time echo` -> 0.000s, `time docker run --rm alpine echo` -> 1.3s; come to
think of it, this is even slower than it used to be)

Still, Docker is the best method that I've seen for distributing software,
especially cross-platform. Not just for shipping a containerized web app to
production, but also running dev tools (e.g. linters, compilers) and other
desktop and CLI applications. I know some people run lots of stuff in
containers
([https://github.com/jessfraz/dockerfiles](https://github.com/jessfraz/dockerfiles),
probably the most prolific), but I think this is a largely underappreciated
facet of Docker.

My team at work is heavily Mac while I'm on Linux. The dev workflows are
identical on my machine, on their machines, and in Jenkins. Nobody has to have
a snowflake setup, installing this or that version of Python, we're all just
`docker run`ning the same image. It's great.

Unfortunately, Docker for Mac's IO performance is abysmal. If past performance
is any indication of future results, that's never going to change. I'm
constantly on the lookout for other ways to share tools cross-platform that
don't involve Docker. Things like podman and binctr are exciting, and I've
played with them, but I don't see them filling this niche.

------
atarian
I just want to be able to save a container binary to a USB drive and then run
it from a different computer without having to install anything.

~~~
astockwell
The first time I emailed a buddy a Go cross-compiled binary and he opened it
and it ran no problem (opsec aside), our worlds changed.

~~~
atarian
This is exactly how I'd like containers to be as well.

------
est
All I want is a process that can be frozen and copied to multiple servers.

start/stop is merely a state change.

~~~
ehotinger
That's quite an oversimplification of the problem: copying the stack, heap,
etc. is the hard part. Anyway, check out CRIU as a starting point.

~~~
e12e
(And note that LXD/LXC allow migration based on it. Not sure about copying;
I suspect you run into issues similar to fork(): who owns the open file
handles and other resources?)

------
broknbottle
switch to podman?

[https://podman.io/](https://podman.io/)

~~~
nikisweeting
No docker-compose support.

~~~
ptman
[https://github.com/containers/podman-compose](https://github.com/containers/podman-compose)

~~~
nikisweeting
I know, but have you actually tried using it? It's not going to be a viable,
production-grade docker-compose replacement for quite a while unless it gets
some serious funding or open-source attention.

------
anordal
... then I suppose Selfdock is for you.

* Does not give or require root.

* Fast: Does not write to disk.

* Fast: Does not allocate memory.

* No daemon.

[https://github.com/anordal/selfdock](https://github.com/anordal/selfdock)

~~~
aequitas
> Give up the idea of data volume containers. Given that volumes are the way
> to go, no other filesystems in the container need to, or should be,
> writable.

Interesting philosophy; it will pose some issues when replicating a
Docker-style 'build' system, though. But that could be separated from the
'run' system with cached layers.

edit: Btw, Docker also seems to support this:
[https://nickjanetakis.com/blog/docker-tip-55-creating-read-only-containers](https://nickjanetakis.com/blog/docker-tip-55-creating-read-only-containers),
also allowing explicit creation of writable tmpfs mountpoints.

------
devmunchies
Do BSD jails solve these problems? From the little I know of them, they are
good at containerization but don't solve the distribution problem that Docker
containers do.

------
rotten
I'd like it to be more like git. Yes Docker has "push" and "pull", but I want
branches, and automatic attach when I do a checkout, and rollback when I screw
it up.

As a developer I'd like to be able to check out a docker image, work _in_ it,
and merge it to master when I'm done. When my code is deployed, I'll know
_exactly_ what is running.

Managing secrets and environments (so the container knows when it is in
production instead of running on a developer's laptop) is important to get
this to work well.

It feels like it is halfway there already. I'm looking forward to when it is
as straightforward as git for a developer to use. I'm not too worried about
start up time - the biggest drawback to slow startup time is when you are
running very sensitive autoscaling that is tearing down and spinning up new
nodes very quickly. If you have that problem you may want to rethink your node
size, hysteresis, and scaling thresholds.

------
peterwwillis
Containers Without Docker: [https://dzone.com/articles/containers-with-out-docker](https://dzone.com/articles/containers-with-out-docker)

Dockerless: [https://mkdev.me/en/posts/dockerless-part-1-which-tools-to-replace-docker-with-and-why](https://mkdev.me/en/posts/dockerless-part-1-which-tools-to-replace-docker-with-and-why) [https://dzone.com/articles/dockerless-part-1-which-tools-to-replace-docker-wi](https://dzone.com/articles/dockerless-part-1-which-tools-to-replace-docker-wi)

What's funny is, Docker is only actually useful because of how many features
it has, all its supported platforms, all its bloat. You won't ever be totally
satisfied with any alternative because it takes so long to make something like
Docker, and someone will always need that one extra feature.

------
mschuster91
> In fact, a global system daemon should not be needed.

You will need a daemon running as root to bind to ports below 1024.
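
Worth noting as a partial counterpoint: on Linux 4.11+ the privileged-port
boundary is itself a tunable, so a per-host sysctl can remove this particular
reason for a root daemon (the file path below is illustrative):

```
# /etc/sysctl.d/99-unprivileged-ports.conf
# Let unprivileged processes bind ports >= 80 (the default boundary is 1024)
net.ipv4.ip_unprivileged_port_start = 80
```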

In addition, in many cases you want a bind mount onto your filesystem that
_also_ supports arbitrary UIDs/GIDs on files, which means you will need a
root daemon. The problem, of course, is that anyone with access to the Docker
daemon can simply say "bind host / to container /mnt" and then hijack
/etc/sudoers for a privilege escalation on the host.

It's mutually exclusive to have usable containers and a system that is secure
against privilege escalation: by the users at least, and (in the case of
Docker-in-Docker implemented by bind-mounting the Docker socket into the
container) by anyone accessing the container and achieving RCE there.

------
bokieie
We tried to use LXC directly and realized that Docker simply does it all for
us, and better.

------
tigroferoce
Besides technical considerations, the main point of Docker, to me, is its
ubiquity. Just like the physical shipping containers it takes inspiration
from, you can find Docker almost everywhere. It has become a lingua franca
for DevOps, even in enterprise environments, and it will be very difficult to
get rid of.

------
notyourday
Have you heard of this amazing thing called "a VM"? It is like docker but when
you pee at the wall it always splashes back right at you so you quickly learn
that you never want to pee at the wall.

