
Goodbye Docker and Thanks for all the Fish - Corrado
https://technodrone.blogspot.com/2019/02/goodbye-docker-and-thanks-for-all-fish.html
======
zenexer
I’m failing to see the argument here. The author suggests that the advent of
viable Docker competitors will inevitably bring about Docker’s death. Why
would that be the case? Competition is great, but it’ll be a while before
others can match Docker’s maturity and ubiquity. Even then, there’s no
guarantee that any of them will be better than Docker, never mind good enough
to warrant switching.

What’s wrong with Docker? Why would I want to switch to something else? Are
these other solutions really so superior that they warrant the significant
time investment that it would take for me to learn how to use them?

The author doesn’t actually answer any of these questions. No arguments
against Docker are made, nor are any arguments made in favor of competitors.

~~~
stingraycharles
You're missing the author's point. It's not so much that Docker as a
technology is bad; there's nothing wrong with it. The point is that Docker,
the company, cannot monetize the technology, because the moment they start
charging for their containerization technology, people will switch to an
alternative container technology, especially now that Kubernetes has won the
orchestration "wars" and made it so easy to swap out the underlying technology.

Personally, I have to agree with the author and think it's difficult to see a
bright future for the company that lives up to its $1.3B valuation.

~~~
Spivak
Docker Inc is fine. It would be silly to charge for the actual
containerization because that's not really the value of their platform.

Things Docker can/does charge for, and could do very well with:

\- Support: Businesses are happy to pay maintenance contracts for fixed LTS
versions of Docker, which is their EE product.

\- Kube: Docker has pivoted their UCP product into a turnkey on-prem
Kubernetes distribution. Plenty of room in that space to grow.

\- Registry: You wouldn't run random images from DockerHub in production,
right? Similar to Red Hat's product in this space, there's a lot of money to
be made in having officially supported images: images with a pedigree, audit
trail, CVE reporting, yada yada. Partner with Canonical, since it's way
easier to do this when you already have distro maintainers, and you have a
solid RH competitor that devs will like more.

\- Security: Audit your images that have been sitting around and not updated
in ages.

\- Hosting: They'll be one of many but there's plenty of space in providing
some ergonomics compared to Google/AWS's offerings.

~~~
morpheuskafka
Fun fact: if you use Red Hat, you have to use Docker EE. You don't even have
the option of getting CE and taking responsibility for Docker support
yourself.

~~~
justinclift
> if you use Red Hat...

"Red Hat" as in RHEL specifically, or are you meaning any of the family (eg
CentOS)?

Asking because I've not hit any problems using Docker CE with CentOS 7. Well,
aside from general bugs (etc). But nothing seems to force the use of EE
instead of CE.

~~~
morpheuskafka
Docker CE is officially supported (well, not _supported_, since it's CE, but
it gets packages, repos, an install guide, etc.) on most distros, but only EE
is packaged for RHEL. Red Hat also has their own docker package like many
other distros, so you can still get it for free easily, but you have to get
it through them and wait on them for updates.

~~~
Spivak
If you're using RHEL why wouldn't you default to using Red Hat's build of
Docker? I definitely prefer it to the official builds. Red Hat in general
seems very annoyed at Docker and carries quite a few QoL patches like being
able to add registries to search.

------
mickeyp
A shame, really, but not a huge surprise that products like Swarm fell by the
wayside. I feel it could've occupied a nice middle ground for teams that
didn't need the full capabilities of (or the overhead of supporting)
Kubernetes, even though I think K8s is an exceptional project.

I chose Swarm as a pragmatic choice in an enterprise environment that didn't
have any prior experience with Docker at all, really, at the operational
level. We had to support financial models, often many different versions of
the code at the same time, on top of the usual stack of applications to go
with that. The choice was a "success", albeit one muted by the wacky
networking layer they use. Compound that with RHEL's older kernels, and we had
to deal with oddball issues like iptables/arp table getting out of sync with
what's actually running, resulting in connectivity issues. And don't get me
started on removing and redeploying a stack; that would occasionally wedge
things so badly we had to cycle the docker daemon.

Still, a shame. The gap between "Look, I wrote a compose file" and running
something on a small cluster is tiny, and that was its main strength, even if
it did suffer from some serious heisenbugs. Why they decided to add and remove
features between versions and do their damnedest not to make a compose file
100% "forward compatible" with Swarm is another mystery.

~~~
usgroup
Swarm was a pretty big mistake. I think based on just relative resource
investment compared to Kube, it ought to have been obvious that it’d never be
relevant if it wasn’t extremely specialised.

~~~
mickeyp
That's true. However, consider this environment:

\- The ops team are in a different country, and are wedded to very old-
fashioned views of administration ("Automation? But I like manually running
commands from a runbook!")

\- You work with a team of people who are
quants/actuaries/scientists/engineers but not professional developers, but you
want them to have a turn-key environment so they can Get On With It. When they
need new python packages or god forbid upgrade pandas or something else,
there's a full CI chain that'll make sure that what they do _here_ also works
_there_.

\- Swarm is (from personal experience) easy enough to teach people who don't
know anything about Docker. You can show them how to query the state, modify
it, look at logs, etc. all without the hassle and overhead of configuring and
running K8s, even though it will always be my #1 choice for tech-literate
orgs. Swarm, for many, including myself, was a pragmatic choice -- 80% of the
immediate benefit of container orchestration with 20% of the cognitive
overhead for the chaps in another country who had to maintain it if things
went south.

~~~
usgroup
Sure, and then Google releases "kubelite" or some guy writes a convenience
script, and there goes your competitive advantage for the use case :)

Docker didn't have enough of an advantage even though Swarm shipped with it,
so de facto if you had Kube you would have had Swarm first...

~~~
mickeyp
I made a tradeoff; keep in mind, it's not always the case that the best
technology wins. Running <Technology X> is all well and good, but if you
cannot keep it running perfectly, or it results in unacceptable downtime due
to operator error, then that reflects poorly on the architect/lead in charge
of picking the tools.

I am more of a mind to make sure that I can solve the task(s) that I am given,
such as it is, with the resources available (people, knowledge, time, etc.).
That inevitably means tradeoffs. In a parallel universe I would have used K8s
instead, as I think it's exceptional and far superior to Swarm. However, with
the limited resources available, I chose Swarm, and for all its faults it's
running fine.

~~~
fatherlinnux
I agree with your pragmatism (and admire it). I would only urge you to add a
couple of tools to your toolbelt for analysis:

1\. Open source politics, aka: is the project viable?

2\. Where's the money coming from? Aka, what products/companies build
solutions off the tech?

3\. Look for growth, not survival. If a company is not growing, it is dying.

These extra three test "gates" help me select what technology I will use,
learn, and bet my career on....

~~~
mickeyp
Keep in mind, when this solution was adopted Docker were still wedded to
Swarm. Even if they stop caring about it -- as they pretty much have now -- we
have a system that works at rest. Two years on, the team(s) that handle the
production support and operations are more comfortable with Docker & co,
because of this. Not to forget, this is a very large (and risk-averse)
enterprise. You don't always get to pick whatever you like!

------
gulikoza
I never understood why "modern" tools like Docker have to provide everything:
networking, firewall, repository, you name it...

I understand somebody wanting to type "docker run xxx" and have everything
set up automatically, but if you're running anything but default networking
and actually care where the xxx image comes from, it's gonna fail miserably.
Coming from the VM world, I found it much easier to work with the macvlan
interfaces that lxd supports, for example - the container gets its own
interface and IP address, and all networking can be prepared and set up on
the host instead of some daemon thinking it knows my firewall better than me...

~~~
snidane
Yeah, it feels too monolithic. Just to show it can run Hello World in one
"docker run" command, I guess?

Another thing people use Docker for but shouldn't is application packaging.
Using Docker you build one fossilized fat package with both all OS and app
dependencies baked in. Then some day after years of using that Docker image
you need to upgrade your OS version in the image but you can't replicate the
app build because you didn't pin exact library versions and the global app
repository's (pip, npm) later version of the package is no longer compatible
with your app.

Application packaging is better done in proper packaging systems like rpm or
deb, or other proprietary ones, and stored in the organization's package
repository. Then you can install these rpm packages in your Docker images and
deploy them into the cloud.

The difference between OS dependencies and app dependencies is clear when
looking at the contents of actual Dockerfiles. OS dependencies are installed
leveraging the rpm or deb ecosystem. Apps are cobbled together using a bunch
of bash glue and remote commands to fetch dependencies. Why not use proper
packaging for both OS and apps, and then just assemble the Docker image from
that?
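
A sketch of what that might look like, with the app already built as a
versioned .deb in CI and published internally first (the repo URL and package
name here are made up):

```dockerfile
FROM ubuntu:18.04

# The app was built, versioned, and pushed to an internal apt repository
# by CI; image assembly is then just package installs, for OS and app
# dependencies alike, with the app pinned to an exact version.
RUN echo "deb [trusted=yes] https://apt.internal.example.com stable main" \
        > /etc/apt/sources.list.d/internal.list \
 && apt-get update \
 && apt-get install -y --no-install-recommends myapp=1.4.2-1 \
 && rm -rf /var/lib/apt/lists/*

CMD ["myapp"]
```

Rebuilding the image on a newer base then reuses the exact same app package
instead of re-running an unpinned pip/npm fetch.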

~~~
gerbilly
> Using Docker you build one fossilized fat package with both all OS and app
> dependencies baked in.

Exactly. Most uses of Docker are like a junk drawer: neat on the outside, a
total mess on the inside.

People stuff their python 2 app in there and forget what their dependencies
are, or where they got them from.

Good luck upgrading that 2-3 years from now.

~~~
cpuguy83
To be fair, this is how people build applications. "Oh there is this library
let me just pull that in"

------
zmmmmm
> When people understand that they can easily make the choice to swap out the
> container runtime, and the knowledge is out there and easily and readily
> available, I do not think there is any reason for us to user docker any more
> and therefore Docker as a technology and as a company will slowly vanish

How about MySQL as a counter argument? It's always been feature-weak compared
to PostgresQL, with a less business friendly license and now owned by one of
the most despised software companies by technologists. But it's probably
_still_ the default relational database people pick up. Habit, defaults and
massive installed base can go a long, long way.

~~~
imhoguy
A container engine is just a wrapper around a couple of Linux APIs, mainly
cgroups, namespaces, and iptables/BPF - most of the hard work is already done
there. A good example of how trivially it can be implemented is this:
[https://news.ycombinator.com/item?id=9925896](https://news.ycombinator.com/item?id=9925896)

Now, implementing a DBMS with even a basic SQL query language is not a
trivial job.

~~~
cpuguy83
That's funny because getting these things right (without security issues) is
quite difficult.

~~~
013a
It's very likely that most of the challenging security stuff will continue to
move down into the kernel itself. It's important to remember that the entire
concept of a container wasn't really a single unified concept in the kernel
until very recently, as containers gained popularity; instead, they were an
amalgamation of a few different capabilities in the kernel.

~~~
fatherlinnux
Perhaps you caught news of something I haven't seen, but AFAIK, "container"
is still defined in user space. Talking to Eric Biederman, that's what the
kernel team wants - people to experiment in user space, remixing kernel tech
together...

~~~
cpuguy83
There's a new-ish (from Feb) LWN article about containers as an object in the
kernel: [https://lwn.net/Articles/780364/](https://lwn.net/Articles/780364/)

Reception still doesn't seem great.

------
raesene9
It's fair to say that in the hype cycle Docker (the technology) has passed its
peak, but I don't think the conclusion that it's finished as a technology is
warranted.

Containerization is still on the rise and Docker is part of that. The toolset
is still a good, fairly easy to use, place for individual developers to use
containers on the client-side whilst creating containerized applications.

For smaller deployments (without orchestration), a single Docker engine with
things like Compose can still work well.

Obviously on the orchestration side, Kubernetes has won, although it too will
face the inevitable trough of disillusionment when people realise that all
technologies have downsides.

Personally, for simpler workloads, I think Docker swarm can still be a good
answer, as it's a lot less complex than Kubernetes to set-up and maintain.

The idea that the Red Hat container stack (podman, CRI-O, et al.) necessarily
means the end of Docker doesn't really follow at all to me.

If anything the increased use of containerd directly is a bigger threat to
Docker's market share.

~~~
wodenokoto
I think the main point to consider is, if Kubernetes is the technology for
large deployments, who is left to buy enterprise solutions from Docker the
company?

And without enterprise sales, what is going to fund docker development for the
small and simple docker workloads?

~~~
raesene9
Well Docker the company is a different game. They have gone quite heavily for
the enterprise market and support Kubernetes as part of their Docker EE
product.

Of course, whether that will give them enough success to justify their
valuation is another matter. Personally, I had thought they would get bought
out by one of the large tech players heavily investing in containerization
(e.g. Microsoft), but that doesn't seem to have happened so far.

~~~
pbalau
Amazon might have a say in this, both their ECS and EKS solutions are based on
docker.

------
tobbyb
What is wrong with Docker? This does not address any technical or other
shortcoming, and only seeks to replace one set of over-engineered tools with
another with the exact same problems. [1]

This is yet more of the ecosystem breathlessly pushing over-engineered
tooling and 'winners' without basic technical scrutiny, leaving end users
dealing with needless complexity and debt.

Containers can be useful as a lightweight, efficient alternative to VMs and
those who want containers untouched by questionable ideas should try the LXC
project on which all this was based.

Any additional layer on top of this, be it a non-standard OS environment or
image layers, should meet technical scrutiny for end-user benefit, and most
users will be surprised by the results.

[1] [https://www.flockport.com/guides/say-yes-to-
containers](https://www.flockport.com/guides/say-yes-to-containers)

------
danieldk
This post links to podman. But the podman website is completely useless,
because it does not tell me what it does differently/better than Docker, only
that

    What is Podman? Simply put: `alias docker=podman`

So, for someone who occasionally uses Docker for running services or creating
specific build environments (manylinux1): what are the benefits of podman
over Docker?

~~~
rmk2
Podman doesn't have a daemon like Docker does. It also integrates more
tightly with buildah, which the article doesn't expand on. Have a look at
this (very brief) overview to get a better idea of their relationship:
[https://github.com/containers/buildah#buildah-and-podman-
rel...](https://github.com/containers/buildah#buildah-and-podman-relationship)

Podman also uses the same notion of pods, and it _doesn't_ support docker-
compose syntax/files, because Red Hat strongly believes that Kubernetes has
already won. Basically, podman/libpod give you an easy migration path from
your local computer to a k8s cluster, with the same images and same concepts.

~~~
nirv
_> Podman also uses the same notion of pods, and it doesn't support docker-
compose syntax/files, because RedHat strongly believes that Kubernetes has
already won._

Could you expand on that, please? Almost everything I run locally with docker
(be it a self-hosted service or app development) is a docker-compose stack.
It allows me to easily manage/monitor services via the CLI or Portainer. How
do Podman and other modern tools propose to solve this case, or is the
proposal now to use K8s locally?

I got enthusiastic about Podman not having a daemon and running Podman
containers as a non-root user[1].

[1] [https://opensource.com/article/18/10/podman-more-secure-
way-...](https://opensource.com/article/18/10/podman-more-secure-way-run-
containers)

------
languagehacker
It's not clear to me that the author has a solid understanding of how Docker
makes its money. You don't make money just by giving away software, and you
also don't make money by providing support for software no one uses. Docker
has been pretty smart to build a well-known brand around supporting a specific
set of technologies -- some theirs, and some not. When it became clear they
were unable to own every part of the container ecosystem, they made smart
decisions around supporting k8s and engaging with the open container standard.

Docker's got lots of runway providing enterprise support contracts, so I'm not
worried about them.

I think the author's also not noticing that orchestration was really a bit of
a stumbling block that would eventually be removed. Sure, you've still got to
use k8s in self-hosted, GCP, and Azure, but those of us on AWS have the option
to use ECS with Fargate and have many of the core features of something like
Kubernetes fully managed.

So anyway, this post is a bit dramatic, and maybe has a few blinders on.

------
maaaats
This article really contains no good arguments, and spends many words to say
that unnamed alternatives exist.

One argument is that there are no new big features. But that is completely
normal when things become stable.

~~~
cwingrav
Agreed. I came for a rationale but saw none. There seems to be a lot of hate
for Docker that I don't understand. Is it community management? Is it some
esoteric tech concern? Can someone more knowledgeable pipe in?

~~~
pknopf
I agree.

Why do people hate _Docker, the CLI_? Anyone? I mean, it's slowly going OCI,
so no vendor lock-in.

~~~
pxtail
The HN crowd likes to gaze at FANG-like corps and treat them like gods. FANGs
use Kubernetes? Well then, for sure docker is passé :) It doesn't matter that
for some use cases swift and straightforward solutions like docker and its
ecosystem are better.

------
vbezhenar
For me, Docker's value is being able to easily run some Linux services on
Windows. They have a lot of ready-to-use recipes on their website, so it's
really easy to run e.g. WordPress with MySQL. I would spend at least an hour
manually setting up a VM with Ubuntu and installing that stuff there. I'm
also using it to run PostgreSQL for development. While technically I can do
that just from Windows, I feel safer concentrating all that stuff inside a
disposable VM, and it's easy to share onboarding scripts with colleagues. I
don't see how RHEL's tools would help me with that. Actually, I could replace
all my Docker usage with a few shell scripts, but they would have to be
written, and with Docker they are already written, many by experienced
software maintainers. Docker will die when popular software discontinues its
docker images.

~~~
fxfan
How's docker on Windows working out for you so far?

~~~
vbezhenar
No problems at all. There was a weird problem when I tried to output a binary
file to stdout inside docker and redirect it inside cmd, which resulted in
garbage. But that wasn't appropriate usage, I guess.

------
nathan_f77
Are there any container filesystems that support multiple inheritance, and
create diff layers? It would be really nice if I could build a few different
things independently, and then merge the final images together. Also if I
could only include files that have changed in a new layer, and ignore
duplicate files (even if the file was touched, or the timestamp has changed.)

Those are my biggest pain points with Docker at the moment. I have a complex
build script that uses multi-stage builds and rsync to achieve this [1], but
it's still a bit slow and inefficient. Would be nice if something supported
this out of the box.

I've worked on a lot of projects where people just reinstall (and recompile)
their entire list of dependencies (Ruby gems or NPM packages), and you have to
jump through hoops to set up a caching layer, or maybe install them into a
volume as a run-time step, instead of at build time. There should be a much
better native solution for this, instead of needing to invent your own thing
or read random blog posts.
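
For what it's worth, the standard (if partial) workaround for the reinstall
problem is layer ordering: copy only the dependency manifests before
installing, so the install layer stays cached until the lockfile changes. A
sketch (the image tag and paths are illustrative):

```dockerfile
FROM node:10-slim

WORKDIR /app

# Dependency manifests first: this layer, and the install below, stay
# cached until package.json or the lockfile actually change.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes only invalidate the layers from here down.
COPY . .
RUN npm run build
```

It still falls over whenever the lockfile changes at all, which is part of
why the rsync/multi-stage tricks exist.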

[1] [https://formapi.io/blog/posts/fast-docker-builds-for-
rails-a...](https://formapi.io/blog/posts/fast-docker-builds-for-rails-and-
webpack/)

~~~
cpuguy83
I would say check out buildkit, which is the tech behind "docker build"'s new
builder.

I don't know if the Dockerfile format is really suitable for this, but you can
now build your own format and Docker can just build it.

Basically, buildkit breaks things down into a frontend format (like
Dockerfile) and a frontend parser, which gets specified as an image at the
top of your file (`#syntax=<some image>`). The parser converts the frontend
format into an intermediate language (called LLB), and buildkit passes the
LLB to a backend worker.

This all happens behind the scenes with `DOCKER_BUILDKIT=1 docker build -t
myImage .`

Docker actually ships new Dockerfile features that aren't tied to a docker
version this way.

Actually, there are a number of new Dockerfile features that might get you
what you need; even if the format isn't all that great, at least it's
relatively natural to reason about. Think cache mounts, secrets, mounting
(not copying) images into a build stage's "RUN" directive - lots of great
stuff.

This is all officially supported stuff.
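
For example, a minimal Dockerfile using the cache-mount feature might look
like this (the `experimental` syntax tag reflects the state of things at the
time of writing; names and paths are illustrative):

```dockerfile
# syntax=docker/dockerfile:experimental
FROM python:3.7-slim

WORKDIR /app
COPY requirements.txt .

# The pip download cache lives in a cache mount: it persists between
# builds but is never baked into an image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Built with `DOCKER_BUILDKIT=1 docker build -t myimage .` as above.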

Here's a demo of "docker build" building from a buildpack spec instead of
Dockerfile: [https://github.com/tonistiigi/buildkit-
pack](https://github.com/tonistiigi/buildkit-pack)

\- buildkit:
[https://github.com/moby/buildkit](https://github.com/moby/buildkit)

\- official Docker docs: [https://docs.docker.com/develop/develop-
images/build_enhancements/](https://docs.docker.com/develop/develop-
images/build_enhancements/)

\- buildkit Dockerfile docs:
[https://github.com/moby/buildkit/blob/master/frontend/docker...](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md)

------
tnolet
This is a similar discussion to vinyl vs cassette vs CD vs streaming. All
will keep existing in some form or another. Admittedly, some will die out
completely (DCC, MiniDisc), but I'm pretty skeptical this is going to be
Docker.

Even more, the argument that Docker is dead in the water because RHEL 8 no
longer has a yum repo for it is a bit far-fetched. According to Wikipedia,
RHEL is a fairly small % of the server market compared to Ubuntu, Debian, and
Windows for that matter.
[https://en.m.wikipedia.org/wiki/Usage_share_of_operating_sys...](https://en.m.wikipedia.org/wiki/Usage_share_of_operating_systems)

~~~
jgillich
RHEL also means Fedora and CentOS; the latter is extremely popular for
large-scale deployments. That article covers web servers only; I think the
real market share of RHEL and CentOS is more like 30-40%.

Podman just makes more sense because it doesn't require a big fat daemon. You
launch containers like any other service: with systemd. And you can run podman
without root permissions as well, which is a huge win for security.
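
A rough sketch of what "containers as ordinary systemd services" looks like
as a unit file (the unit, container, and image names are illustrative; newer
podman releases can generate more robust units for you via `podman generate
systemd`):

```ini
# /etc/systemd/system/myapp.service -- run a container like any service
[Unit]
Description=myapp via podman
After=network.target

[Service]
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/nginx:alpine
ExecStop=/usr/bin/podman stop -t 10 myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now myapp` - no container daemon involved.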

~~~
tnolet
Well, having worked quite a bit with RHEL and CentOS in the past, I found
their day-to-day usage quite different. Maybe that has changed.

More importantly, I only saw RHEL at traditional Fortune 500 companies. Not
sure what their market share in FANG-type companies is. Probably negligible.
F500 is still big, of course.

~~~
user5994461
FANG are making their own OSes at this point, so they're not really relevant.

There are two distribution families left in Linux: Red Hat derivatives and
Debian derivatives. In terms of install base, it's maybe 1/3 and 2/3. In
terms of money, expect the other way around, because it's the F500 who pay
the most for software, and they are on Red Hat.

Red Hat is actively trying to kill docker (along with Google and Amazon). Red
Hat removing the "docker" CLI and replacing it with its own tools is a major
step toward that. Docker will be de facto dead in the enterprise as soon as
it stops being supported by Red Hat.

~~~
kcmastrpc
Unless these orgs can replace it with something as feature complete and stable
as Docker I suspect their customers will ultimately have the last word.

~~~
user5994461
I will pass on the joke to call docker either stable or feature complete.

There is nothing that prevents Red Hat from shipping an alias to their own
tools by default, just like java => openjdk.

It's not the responsibility of RHEL to maintain or support third-party
software. In case you didn't know, docker stopped shipping with Debian years
ago.

~~~
kcmastrpc
Nothing is ever feature complete or stable, I know. All software is shit,
right?

The day RHEL customers stop using the docker runtime is when RHEL will stop
supporting it - until then, they'll support it. Case in point: java ->
openjdk.

Anyhow, this article is just clickbait, and it's been done at least twice a
year since Docker's inception. I'm disappointed that this community finds
shitposts like this more compelling than the NSA open-sourcing a decompiler,
but I digress.

Have fun in your bubble.

------
rcdmd
Nope. Docker's value isn't just its software. It's the support built around
it, tutorials, familiarity and common usage, Dockerfiles, huge Docker Hub,
existing setups relying on it and so on. Articles like this tend to overlook
the value of entrenched technology that works well enough.

~~~
user5994461
Funny enough, Kubernetes ignores pretty much all of that in favor of its own
ecosystem, making experience in running Docker alone mostly worthless.

------
MindTooth
Based on this, I've spent most of today trying to make minikube work on
macOS. But as I'm using DNSCrypt-Proxy, I had major issues making it work
without manual steps.

So far, Docker for Mac is a solution that just works.

If someone has experience with this, I'd gladly like to know how you make it
work flawlessly.

------
pknopf
The article mentions that Kubernetes uses containerd and that the OCI is the
future, but fails to mention that containerd was developed by Docker, which
supports OCI, and that the OCI is largely supported by the Docker company.

As of the latest versions of Docker, dockerd is now using containerd under
the hood.

I'm not sure how the Docker CLI will exactly die. The post seems to focus on
the CLI only, and even calls out that "the viability of the company Docker is
outside the scope of the post", but it fails to mention my previous points.

~~~
scaryclam
Containerd is a project within the Cloud Native Computing Foundation, which in
turn is part of the Linux Foundation. Docker haven't been directly involved
since 2015, and even then, it's arguable that they've never been (you may be
thinking of runc though, which was donated in 2015).

It's true that Docker uses containerd under the hood, but that's actually
part of what the author is arguing. Docker as a technology is a wrapper
platform around core industry technologies that they neither own nor control.
That means they have to compete as a tooling company, and they have already
lost ground there. The more things like Kubernetes and podman join the
market, the less required Docker becomes, which means they're going to be
more and more at risk of failing.

[https://en.wikipedia.org/wiki/Linux_Foundation#Containerd](https://en.wikipedia.org/wiki/Linux_Foundation#Containerd)

edit: added link for reference

~~~
pknopf
> Docker haven't been directly involved since 2015, and even then, it's
> arguable that they've never been

A large majority of even the recent commits to containerd are made by Docker
employees.

[https://github.com/containerd/containerd/commits/master](https://github.com/containerd/containerd/commits/master)

> It's true that Docker uses containerd under to hood, but that's actually
> part of what the author is arguing. Docker as a technology is a wrapper
> platform around core industry technologies that they neither own or control.

I cede your point, but it's irrelevant and isn't what the author is implying
(even directly).

From the article: "I do not think there is any reason for us to user docker
any more and therefore Docker as a technology and as a company will slowly
vanish."

The end users do not care how cgroups are set up or mount points are built.
The guts may be standardized, but the docker toolchain (Dockerfile, docker-
compose, docker run) will continue to exist. The "runtime" is irrelevant, and
there just isn't a competitor in the "toolchain" arena.

Docker Swarm is the only thing that will vanish.

> The more things like Kubernetes and podman join the market, the less
> required Docker becomes, which means they're going to be more and more at
> risk of failing.

Kubernetes is an entirely different use-case. Nobody is arguing Docker Swarm
will beat it.

You could say the Docker CLI isn't required with the advent of other tools,
but those are incredibly big shoes to fill. Think of all that entails
(Dockerfile, docker-compose, the CLI, cross-platform(-ish) support for
Windows/OSX).

Also, competition leads to better tooling. Did anyone ever say "Unix isn't
required any more, because we have Linux"?

The tooling of the Docker CLI is in a very good spot _as-is_. The guts are
being opened, which I think would relieve the pressure some may feel to jump
from Docker.

------
polskibus
Honest question from someone who has only used docker on several occasions to
deploy 3rd-party software but wants to invest in containerization. Is it
worth going deep into docker right now? Is the knowledge transferable to
kubernetes and whatever else is going to replace it? I want easy deployment
that properly deploys containers and their dependencies, but many of my
scenarios don't need redundancy, scalability, etc. My use cases are in-house,
so managed solutions won't be considered. On the other hand, I don't want to
have much worse operational complexity than with docker (and that's what I
hear is the biggest drawback of running local kubernetes). Also, many open
source solutions provide dockerfiles, not caring about other containerization
solutions. Please help; I'd rather avoid investing in an already obsolete
solution if possible.

~~~
raesene9
Docker the technology isn't obsolete now and I very much doubt it will be in
the future.

As you've noticed, there are loads of solutions that make use of the
Dockerfile format and store images on Docker Hub.

I'd expect that there will be future developments, but that they'll be
backwardly compatible with existing deployments.

Docker Engine (the daemon running on a single host) and Kubernetes (the
orchestration system) largely serve different needs.

Kubernetes is good for large-scale deployments where its additional
complexity is warranted.

For running groups of containers on a single host (or small number of hosts)
Docker does just fine.

------
gfiorav
So many times we have thought that a technology would prevail or disappear
based on its validity or adequacy, and so many times we have seen the most
popular one survive.

JavaScript comes to mind. If it’s widespread and easy to learn, it might not
matter how much technically better other alternatives might be.

We’ll see

------
kccqzy
Well, I can see Kubernetes becoming the dominant container orchestrator, but what
about use cases for Docker that simply don't need any orchestration? I mean
those containers that are run on a single machine, perhaps manually, for
purposes other than deployment.

------
thruhiker
I've always disliked articles that use this tone. It's meant to be informal,
but comes off as lazy and spoken in muted tones as if the author were in a
pub; Docker sitting in a booth a few meters away blithely sipping on a pint of
bitter. The substance isn't much more enlightening than typical bar banter
either.

Docker played a pivotal role in creating our modern understanding of
containers and we should be thankful for that. I don't understand the value
this article provided beyond pointing out a few container runtimes I hadn't
heard of.

------
ofrzeta
I don't get why more people aren't using Swarm. It's simple, and it gets the
job done for many people who don't need a full Kubernetes setup. Also, for
instance, it encrypts secrets, as opposed to Kubernetes, which only does Base64
encoding of secrets.

I do understand the market dynamic but still there are niches for other
simpler products.
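To make the Base64 point concrete, here is a quick shell sketch (plain coreutils `base64`, not Kubernetes itself, but it is the same keyless transformation a default Secret value goes through):

```shell
# Base64 is an encoding, not encryption: no key is involved, so anyone
# who can read the stored value can recover the plaintext.
secret="s3cr3t-password"

encoded=$(printf '%s' "$secret" | base64)
echo "$encoded"

# Reversing it needs nothing but the same tool:
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"    # prints: s3cr3t-password
```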

~~~
dankohn1
Kubernetes supports encryption at rest of secrets:
[https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)

They're encrypted in transit via mTLS.

------
btbuildem
Since the author doesn't name any of the alternatives:

[https://containerjournal.com/2019/01/22/5-container-alternatives-to-docker/](https://containerjournal.com/2019/01/22/5-container-alternatives-to-docker/)

------
WestCoastJustin
The death of Docker as a technology is laughable. Every cloud provider is
racing to add container orchestration features, and they all use Docker under
the hood [1, 2, 3, 4]. Cloud providers are dumping hundreds of millions into
this ecosystem through training, credits, new features, etc. I have not seen
anyone step off the gas. They are all seeing massive growth in container
adoption. My guess is this is only the start. Go look at any DevOps/SRE job
posting and you will see Docker, Containers, Kubernetes. This post is wrong
when it comes to the technology going away anytime soon.

Container orchestration is light years better than anything that we were doing
on the operations side before. This is why I find it extremely hard to believe
we are going back. We would need something much better to jump to and that has
not been invented yet. So, Orchestration/Docker/Containers are here to stay
for the foreseeable future.

The rkt image format tried early on to subvert the Docker Image format and was
unsuccessful. Today, there is much more depending on the existing image
standard. You would totally fragment the existing integrations if you
attempted to change it. I am not saying that people will not try, but I just
cannot see it happening anytime soon. You have all these Clouds betting on it,
you have millions of Dockerfiles out there now, and tons of people trained on
it. All these CI/CD platforms supporting it. You would need some massive
reason to change. Why would they? You might see people under the hood take the
Docker Image and wrap it somehow, like gvisor [5], but they will still likely
accept the Docker Image.

Go look at these charts from Datadog about their customers' Docker adoption
[6]. Every single chart is hockey stick growth. This is a sampling of 10,000
companies running 700 million containers in real-world use. I am just offering
a counter to the point that the image format is coming to its death. I do not
see that.

Disclosure: Former Docker employee.

[1] [https://aws.amazon.com/ecs/](https://aws.amazon.com/ecs/)

[2] [https://aws.amazon.com/eks/](https://aws.amazon.com/eks/)

[3] [https://cloud.google.com/kubernetes-engine/](https://cloud.google.com/kubernetes-engine/)

[4] [https://docs.microsoft.com/en-us/azure/aks/](https://docs.microsoft.com/en-us/azure/aks/)

[5] [https://github.com/google/gvisor](https://github.com/google/gvisor)

[6] [https://www.datadoghq.com/docker-adoption/](https://www.datadoghq.com/docker-adoption/)

~~~
nebulous1
He isn't saying containers are dying; he's quite clear on that.

~~~
WestCoastJustin
> upcoming death of Docker as a company (and also perhaps as a technology) ...

The first line says differently.

~~~
dijit
Docker isn’t all container technology. It wasn’t the first, isn’t the best and
is not synonymous with the concept.

------
raverbashing
As much as docker works, I can't help feeling it is a big kludge.

CLI usability is close to zero. One (temp) image per line in the Dockerfile
might have some practical uses, but most of the time it doesn't.

Yes it seems their days are numbered.

~~~
pknopf
> CLI usability is close to zero

What?

> One (temp) image per line in the Dockerfile might have some practical uses,
> but most of the time it doesn't.

I completely disagree. Cached builds are one reason; being able to view the
intermediate layers is a huge benefit. Why _don't_ you like it? You
could always build with "--no-cache", which all CI should be using anyway.

~~~
raverbashing
> What?

Yes.

It's all over the place.
[https://docs.docker.com/engine/reference/commandline/cli/](https://docs.docker.com/engine/reference/commandline/cli/)

It has no obvious separation of commands dealing with
images/containers/running containers. Sometimes you refer to them by tags,
sometimes by IDs.

It's not discoverable. It's not intuitive.

I can't know what 'docker rm' does without reading the docs. Oh, and there's
'docker rmi'; is that for images?

Docker build has this nice example here:

'--tag, -t    Name and optionally a tag in the ‘name:tag’ format'

Ok, so you set a name with --tag?

Compare with 'kubectl get pods'

> Cached builds being one reason, viewing the intermediate layers is just a
> huge benefit.

Sure, as I said, it is useful, but you do not want to do this most of the
time.

> You could always build with "--no-cache", which all CI should be using
> anyway.

Bad defaults are a UX problem; thanks for furthering my point.

~~~
declnz
>It's non discoverable. It's non intuitive.

> I can't know what 'docker rm' does without reading the docs. Oh and there is
> 'docker rmi' is that for images?

I think you're using the old CLI. A couple of years ago, they added new syntax
and "encouraged" its use by default [1] (though IME people still use the
original).

This is definitely more discoverable and logical; the syntax would be `docker
image rm` [2] or `docker container rm` [3], etc

[1] [https://blog.docker.com/2017/01/whats-new-in-docker-1-13/](https://blog.docker.com/2017/01/whats-new-in-docker-1-13/)

[2] [https://docs.docker.com/engine/reference/commandline/image_rm/](https://docs.docker.com/engine/reference/commandline/image_rm/)

[3] [https://docs.docker.com/engine/reference/commandline/container_rm/](https://docs.docker.com/engine/reference/commandline/container_rm/)
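For anyone still on the old syntax, a rough side-by-side of the two forms (a sketch from memory; both spellings still work in current Docker releases):

```shell
# Old top-level form             New management-command form (Docker 1.13+)
# docker ps                  ->  docker container ls
# docker rm  <container>     ->  docker container rm <container>
# docker rmi <image>         ->  docker image rm <image>
# docker images              ->  docker image ls
# docker build -t name:tag . ->  docker image build -t name:tag .
```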

~~~
raverbashing
Interesting. I used it last year, but I'd say it could have been an older
version.

Thanks for pointing this out, and I'm glad they realized the issue as well!

------
peterwwillis
This is sad, because what made Docker useful was a full feature set.

We could already build chroot environments. We could already keep remote
images. We could already set up app-specific networks. Having the tools wasn't
what we needed: it was the confluence of all the features.

If the alternative to Docker becomes learning 50 new tools, we'll have failed
as an industry and shouldn't call ourselves engineers. You don't replace a
car with a wheelbarrow-lawnmower-handheld-radio-portable-fan. This
obsession with churning out a new tool every year has to stop.

------
wwarner
I basically agree with the technological argument made in the post.
Orchestration plays the role of the OS, where resources are defined, managed,
and protected. Containers are something analogous to device drivers, presenting
the underlying resource as narrowly and efficiently as possible.

But that's the life I expect, not the one I live today. In the present day I'm
much more reliant on Docker than on Kubernetes, and I cannot live without
being able to freely mix and match Go, Python, or Java, really old and really
new, in any host configuration I want.

------
CWuestefeld
_All 3 big cloud providers, now have a managed Kubernetes solution that they
offer to their customers (and as a result will eventually sunset their own
home-made solutions that they built over the years - because there can be only
one)._

I haven't heard anything about this from AWS. Certainly they've introduced
EKS, but I haven't heard anything about setting their ECS orchestration out to
pasture. Does anybody have information about this?

------
miguelmota
Yeah, there are alternative Docker runtimes, and Kubernetes won the orchestration
war, but Docker images are here to stay. As far as I know there aren't better
alternatives to Docker images, and even Kubernetes uses Docker images. Docker
as a company currently has a Red Hat-like business model, and that's certainly
hard to sustain, so I agree on that front.

------
nisa
I'm using Swarm on a small cluster (3-5 machines). Is there any reliable
Kubernetes alternative for that? Portainer as a Swarm UI was buggy for me; a
usable and stable UI would be nice, but I'm not sure if I should do the full
Kubernetes dance :/

Easy to debug and lightweight would also be great. I haven't really found
anything good.

~~~
GordonS
I'd be really interested to know more about your setup and processes if you're
running Swarm in production? For example, how do you handle rolling updates?

~~~
nisa
We are not a SaaS company; we just run some simple nodejs-based services on
Swarm, and my goal is to use Swarm as a testing hub for Docker - mostly the CI
should push something there for the devs. So uptime does not matter. However,
Swarm crashed badly on me a few times, and at the moment everything important
is configured outside it on your typical LTS Linux distribution, so I'm
hesitant to move our basic services to it. Basically, 9-5 it has to work, and I
can just run apt-get update && reboot earlier or later. I'm really not sure if
I should move all the stuff into Swarm / k8s - also because the bus factor
would be one, and there is existing experience with the old Linux way.

For everything that resembles a microservice/service -> Docker; everything
else (ldap, mail) -> oldschool. So far so good.

I'm looking at k3s or minikube with Rancher as the UI - maybe that's better. Or
we just ignore the hype and run docker-compose.yml files by hand, but that's
also pretty shitty.

------
std_throwawayay
Providing base technology doesn't seem to lead to financial success. You have
to provide more concrete solutions for companies and people instead of the
abstract infrastructure. I wonder if protection of the base technology through
software patents could have averted this looming financial disaster.

~~~
mixmastamyk
Prior art existed for a decade.

------
rleigh
I do like the fact that Docker now works for non-Linux cases, like Docker on
Windows with Windows containers. It would be nice if they would also support
macOS containers, FreeBSD, etc. That would certainly entrench them as a
"universal" container technology.

------
codegaucho
Who is providing a container registry that can replace the offering from
Docker (the company)? That seems to be the big contribution of Docker the
company these days: a huge library of container images.

~~~
itomato
..that have stagnated and are now riddled with vulnerable components.

~~~
zapita
Really? The official images have been quite solid and up-to-date in my
experience. Has that changed since last year?

------
redsavagefiero
Still using LXC where it makes sense and never felt the call of Docker...or
Kubernetes or any of this developer popularized opaque tooling over
sophisticated meddling with my systems.

------
Aeolun
Yeah, no. Docker is still here and is finally being embraced by larger
enterprises (if they haven’t already).

I don’t see it being switched out for any unproven technology any time soon.

------
leowoo91
It's just been normalized like any other technology; that doesn't mean the
company isn't ready for it, but that's up to how much they've been partying since.

------
sheeshkebab
Docker should just merge with Npmjs. Then they might get big enough to get
bought up by Microsoft, Oracle or IBM (which I think is their end game
anyway).

------
ilaksh
What do you replace Docker Hub with?

------
EliRivers
_Docker was the company that changed the world_

I think I've seen one person use docker once, back in about 2013, to create
and freeze a very particular Latex setup. Everybody sees a tiny slice of the
world, but to each individual, that slice looks like all of it.

~~~
itomato
How many people have you seen printing money?

------
hartator
The main thing is that Docker has been used as a way to distribute binaries
with all dependencies built in.

It bypasses all OS packaging systems. It's a hack on top of our broken
packaging models. It does work, but it is extremely fat and not very elegant.

------
Rafuino
Are Kata and Clear containers actually a Docker replacement?

~~~
raesene2
Not really. They can be used to replace one element of the Docker stack: runc.

------
windexh8er
The death of Docker could potentially happen, but not for the reasons the
author lays out. The last argument in the article is interesting: the author
implies that because RHEL dropped Docker, it's game over. But if we want to
analyze this, it isn't because Docker lost a technology battle; it's because
RHEL is protecting its ecosystem. The article is correct in making the case
that there is choice. However, the author doesn't put two and two together with
regard to Red Hat. Just because you can alias podman to docker doesn't really
mean anything. Someone seems to have dipped into the Docker-is-a-bad-word
non-reality pushed by Dan Walsh of Red Hat. Also keep in mind that containerd
was contributed by Docker and is the graduated container runtime in the CNCF.
You definitely don't need Docker to run containers, but it's easy to use Docker
to run and build. And Docker is still a one-line installation on all Linux
platforms ([https://get.docker.com](https://get.docker.com)).
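The alias mentioned above really is the whole story on the CLI surface (a sketch; it assumes podman is installed and that your workflow only uses the docker subcommands podman implements):

```shell
# podman's CLI deliberately mirrors docker's, so on RHEL-family systems
# the usual "migration" is a one-line alias:
alias docker=podman

# Familiar invocations are then routed to podman, e.g.:
#   docker run --rm -it alpine sh
#   docker build -t myapp:latest .
```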

Next is Docker Enterprise. Can they sell it? This is where Docker is making
mistake after mistake IMO. Docker does have a leg up in Windows. But does it
matter? Today Swarm is your only orchestration option, and while I understand
k8s support is coming, it remains to be seen if GMSA is included in the first
release (doubtful). If you're a Windows developer deep in the Microsoft
ecosystem, my guess is you won't be able to productively use Windows Server
with k8s this year, given the timelines.

Regardless, Docker has done a poor job capitalizing on Windows. Maybe it's
that people just don't need it as much as Docker would have you believe? Or
maybe it's too jarring a workflow change. Either way, it isn't working.

Next is the products you get for the money: Docker Universal Control Plane
(UCP), Docker Trusted Registry (DTR) and Docker Enterprise Engine (EE).

They're all OK. But the sell is, again, tough. UCP is basically a dashboard
for centralizing RBAC for Swarm and UCP in the enterprise, and don't get me
wrong - the enterprise needs this. But UCP feels lacking. The UI is
distracting and has a lot of oddities. Managing it is a decent amount of work.
It's just OK. Also Docker UCP still doesn't support k8s PaaS while competitors
do. Remember that Docker hired Kal De away from VMW as CTO. In his time at
Docker not much has changed externally, but Docker needs to move faster. Much
faster.

DTR is interesting because everyone who's doing containers in the enterprise
needs an image repository with, again, those enterprise requirements. And DTR
delivers this, but Docker doesn't want to sell you DTR only. You can't buy DTR
by itself like you can its competitors. Docker wants you to use only their
bits, and it's dumb, because Docker claims choice as a core pillar and then
tries to lock you into UCP+DTR+EE. Oh, I see, Steve... choice as in what DOCKER
defines for me as choice. Choice of bare metal or hypervisor, but not of how I
manage containers and orchestration? Back to DTR: it's decent, but again there
are other choices, and a lot of them are better standalone offerings.

Finally EE. There are about three reasons you really need EE. The first is
enterprise support and long term releases. The second is runtime enforcement
of signed images at the engine level. The third is oddities like FIPS support.
But that's it. And, again, Docker doesn't want you to buy just Engine. They
want to sell you the 'platform'.

At the end of the day, Docker has good products. They're not exceptional when
taken in the context of other enterprise software. They're just OK. Docker
doesn't realize this yet from what I've gathered. They have recently changed
pricing to very VMW-esque CPU count pricing. And Docker doesn't have the
market capital to make that leap, unfortunately. At the end of the day my
opinion is that Docker is marred by Docker. Not by the competitive landscape.
Yes, that contributes to Docker overall but Docker doesn't seem to make sure
they have the best product above everything else. Docker's CEO, Steve Singh,
is the wrong guy for the job. He constantly states they will be cash-flow
positive by the end of this fiscal year, yet doesn't seem to care how good or
bad the product actually is, or why people wouldn't buy Docker over something else.

So yeah... Docker has done great things. Those great things are all of the
contributions Docker has made to the community and continues to do so. But
Docker on the enterprise side feels weak, lackluster and very confused. Docker
is marred by its own management and executive ranks from within. I'll give them
two years, make or break, and as it stands today I think it's a coin flip. The
acquisition of RedHat by IBM was a good thing for Docker, but may be the nail
in the coffin for RHEL long term. If Docker does flounder any longer look for
them to be acquired by Microsoft. And then... Docker will be set to sail off
into the ether.

------
fxfan
I have an honest question- is docker even useful for most projects or is it
just a preparation for solving imaginary scaling issues that most won't even
reach?

I ask because I'm wondering if I should care about docker in the beginning of
my project which will have very few concurrent users/requests and would run
fine on a single machine.

EDIT: Thank you to all the kind answers!

~~~
jaabe
It depends on what you deploy to. If it’s AWS or Azure web-app type stuff,
then maybe not, but if it’s your own infrastructure in anyway then fuck yes.

Ten years ago we built things in C#, used MSSQL and deployed to IIS. All
Microsoft tech, all pretty straightforward, except it wasn’t. We never kept
track of the “it works in dev, but it explodes in prod and I don’t know why”
hours, but I wish we had, because they’re in the thousands, and those thousands
of hours are exactly why we use Docker.

We also use docker because it lets us build things that our IT crew isn’t
certified in running infrastructure for (and the lovely security issues that
brings to the table) but we mainly do it because it works.

Docker might not have a monopoly on that, but they have enough of a brand that
the word “docker” is to containers what “google” is to search. At least in my
circles.

~~~
bsder
> We also use docker because it lets us build things that our IT crew isn’t
> certified in running infrastructure for (and the lovely security issues that
> brings to the table) but we mainly do it because it works.

It is amazing how many technologies get traction simply because "It lets us
bypass IT."

~~~
jaabe
In our case it’s the compromise between developers and an operations department
that has 5 technicians to support the infrastructure of a municipality with
7,000 employees and around 300 IT systems.

To manage that, our IT had to make certain infrastructure decisions and build
their competency around those. This clashes with a lot of modern development,
but we manage with Docker and an increasing Azure presence. It’s not optimal,
but sometimes it’s just necessary. Don’t get me wrong, we’re trying to improve
and build devops that isn’t just handing off a container, but it’s a challenge
in sectors where digitisation and IT aren’t priorities despite being an
inherent part of any business process in 2019.

There is a real danger in there of course, but we have strategic choices for
our development platforms as well. They just need to move a little faster than
IT.

------
oftenwrong
s/Goodbye/So Long/

------
yuriko
Dupe of
[https://news.ycombinator.com/item?id=19350571](https://news.ycombinator.com/item?id=19350571)

~~~
tomhoward
It's not a dupe if the earlier submission(s) got few/no comments/upvotes.

------
auslander
> .. All the cool kids on the block are no longer using docker as the
> underlying runtime.

Cool kids are not using containers at all, they use cloud VMs and autoscaling
:) Simple is not easy (tm)

------
sbhn
If you are a coder and you want to implement auto-devops yourself, for free,
then see this video:
[https://youtu.be/Qlj6NiOy5jM](https://youtu.be/Qlj6NiOy5jM) You can do it too;
just remember that the professionals will shout at you for using this method,
since it makes them obsolete.

