
Why doesn’t anyone weep for Docker? - jumpingdeeps
https://www.techrepublic.com/article/why-doesnt-anyone-weep-for-docker/
======
013a
There's this prevalent false notion that Kubernetes is successful because of
Google.

Yeah, Kubernetes initially learned a ton from Borg and Google's deep
investment in containers dating back a very long time. But, arguably,
Kubernetes is successful because Google let it go. It's a true open source
project, with governance by a wide range of industry participants, under
the Linux Foundation.

By comparison, Docker is a VC-backed profit-minded startup. Of course it was
going to lose this race, for the same reason Windows isn't the dominant OS in
the cloud.

Fundamentally: You can't build a hyperscale startup based on a technology. It
doesn't appear to work anymore. The best case is the Docker/Kubernetes or
Oracle/Postgres/MySQL case: someone else does it, maybe better, open sources
it, a community forms around it, and you're toast. The worst case is the
MongoDB/AWS or Elastic/AWS case: a cloud provider copies you, probably does it
worse, but it's cheaper and more integrated with the cloud, so they still win.

Docker was doomed. They could have been a very nice business, but the problem
with taking on huge valuations and capital and scaling like mad is finding out
you have no ground underneath your feet to support that valuation.

~~~
jjtheblunt
Parenthetically, Linux being an open source reincarnation of Solaris seems
also an example, no?

~~~
Someone
It would surprise me if Linus even knew about Solaris when he worked on the
first release of Linux.

Solaris’ first release was in June 1992, with first use of the name in
marketing materials in September 1991
([https://en.wikipedia.org/wiki/Solaris_(operating_system)#His...](https://en.wikipedia.org/wiki/Solaris_\(operating_system\)#History))

Linus’ famous message was from the same time (September 17, 1991)

Calling Linux an open source version of Minix is more appropriate, but it
still wouldn’t be a good example of this.

Minix isn’t dead. It moved to a BSD license, and is deployed in hundreds of
millions of Intel CPUs
([https://en.wikipedia.org/wiki/Intel_Management_Engine#Hardwa...](https://en.wikipedia.org/wiki/Intel_Management_Engine#Hardware))

~~~
mumblemumble
I think that, once you're going that far back in history, it's pretty critical
to keep track of the GNU/Linux distinction. Linus just wrote a kernel. And
then the GNU userland, which had already been in development since the mid
80s, but was still somewhat lacking a workable kernel, was adopted as the
official userland to use with the Linux kernel.

And at roughly the same time, IIRC, Sun decided to migrate their Unix from a
BSD flavor to a SysV flavor, which came to be called Solaris, and they also
used some GNU bits. That might explain some similarities between the two.

~~~
AkshatM
Is this Richard Stallman? ;)

~~~
iforgotpassword
No, he wouldn't have called it GNU/Linux:
[https://www.sudosatirical.com/articles/richard-stallman-inte...](https://www.sudosatirical.com/articles/richard-stallman-interjects-local-mans-funeral/)

------
joshpadnick
This article seems to be arguing that Docker’s primary downfall was being
hostile to its open source community. Without having an opinion on whether
that’s true, I suspect the core issue was not that but their business model
and execution.

Before Kubernetes was the dominant container tech, they were pushing Swarm but
I remember being confused about where Docker “standalone” stopped and where
Swarm began. Perhaps it would have been better as a separate tool with a
clearer open core model?

Then there was Docker Hub, whose UI was never great and which always seemed
light on features.

I don’t recall seeing any kind of container introspection tool from them for a
while either, despite others coming out.

Meanwhile, they represented a threat to the cloud providers if you could truly
run anything in a container on any cloud. But the cloud providers all
neutralized that threat by the classic “commoditizing the complement” strategy
where the Docker cluster and registry tech were all either open source or
commoditized.

Once Kubernetes emerged as the winner and de-valued Swarm while the cloud
providers all offered their own Kubernetes and Docker registry offerings, I’m
not sure how much more profit there was for Docker to claim.

Honestly, startups are hard. Sometimes really hard. It’s hard to know if a
different team would have gotten different results in this space.

~~~
brutus1213
Totally agree with your first paragraph. Regarding execution, I'm not sure
what Docker could have done differently that did not lead to the outcome we
have today. I don't think that playing nice with other opensource devs would
have made a difference (as the article claims).

Also, the claim that Kubernetes was hardened at Google is BS. The ideas,
maybe. But I am quite skeptical about how much of Google's internal production
code went into early Kubernetes (please don't point at the Borg paper; I'm
talking about actual working code). I recall doing a deep comparison of Swarm
vs Kubernetes circa 2015, and Swarm was clearly superior in both design and
implementation. Today, Kubernetes is better and has an ecosystem. Maybe the
issue isn't that core Docker containers played nice with open source; rather,
Swarm should have focused much more on playing well with others.

One point of contrast is HashiCorp: they are in the workload orchestration and
management space and seem to be doing really well. Kudos to them!

~~~
smarterclayton
Kubernetes (from open sourcing to about 1.3 or 1.4) is a second system mostly
written by senior engineers (from several companies) with deep experience in
the problem domain and strong architectural guidance, and a willingness to
stop at “just good enough” and then let stuff mature. Kube was mostly “done”
from a design perspective in early 2015.

Swarm was 2-3 people in the early days, without as much strong opinionation
about what exactly they were building, which meant while it was a tighter,
simpler system, it couldn’t evolve as easily.

I’m obviously biased - I was the first non googler to have commit on the repo.
But it’s much easier to build something when you know upfront exactly what it
looks like and you have a set of committed and experienced engineers with good
leadership.

~~~
SPascareli13
> I’m obviously biased - I was the first non googler to have commit on the
> repo.

How well was your PR received?

~~~
forgot-my-pw
Looks to be successful in general:
[https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+aut...](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+author%3Asmarterclayton)

------
alias_neo
My experience agrees with this.

I'm a huge fan of Docker, I've actively taken part since the early days,
attending meetups and using it actively day to day.

Unfortunately, when I brought several issues to GitHub, or +1'd other people's
issues that were affecting usability within our company, the attitude was very
much "f* you and your problems", because Docker wants things to be one way and
that's how it'll be.

There are issues raised 4+ years ago that are still open, for solutions to
problems that would have removed our need for something like K8s (which
doesn't work for our requirements anyway).

I believe Docker locking the community out of valuable features also did harm,
and (possibly) those features failed to be the moneymaker they'd hoped for.

After so long, I no longer go to Docker for problems that could be solved in
Docker (secrets, anyone? without the "hacks"), and instead look towards the
other tools solving those problems.

I'll continue to use Docker, but I don't consider it a friend.

~~~
pacala
I wonder how much of that attitude was caused by an overstretched team with no
effective scaling mechanism in place. No, "open source" does not automatically
mean "scalable team". Part of Kubernetes success is its ability to scale up
the community, empowering multiple entities to meaningfully contribute.

~~~
0xDEFC0DE
At the same time, I gather it's really, really hard to write software that
satisfies a lot of use cases across a lot of businesses without having to be
somewhat opinionated.

~~~
pacala
True! Another part of Kubernetes' success is the fact that its core
architecture is sound, scalable for both workloads and features.

------
paule89
The problem I have with Kubernetes is the following: as a small developer and
small server owner, I don't have the resources to even get started. The first
thing I see with Kubernetes is a cluster. Why a cluster? Do I need to cluster
my Raspberry Pis to get something out of it? Do I need to buy 3 servers just
to run 5 containers?

With Docker it's easy. Download Docker. Start container. Install a container
manager like platformio. Done.
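The contrast being drawn here can be sketched in commands (the image name and single-node tools below are just placeholders, not a recommendation):

```shell
# Single-machine Docker: install it, then one command per service.
docker run -d --name web -p 8080:80 --restart unless-stopped nginx:alpine

# Kubernetes wants a cluster (even a single-node one) before anything runs:
#   minikube start          # or: k3s server &
#   kubectl run web --image=nginx:alpine --port=80
```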

But the article rings true. Docker seems very hostile towards the community
and towards earning revenue. If I think about Kubernetes and revenue, I think
of IBM and Red Hat. And I am too cheap a person and too small a customer to
ever need those guys. So I will keep using Docker.

And because I know more about Docker, I will probably try to use it at work as
well. Easy as that.

But I am open to suggestions.

~~~
moksly
> as a small developer

Do you need kubernetes?

I know the hype cycle is mad for copying big tech, but if stackoverflow can
operate on a couple of IIS instances I’d argue that you almost never need
kubernetes.

~~~
raxxorrax
For me containerization was always about deterministic environments and ease
of deployment instead of performance and clustering. But even with these
advantages I am currently not using any solution for that.

For cloud services this is probably a good idea, even for users to a degree if
the provider doesn't already give you a fitting box.

But otherwise it is not a must have in my opinion. Maybe that is a mistake and
the apps I develop today are not going to work in 10 years. Well, worst case:
I have to be paid again.

~~~
sergiosgc
> For me containerization was always about deterministic environments and ease
> of deployment instead of performance and clustering. But even with these
> advantages I am currently not using any solution for that.

You can get 99% of the way using a stable distribution and a configuration
management system (ansible, chef and the like). It's much much simpler than
running an orchestration service. I feel most people don't need containers and
orchestration, just config management running redundant system designs.
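A minimal sketch of that approach, assuming Ansible; the host group, package names, and paths below are hypothetical:

```yaml
# site.yml - pin the app's environment with plain config management
- hosts: appservers
  become: true
  tasks:
    - name: Install a fixed set of packages from the stable distro
      apt:
        name: [nginx, postgresql-12]
        state: present
    - name: Deploy the app's config from a template
      template:
        src: app.conf.j2
        dest: /etc/myapp/app.conf
      notify: restart myapp
  handlers:
    - name: restart myapp
      service:
        name: myapp
        state: restarted
```

Run with `ansible-playbook -i inventory site.yml`; rerunning it converges the hosts back to the declared state, which covers most of what people reach for containers to get.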

~~~
ptsneves
To be honest, I only very recently got to know Ansible and related tech, so I
may be missing an opportunity to learn something. Even so, I think you are
forgetting the dev part. With Ansible and Chef you can make a deployment to
the real infra. With containers you can have the infra locally in your dev
environment and start from a clean slate. The similarity between the dev
environment and production is crucial for devops. There is nothing more
annoying for developers than having something work locally and then needing
some weird quirk for production/CI. It breeds a lot of political infighting
and hatred for devops.

I saw this as tech lead for the build system at a Fortune 500 company. Ah,
they have a Red Hat based distro. Ultra stable! The problem is that nothing
from outside the company works out of the box, leading to blessed machines. A
disaster that led to so many unofficial workarounds that it is not funny. Lol,
the kernel is so old it cannot run Docker :) Ubuntu is better, but ultra-
stable machines tend towards massive customizations that are very hard to keep
when you finally want to upgrade. It was very common to reach end of life of
LTS distros and then have the server upgrade be a nightmare due to the long
evolution that had happened in the meantime.

~~~
rlpb
> With containers you can have infra locally in your DEV environment and have
> clean slates.

True, but you can do that with plain system containers such as with lxd,
rather than having that bundled with the huge paradigm shift that Docker comes
with.

~~~
ptsneves
My experience with lxd is very limited. I actually worked with liblxc, which
is the underlying library, and I kind of disagree with you. The paradigm of
lxd is much more foreign to me than Docker's. I am pretty familiar with my
application and with the container's distro from a user perspective; I am
definitely very insecure about cgroups and kernel namespaces. In the end, my
application is connected to my business/work orders. Kernel minutiae is not,
and the technical skill requirement is much higher. That will put a higher
price tag on my team's human resources.

~~~
rlpb
> The paradigm of lxd is much more foreign to me than docker.

The paradigm of lxd is pretty much exactly the same as the paradigm of a
regular distribution installed on bare metal or inside a VM. If you can
operate a regularly installed distribution, then you can operate inside a lxd
container. The commands to create and destroy lxd containers are trivial ("lxc
launch ubuntu:bionic" for example).
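For reference, the whole lifecycle described above is only a handful of commands (the container name is arbitrary):

```shell
lxc launch ubuntu:bionic dev1     # create and start a system container
lxc exec dev1 -- bash             # shell inside; behaves like a normal Ubuntu install
lxc stop dev1 && lxc delete dev1  # throw it away for a clean slate
```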

> Kernel minutiae is not and the technical skill requirements is much higher.

I'm not sure why you think you need to know kernel minutiae, cgroups or kernel
namespaces. Operating lxd needs none of that.

> I am pretty familiar with my application and the distro of the container in
> a user perspective.

That's all you need.

------
psv1
Can anyone offer a good guide to DevOps for people who don't directly use
these tools but work with engineers who do and would like to learn more? The
whole ecosystem of servers, cloud infrastructure (and all of the different
offerings there), Docker, Kubernetes, CICD tools etc is a bit overwhelming to
get into.

~~~
dsr_
Sure. None of the things you mentioned are DevOps.

DevOps is two things:

1. Applying the methods of modern software development (version control,
automation, DSLs...) to operations (provisioning, config, deployment,
monitoring, backups...).

2. Reducing silo barriers between devs and ops groups so that everyone is
working together as a team, rather than blaming each other for poor
communication and the resulting messes.

Then there are all the DevOps hijacking attempts, such as equating it to Agile
or Scrum or XP, or insisting that it's a way to stop paying for expensive
operations experts by making devs do it, or a way to stop paying for expensive
devs by making ops do it, or a way to stop paying for expensive hardware by
paying Amazon/Google/$CLOUD to do it.

No matter what your software-as-a-service company actually does, it will need
to execute certain things:

- have computers to run software

- have computers to develop software

- have computers to run infrastructure support

You can outsource various aspects of these things to different degrees.
Anywhere you need computers, you have a choice of buying computers (and
figuring out where to put them and how to run them and maintain them), or
leasing computers (just a financing distinction), or renting existing
computers (dedicated machines at a datacenter) or renting time on someone
else's infrastructure. If you rent time, you can do so via virtual machines
(which pretend to be whole servers) or containers (which pretend to be
application deployments) or "serverless", which is actually a small auto-
scaled container.

Docker is a management scheme for containers. VMWare provides management
schemes for virtual machines. Kubernetes is an extensive management scheme for
virtual machines or containers.

A continuous integration tool is, essentially, a program that notes that you
have committed changes to your version control system and tries to build the
resulting program. A continuous deployment system takes the CI's program and
tries to put it into production (or, if you're sensible, into a QA deployment
first).
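The CI/CD loop described above usually amounts to a few lines of pipeline config; a hypothetical GitLab CI sketch (job names and scripts are made up):

```yaml
# .gitlab-ci.yml - CI notices commits and builds; CD pushes the result out.
stages: [build, test, deploy]

build:
  stage: build
  script: [make build]

test:
  stage: test
  script: [make test]

deploy-qa:            # into a QA deployment first, as suggested above
  stage: deploy
  script: [./deploy.sh qa]
  only: [master]
```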

~~~
movedx
At last, someone who gets it. Absolutely nailed it. Great answer. I never log
into my HN account anymore, but for this response I just had to say: yes. Well
said.

When you boil down the Cloud, DevOps, CloudOps, SecOps, *Ops, CI, CD,
Containers, VMs, and all the other technologies we've devised over the past
ten years, you always end up at the basic building blocks.

You eventually come to the conclusion that all we're really doing with all
these new tools is adding software layers on top of those building blocks in
an attempt to make them easier and faster to consume.

And how have we done overall?

Not bad, if you ask me. Some solutions are overkill for most people (K8s is an
example of overkill for a startup and even an SME). But Terraform, Ansible and
GitLab (CI) are what I'm currently developing a highly opinionated video
training course on, because I believe they strike the right balance of
improving on prior experiences without taking the absolute piss.

~~~
heurisko
I am a developer, who also dealt with ops in a small business context. I agree
with Ansible striking a good balance between prior experience and the future
of automating server configuration.

I did a write-up on how I used it on my blog:
[https://heuristicservices.co.uk/2019/08/13/staging-and-produ...](https://heuristicservices.co.uk/2019/08/13/staging-and-production-servers-with-vagrant-and-ansible/)

The workflow worked really well, provisioning Vagrant servers in staging and
Digital Ocean droplets in production.

~~~
GordonS
Thanks, I appreciated this blog post - I've struggled to get started with
Ansible before, and this was just what I needed!

------
dasyatidprime
Maybe this isn't quite the perspective the article's taking—but damn near no
one visibly wept for LXC when Docker stomped all over it in terms of “what
people think containers just Are”. And now the news asks why I don't weep for
them? Live by the stomp, die by the stomp.

~~~
roryrjb
I weep for Solaris Zones and FreeBSD Jails. Granted, I don't really have much
experience with them; I do have some experience with containers on Linux via
Docker, and also from constructing a minimal container runtime in C (not OCI
compatible or anything). But my point is that there was a lot of work in this
area before Docker, and Zones in particular, freely available today in illumos
distributions, are completely overlooked. I could be completely missing
something here, but Joyent, for example, seem to have made some really good
innovations with Manta, i.e. spinning up containers to run UNIX-pipeline-
equivalent jobs directly in the cloud, on the data. But as with illumos vs
Linux, Zones vs Docker, and Joyent vs AWS/GCP/Azure, it seems to me a David vs
Goliath kind of battle, even if the tech is better.

~~~
jiveturkey
As do I. Solaris in general and Zones in particular are so much better. There
just wasn't an ecosystem around it. Solaris was too late to make the shift to
open source. It might not have mattered; had they done so "in time" it might
have killed them anyway!

------
djsumdog
I don't get why k8s is the dominant scheduler. If you have a 3-6 person
platform team that can set one up, or build a secure Terraform or CFN codebase
to establish an AWS/EKS system, it can be nice. But I've also worked at
DC/OS/Marathon shops where that worked just as well.

The trouble with all these schedulers is they can't grow from just one node
(where scheduling and processes run on the same node; and minikube is a hack,
not a production system) to 100. You can't just set up a small k8s, then add a
node, and another node, and scale up. You go from a single Docker system
straight to a big managed k8s system.

There needs to be more competition. It's the same deal with the dominance of
systemd as the only system layer. Only the small startups seem to be using
more lightweight stuff like Nomad, k3s, RancherOS (Rancher is mostly going the
managed k8s solution anyway; even though they have their own k3s
implementation).

A running k8s system can be okay, but there is a lot of room for improvement
(in terms of making it simpler). Both DCOS and k8s seem to waste a lot of
resources. Docker could have competed in this space, but everyone complained
about all the bugs in Swarm and it never really went anywhere.

I did a writeup on container orchestration systems late last year:

[https://penguindreams.org/blog/my-love-hate-relationship-wit...](https://penguindreams.org/blog/my-love-hate-relationship-with-docker-and-container-orchestration-systems/)

------
goatinaboat
For a very long time there was a gaping security hole in Docker: anyone who
could run a container could mount anything on the underlying host as root.
This says to me that Docker (the company) don’t really consider any use cases
beyond “fooling around on a personal laptop”. Meanwhile other container
projects took seriously from day 1 that they would need to run in production.

Docker (the company) certainly helped to raise the profile of containerisation
but they invented very little of it and did a poor job of implementing what
they did do. Good riddance to them.

~~~
raesene9
A couple of things:

You can still mount filesystems as root from a container _if_ you have Docker
command rights. In Docker's security model, access to run docker commands on a
given host == root; that's a design choice AFAIK, not an oversight.

It's perfectly possible to mitigate that issue by restricting who can run
containers and also by ensuring that all containers specify and use a non-root
user account (or by enabling user namespaces at the Docker daemon level).

Also, many early-stage technologies don't prioritise security. For example,
for several early releases of Kubernetes all you needed was remote access to a
single port (10250/TCP) and you could get root access to the underlying host
without any authentication...
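The two mitigations mentioned look roughly like this; the image, user name, and paths are illustrative, not taken from any real project:

```dockerfile
# Bake a non-root user into the image so containers don't run as root by default.
FROM debian:buster-slim
RUN groupadd -r app && useradd -r -g app app
COPY --chown=app:app . /srv/app
USER app
CMD ["/srv/app/run"]
```

User namespaces are the daemon-level alternative: setting `{"userns-remap": "default"}` in `/etc/docker/daemon.json` remaps container root onto an unprivileged host UID range.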

~~~
derriz
If you run as a non-root user in your container, it makes working with volumes
a pain. Who knows what the container user's UID will map to on the host, and
whether that host user, if any, will have permission to access files in the
volume.

Otherwise you can hard-code a UID when creating the user in the Dockerfile,
but that means your containers aren't generally portable.

In the end, the path of least resistance is to run as root within the
container and simply accept the security implications if using volumes.

~~~
budhajeewa
In the Dockerfile, get UID and GID as ARGs, and make sure those variables are
available in your host environment. Then when creating the user in Dockerfile,
use that UID and GID. Volumes will work like a charm.

That's what I am doing for local development setups with Docker.

See [https://github.com/a2way-com/template-docker-laravel/blob/ma...](https://github.com/a2way-com/template-docker-laravel/blob/master/Dockerfile) and its README.
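The pattern being described, sketched as a standalone Dockerfile (this only loosely mirrors the linked template; the base image and user name are placeholders):

```dockerfile
FROM debian:buster-slim
# Accept the host user's IDs at build time so volume files keep sane ownership.
ARG UID=1000
ARG GID=1000
RUN groupadd -g "${GID}" app && useradd -m -u "${UID}" -g app app
USER app
```

Build with `docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) .`, and files the container writes to a bind mount come out owned by the invoking host user.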

~~~
hbogert
That means your Dockerfile is portable, but your images are not, which is what
your parent is referring to. It's a friggin mess. It's still the same as when
I started using Docker.

------
i386
In my experience having worked at two developer tools companies where we
wanted to partner and co-market products, Docker would never pick up the
phone. There was definitely a “we don’t need you” attitude whenever I
approached them, and I had the same experience repeated to me by friends at
other companies trying to do the same thing.

~~~
wolco
This is the core reason why they are where they are.

They acted like you were bothering them and in fairness you probably were.
Even today they wouldn't pick up.

When you start believing the hype reality becomes distorted.

~~~
i386
The kinds of activities we were approaching them about were integrations,
conferences, blog posts and webinars: great ways to get leads and remain in
the zeitgeist.

------
kjgkjhfkjf
Google doesn't actually use Kubernetes much, so the "operation hardened
internally" argument isn't valid.

~~~
flukus
This reminds me of Joel Spolsky's fire and motion piece
([https://www.joelonsoftware.com/2002/01/06/fire-and-motion/](https://www.joelonsoftware.com/2002/01/06/fire-and-motion/)). To
paraphrase a little bit:

> Fire and Motion. You move towards the enemy while firing your weapon. The
> firing forces him to keep his head down so he can’t fire at you. ... The
> companies who stumble are the ones who spend too much time reading tea
> leaves to figure out the future direction of Google. People get worried
> about kubernetes and decide to rewrite their whole architecture for
> kubernetes because they think they have to. Google is shooting at you, and
> it’s just cover fire so that they can move forward and you can’t

~~~
aitchnyu
Felt the same about Polymer. Starry-eyed devs open a pre-webpack build
terminal, import the if (no else!) and for keywords, build DOM nodes manually,
and tolerate a terrible debugging experience. YouTube loads a few seconds
slower in Firefox, since it gets served a polyfill of the runtime and a slower
build; Chrome had a head start with a native runtime. Sites built by starry-
eyed developers simply break on Firefox.

------
dangerface
I think the reason they failed is that they tried to make a platform for
microservices, but microservices are an anti-pattern; people really just
wanted containerisation.

------
zimbatm
Companies don't have feelings. The only ones weeping are the VCs that invested
into Docker :-D

Docker played its role and introduced the majority of developers to
containerization. This is a major success for the industry.

------
hadsed
Whenever the topic of building online open source communities comes up, I feel
compelled to share the work of the great Pieter Hintjens, the guy who wrote
ZeroMQ. He wrote a book about this topic which I thought was quite good:
[https://www.goodreads.com/book/show/30121783-social-architec...](https://www.goodreads.com/book/show/30121783-social-architecture)

~~~
dguaraglia
Pieter was an outstanding writer. Everything I've read from him was top notch,
from his ZeroMQ guide to the last blog posts explaining how he was dealing
with the unthinkable process of getting his affairs in order because he knew
he'd die soon. I'll definitely add this to my reading list.

------
luckylittle
I also feel sorry for Docker, in a way. Was it their arrogance, or just
incompetence?

They came up with this amazing tool that a lot of companies started using, but
they did not have a business strategy for how to make money in the long term.
They tried to keep up (Docker Swarm, Docker Hub Premium, Tutum, Moby, Docker
Community vs Docker Enterprise, etc.). But in the end they just seem like they
don't really know how to approach it.

~~~
budhajeewa
What did Moby try to do anyway?

~~~
de_watcher
A rebrand that made things more confusing.

~~~
budhajeewa
Is it dead now? It's still on GitHub, as is Docker.

~~~
de_watcher
AFAIK it's a different name for the same Docker.

~~~
budhajeewa
Hmm hmm...

------
jimmymcsales
No, nobody weeps for Docker. But, everybody cheers for `docker`.

------
raesene9
What's interesting, to me, about Docker as a company perhaps not doing well is
how that'll impact Microsoft.

Microsoft have done a _load_ of work on getting containers running well on
Windows servers and that work relies on Docker EE as the container runtime
engine (you get a free Docker EE license to run on Windows servers AFAIK)

If Docker get bought up (by someone other than Microsoft), then that would
seem to possibly place Microsoft's container efforts at risk...

~~~
praseodym
Microsoft has already invested heavily in getting Kubernetes running on
Windows as a first-class citizen: [https://docs.microsoft.com/en-us/virtualization/windowsconta...](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows)

------
FreeHugs
Weep? Docker's downfall? What happened?

As far as I can tell, everybody and his grandmother is using Docker. Why
should we weep about it?

~~~
dagw
'Everybody' is using docker the software, 'nobody' is using (ie paying) Docker
the company. The article makes it clear they're talking about the company.

~~~
budhajeewa
My small company pays for private image storage there.

We actually used Google Cloud Platform's Docker Image hosting service, and
that was expensive.

Yay!?

What else can we pay them for? All the stuff we need is available from them
for free, except private storage. If they had a container hosting solution,
we'd pay for it.

~~~
bvm
Yeah, we pay them like $15/month for Hub/Cloud/Private Image Storage/whatever
it's called this week.

It actually seems quite cheap... we have something like 2TB of tags up there,
and they don't charge for network I/O. I did feel slightly bad when a hidden
crash-looping pod set to always download lay undiscovered for a month...
that's a LOT of I/O.

------
derefr
K8s is a “datacenter operating system”, just like VMWare’s own VSphere, or
Mesos, Mosix, etc. These solutions also compete for mindshare with mainframe
solutions like IBM’s; and with “control planes” like OpenStack, Canonical’s
Landscape, or (I think?) Microsoft’s System Center. This space is very, very
profitable.

None of this applies to Docker itself. Docker is “just” a virtualization
technology. Sure, Docker Swarm _exists_ , but at this point it’s mostly used
as a shimming UI for connecting the Docker client and daemon to the
abstractions mentioned above, not a clustering solution in its own right.
Swarm lost in the DCOS market. And the market for pure virtualization
solutions isn’t anywhere near the market for DCOSes.

~~~
Blackstone4
Doesn't K8s typically run on virtual machines? In which case it's K8s + VMs to
get to the “datacenter operating system” model?

~~~
gtaylor
It can run on VMs but doesn't have to. There are many bare metal k8s users out
there doing some really neat things with the stack.

------
orthoxerox
The only thing Docker is now useful for is Docker Desktop. Unlike other
desktop container software, it actually works on locked down machines in
enterprise environments.

K8s can run on any CRI-compatible runtime, and IBM/RedHat don't even want you
to install Docker on RHEL8.

~~~
raesene9
Whilst k8s can run on any CRI-compliant runtime, I've never actually seen a
production deployment use anything other than Docker.

------
tofflos
I jumped on the container bandwagon late and immediately fell in love with
Docker. It built on my existing skill set so I was quickly able to get
something up and running.

Then I started tinkering with Docker Compose and for a while things were
great. But after a while I started running into issues. Compose felt
artificially crippled. No secrets? No health checks? Pushing me towards Docker
Swarm?

Eventually I just sucked it up and switched to Kubernetes even though I think
it's overkill for my applications.
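For what it's worth, later Compose file versions did grow health checks, and v3.1+ has a secrets section, though secrets are only fully honored when the file is deployed to Swarm with `docker stack deploy`. A sketch (service, image, and secret names are hypothetical):

```yaml
version: "3.1"
services:
  web:
    image: myapp:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      retries: 3
    secrets:
      - db_password      # appears in the container at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt
```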

~~~
orthoxerox
My thoughts exactly. You cannot scale Docker. You can to a point, but there's
always k8s looming ahead of you, saying, "sooner or later you will have to
learn _me_ instead". So most people learn it sooner rather than later, to
migrate to it while their workloads are still small. Of course, they often
don't grow large enough for the benefits of k8s to kick in, but that's another
story.

------
GordonS
A little OT, but is there anything remotely competitive with k8s these days?
By "competitive", I mean: good feature set, thriving community, active
development.

I still use Docker Swarm for small scale stuff, and am pretty happy with it -
it's simple, easy to use and doesn't eat resources. But it very much feels
like Docker have given up on it.

I'm particularly interested to know if there is anything _simpler_ than k8s
that's competitive?

------
notyourday
For the same reason that no one weeps for a company that markets hammers even
if it invented a new way to hold a hammer and raised lots of money because of
it. We do not care about hammers, we just use them when we need to hit
something. Our customers do not care about hammers either, they care about the
result that we deliver.

------
ben_jones
Docker sold its soul for money at the cost of its core product. The second you
take as much money as they did, so you can have your luxury box at AT&T or
whatever else, I'm going to find it increasingly hard to sympathize with your
future mistakes.

------
thrower123
I regret the time and money I spent thinking about learning Docker. I'm sure
containers solve somebody's problems, but it's not any problems that I have.

------
mmanfrin
It's minor and maybe I'm being petty, but my sympathy for Docker ended the
moment they forced you to register an account and log in to download Docker
CE.

------
windsurfer
The first and only experience I had with Docker as a company was being
required to sign up to download their Mac OS X client. They seem to have since
changed that policy, but it really made me resent them and made them feel
pretty unfriendly.

~~~
choward
Are you sure they changed it?
[https://github.com/docker/docker.github.io/issues/6910](https://github.com/docker/docker.github.io/issues/6910)

------
nova22033
> Kubernetes "was operation hardened internally at Google"

Is this true? Isn't Kubernetes "based" on work done at Google, but also a
complete rewrite?

------
octosphere
[https://i.imgflip.com/24ac74.jpg](https://i.imgflip.com/24ac74.jpg)

------
alexandercrohde
Why cry at all? Isn't this the point of open source?

Can't the same be said for Git? Linux? Python? (That they didn't make the
creator billions, and the creator is fine with that)

------
redwood
This article is real rich coming from a guy who works at AWS. The amount of
absurd hubris and doublespeak entering this community unchecked is shocking to
me.

------
lioeters
Ugh, Google in the middle, please change the following URL:

[https://www.google.com/url?sa=i&source=web&cd=&ved=0ahUKEwiS...](https://www.google.com/url?sa=i&source=web&cd=&ved=0ahUKEwiShovJrrvkAhU8FzQIHe7GCHEQzPwBCAM&url=https%3A%2F%2Fwww.techrepublic.com%2Farticle%2Fwhy-doesnt-anyone-weep-for-docker%2F&psig=AOvVaw3uqDkT8mg8UfnHaiOmd7aL&ust=1567830683958295)

..to:

[https://www.techrepublic.com/article/why-doesnt-anyone-weep-...](https://www.techrepublic.com/article/why-doesnt-anyone-weep-for-docker/)

EDIT: The article itself is quite interesting, on the rise of Kubernetes, its
adoption by VMWare, and the reason why Docker failed to capture market value
as much as it could have.

~~~
greglindahl
Yes, it would be smart for HN to reject links that are google search engine
tracking links.

