Goodbye Docker and Thanks for all the Fish (technodrone.blogspot.com)
378 points by Corrado 16 days ago | 303 comments



I’m failing to see the argument here. The author suggests that the advent of viable Docker competitors will inevitably bring about Docker’s death. Why would that be the case? Competition is great, but it’ll be a while before others can match Docker’s maturity and ubiquity. Even then, there’s no guarantee that any of them will be better than Docker, never mind good enough to warrant switching.

What’s wrong with Docker? Why would I want to switch to something else? Are these other solutions really so superior that they warrant the significant time investment that it would take for me to learn how to use them?

The author doesn’t actually answer any of these questions. No arguments against Docker are made, nor are any arguments made in favor of competitors.


You're missing the point of the author. It's not so much that Docker as a technology is bad; there's nothing wrong with it. The point is that Docker, the company, cannot monetize upon the technology, because the moment they start charging for their containerization technology, people will switch to an alternative container technology, especially now that Kubernetes has won the orchestration "wars" and made it so easy to switch the underlying technology.

Personally, I have to agree with the author and think it's difficult to see a bright future for the company that lives up to its $1.3B valuation.


Docker Inc is fine. It would be silly to charge for the actual containerization because that's not really the value of their platform.

Things Docker can and does charge for, and could do very well with:

- Support: Businesses are happy to pay maintenance contracts for fixed LTS versions of Docker which is their EE product.

- Kube: Docker has pivoted their UCP product into a turnkey on-prem Kubernetes distribution. Plenty of room in that space to grow.

- Registry: You wouldn't run random images from DockerHub in production, right? Similar to Red Hat's product in this space, there's a lot of money to be made in having officially supported images. Images with a pedigree, audit trail, CVE reporting, yada yada. Partner with Canonical, since it's way easier to do this when you already have distro maintainers, and you have a solid RH competitor that devs will like more.

- Security: Audit your images that have been sitting around and not updated in ages.

- Hosting: They'll be one of many but there's plenty of space in providing some ergonomics compared to Google/AWS's offerings.


Dockerhub doesn’t even support MFA, not to mention their IAM is all or nothing (there are only two roles). I don’t think they’re focusing on the things that they should be.


Well, if you're focused on MFA and elaborate IAM for Dockerhub as "big problems", then I don’t think you're focusing on the things that you should be.

Those are more like enterprise checklist items than real needs...


Yes. As a response to a comment talking about enterprise support contracts as a main source of revenue for Docker, I think enterprise checklist items are entirely valid items to focus on.


I guess if the focus is "enterprise support contracts", that's legit.


Well this is the inherent push-pull of the industry, isn't it? The ones with money generally have insane compliance requirements (because they have money, they have more risk); the places without money care more about getting things done (since they're trying to make money...).

If it weren't so, we'd all just sell our hot new WhateverAsAService to banks straight away, right?


Also: container orchestration is _far_ from a solved problem. Kubernetes is a nice try, but it screams for a better layer above it that would allow devs to deploy services _easily_.

There are plenty of ways Docker Inc. can create and capture value in this space. Their unique value is that whatever product they make, the world will check it out... They "just" need to make it good and figure out the business behind it.


That's not a billion-dollar company though.


Seems like it's more valuable and ubiquitous than MongoDB, whose valuation exceeds $1.5B.


Ubiquity does not equal value. Docker doesn’t do anything you can’t do with lxc tooling and an object store (registry). That’s the benefit: containers are free, the orchestration is free, and the registry tooling is free and easily replicated.

Mongo equally so (JSON in PostgreSQL). These valuations are fantasy.
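The "JSON in PostgreSQL" point can be made concrete. Here is a minimal sketch of the documents-in-a-relational-database pattern; it uses SQLite's JSON functions only so the example is self-contained (in PostgreSQL you'd use a `jsonb` column instead):

```python
import sqlite3

# Documents stored as JSON text in an ordinary relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs (body) VALUES (?)",
             ('{"name": "widget", "tags": ["a", "b"], "stock": 7}',))

# Query inside the document -- no separate document store required.
row = conn.execute(
    "SELECT json_extract(body, '$.name'), json_extract(body, '$.stock') FROM docs"
).fetchone()
print(row)  # ('widget', 7)
```

The table name and document fields are made up; the point is only that the document-query workflow is available inside a conventional SQL database.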


Building a scalable service on top of that is not free, nor easy :)


You're spending those scaling dollars regardless (developers with devops knowledge, infrastructure/SRE folks). Better to spend it on competent staff than a company lighting VC money on fire, no?


> Better to spend it on competent staff than a company lighting VC money on fire, no?

False dichotomy though. Either you acknowledge that getting a vendor to do that for you also implies competent staff, or you acknowledge that hiring "competent staff" is not like flipping a switch and requires training, ramp-up time, a few wrong hires, getting managers to guide the teams, etc.

I'm not sure which way is better, but I don't think the industry has decided either :)


I think it's a bit more complicated than that. Very skilled developers who know this tech won't jump ship to many borderline-broke startups, which could easily just get started with Docker with less than 'rockstar' devs.



That's the point of parent and the article: It has a billion dollar valuation, but they doubt it can back that up with actual revenue and profits.


Of course not; a product company like Docker is built for acquisition.


I can think of a few companies that might want to buy Docker. I can think of no company that would buy it at a billion dollar valuation.


Microsoft's containerization strategy heavily depends on Docker (they've got a licensing deal so that Windows Server can use Docker EE for free), so they might well pay good money to buy Docker.


Didn't IBM just buy Red Hat for $34 billion? Can you imagine a future where Amazon acquires Docker to optimize it to run on AWS, and perhaps perform a bit worse on GCP? Given how ubiquitous Docker is and how large AWS is, any small advantage in the space would translate into large changes.


Perhaps, but given that AWS is a heavy Xen and KVM shop, something like Kata Containers might suit them better.


I can think of a few companies that may pay an absurd valuation for Docker: IBM and Microsoft.


Fun fact: if you use Red Hat, you have to use Docker EE. You don't even have the option of getting CE and taking responsibility for Docker support yourself.


Could you explain that a bit more?


> if you use Red Hat...

"Red Hat" as in RHEL specifically, or are you meaning any of the family (eg CentOS)?

Asking because I've not hit any problems using Docker CE with CentOS 7. Well, aside from general bugs (etc). But nothing seems to force the use of EE instead of CE.


Docker CE is officially supported (well, not "supported" because it's CE, but it's packaged, with repos, an install guide, etc.), yet only EE is packaged for RHEL. Red Hat also has their own docker package like many other distros, so you can still get it free easily, but you have to get it through them and wait on them for updates.


If you're using RHEL why wouldn't you default to using Red Hat's build of Docker? I definitely prefer it to the official builds. Red Hat in general seems very annoyed at Docker and carries quite a few QoL patches like being able to add registries to search.


That isn't true. You can also use podman, and cri-o in the case of Kubernetes.


Understood, and I'm fairly certain that a large corporation that uses and markets Docker extensively will look to acquire it sometime in the future. Docker's ubiquity is exactly what makes it valuable.

Could be any of the FAANG companies but I posit Microsoft since they noticeably try to make Docker on Windows a seamless experience and also push Docker within their cloud offerings to modernize legacy apps towards Azure. They've already purchased a large community-oriented company (GitHub) and have acquired companies at a surprisingly high price (LinkedIn).


And considering almost all Docker containers are based on and continuously updated from a GitHub repo, a Docker acquisition makes a lot of sense. GitLab already has its own container registry.


> The point is that Docker, the company, cannot monetize upon the technology, because the moment they start charging for their containerization technology, people will switch to an alternative container technology...

I see the thought process but disagree with the premise that they need to charge for their container technology directly.

For example, GitHub does not charge for git, but instead for the convenient layers that they add on top of it.

Another example: Google does not charge for Kubernetes, but you can buy support, which every enterprise company wants. Since GKE happens to be the most convenient way to get a K8s cluster rolling in the cloud (in most circumstances), and now on a box with GKE On-Prem, many will choose that path. Google still gets their money, just from products / services / support on top of the free core.

They can also monetize the "fallout paths" — turns out you're in over your head running all of that K8s stuff yourself? Come pay us more and use our K8s PaaS instead! Not sure if they actually have one yet, but consider something like GKE Serverless here.

Enterprise features and support around running containers in production are worth a lot of money.

Docker can continue charging for all of its porcelain layers on top like Docker Hub and Docker Enterprise and make plenty of money off of big fish.

They could also monetize by being acquired by Microsoft, Google, or Amazon.


1. GitHub was struggling badly. They had lay-offs, etc.

2. Google still makes the lion's share of its money from search/ads. Google Compute is struggling and being subsidized by search/ads.

IMHO, Docker's only exit strategy is to sell. They don't have enough time left to grow an organic business. Silicon Valley money always wants to be paid back, sooner rather than later.

The layoffs and new CFO at Docker, IMHO, are all about cleaning up the finances so that a sale is possible at a good price.

Their last round of investment barely averted a down round. The value bump was minimal, and the investors are mostly from a bank in Brazil that likely doesn't understand exactly what they bought into (there's a lot of uneducated money out there right now because there is no place to put it). None of the original investors participated, which shows they don't believe in the company anymore.

Their most valuable asset is Windows support. Hence, the most likely acquirer would be Microsoft. I'll bet a paycheck they sell for somewhere between $1.5B and $2.5B. Keep this thread for posterity :-)

It's painfully obvious to anybody that has been in this game for a while...

(opinions and analysis my own)


1 - Having too many people is more an indicator that they hired too fast or more than they could sustainably afford rather than indicating anything about their inherent monetization strategy and approach. Every company has a ceiling somewhere, even if it's getting acquired by Microsoft for $7.5B. That's a pretty nice ceiling to have.

2 - That seems fine and besides the point... Google is massive. Google Cloud (excluding the half that comes from G Suite) brings in $2B per year in revenue. That might only be a few percent of Google's overall, but a few percent of $80B is still a massive number.


There are actually a lot of issues with Docker around breakage and stability. That's alluded to in the article.

Docker itself is in trouble because they got the wrong use case from the start. They completely missed out on orchestration.

Containerization requires both having containers (Docker images) and deploying them on a fleet of servers (Kubernetes orchestration). The latter is where the value is.


> Docker itself is in trouble because they got the wrong use case from the start. They completely missed out about orchestration.

That's actually how they started -- they were a PaaS called dotCloud before they were Docker!

This was before the term "orchestration" became common, but the point was that they were running apps in multiple languages. They were like Heroku except they supported more than one language (back when Heroku was basically Rails apps).

They didn't get much traction and had to pivot to Docker. I think the underlying reason is that writing a PaaS / distributed OS like Kubernetes is extremely difficult, and they were spread too thin. They didn't really nail a single use case like Heroku nailed Rails apps. They were trying to run everything and ended up running nothing.

----

This "rhyming" of history makes me smile, and makes me think how important timing is. dotCloud was the right product, done too early and done poorly. Docker was arguably the wrong product at the right time! I say "wrong" because I never thought it was a sustainable product or business, but it did come about at a time when people really needed a quick and dirty solution.

And now the surrounding ecosystem has changed drastically and I agree with the OP that Docker may be left out of it.

----

Another thing I find funny is that App Engine was already "serverless" 10+ years ago... you wrote Python apps and uploaded them as .zip files (web or batch processing). You didn't manage virtual machines. So somehow they were also a bit early, and maybe the product was lacking in some ways as a result.


Agreed.

I'm not sure that dotCloud and App Engine were the right products, though.

They have shown that a fully managed cloud solution is not the answer. Barely any company could use it, let alone migrate existing software to a whole new platform. It's limited to a very small niche.

Docker and Kubernetes are standalone products. Companies can use them internally and make it work with what they already have.


Amazon did it first with Elastic Beanstalk: upload a plain WAR file and let it run somehow, with transparent MySQL connections.


I read the finer point of the argument to be that Docker got the layer of abstraction wrong for monetization. That Kubernetes (which orchestrates containers) is the tool at the layer of abstraction where a company could provide a product that has enterprise demand similar to Red Hat. And that Docker is a replaceable piece of technology in the market of managing cloud architecture.


Would a $32B valuation of Red Hat make you reconsider?


That's a good point; there's huge market demand for technology and talent in the cloud native space.

However, I don't see how Docker Inc can be sustainable in the long run.

What's most likely to happen is Docker being acqui-hired by someone like Microsoft.




Linux containers have a lot to learn from mainframe containers.


The story of Docker, Inc should be read as a cautionary tale to those who think that every random bit of open source should be “monetized” by seeking rent for its continued use and support.


Or that open source isn’t always a good idea, at least from the point of view of certain parties.


Exactly, when will Silicon valley investors understand that Open Source:

1. Doesn't grow revenue at the speeds at which they want returns.

2. Isn't "free marketing" (dear God, I have seen this one too many times; CTOs even write it out in blogs sometimes).

3. Only really makes sense when you get out more than you put in, aka when it is community driven. Community-driven open source is really the only profitable open source.


> What’s wrong with Docker?

What's wrong with not using Docker? Outside of the cloud/SOA/webscale sector, people have a hard time simply comprehending the drama.

I have not seen any justification for using containers on any scalable system on real hardware, for example.

In "ad-farm" world, running ad farms on pure hardware with netboot provisioning has been the "gold standard" since time immemorial.

Few points:

1. Netboot with root at tmpfs takes less time to boot, seconds over 10G (most of the time is spent on POST and DHCP actually.)

2. Netboot images can be stripped to the barest minimum of life-sustenance. Less software running, fewer problems.

3. Less attack surface for hackers.

4. Hard lockup proof - hardware watchdogs are more or less foolproof.

5. Linkup to the real network is as simple as it gets, you can use broadcasts for self-configuration and service discovery on the network.

6. If you deal with RDMA, well, you will already be spending most of your effort on just getting software to work as advertised on the bare system, and the additional trouble of containerisation will not be worth dealing with.
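For illustration, the boot side of such a setup can be as small as this (a hypothetical iPXE script; the hostname and image names are made up):

```text
#!ipxe
# Stripped kernel plus an initramfs that becomes the root-on-tmpfs.
kernel http://boot.internal/vmlinuz-adfarm console=ttyS0 root=/dev/ram0
initrd http://boot.internal/rootfs-adfarm.cpio.gz
boot
```

Everything the node runs lives in that one image; replacing it is a reboot, not a deploy.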


Former Amazon engineer here. Despite the downvotes, you are making an excellent point.

Amazon, among other FAANG companies, never needed containers shipping entire bundled OSes.

Deploying a "base OS" (using PXE or netboot) can be done very reliably on both hypervisors and VMs.

A simple build system can then generate traditional packages and push them into VMs allocated for a specific product.

It provides better security isolation and the ability to receive security updates on OSes and also on dynamically linked libraries.

Building and deploying new versions of your product is also much faster, especially at large scale: only your application package is rebuilt and deployed.


That's technically what companies with refined processes do by using "scratch" docker images that only contain the binaries needed for the application and nothing else.

The fact that many other containers run an entire OS is because it's easier to get started, or there are runtimes needed, or people just build and deploy in the same image.
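As a sketch, the "scratch" pattern usually takes the form of a multi-stage build; the Go service here is hypothetical, but the shape is typical:

```dockerfile
# Build stage: full toolchain, never shipped.
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
# CGO disabled so the binary is static and needs no libc in the final image.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: no OS, no shell, no package manager -- just the binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image contains a single file, which is about as close to "deploy only your application" as containers get.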


I agree that developed companies are doing what is described, but that is a completely separate thing from starting from scratch.

People have hurriedly forgotten why Linux distros exist: https://opensource.com/article/19/2/linux-distributions-stil...

Even when you write a lot of software, environments always drag in existing tools and infrastructure (aka other open source). 99% of the time it's easier to consume pre-built SME knowledge from a Linux distro. #HailTheMaintainers


Containerization is a cost effective alternative to building and running immutable VM images, which is a best practice in terms of CD. Another way to put it is: don't build in production, instead build in CI, test your image and then just deploy it on your environments (you probably should have at least two: staging and production).


You kinda missed the point that docker is more about packaging the applications than running them. Running apps is easy. Packaging in such a way that they could be run without hassle - hard.


> could be run without hassle - hard.

Strict versioning of every dependency - exactly the thing a packaging system solves.

Specific system configuration? Also not a problem at all with netboot images.
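For instance, a traditional package can pin every dependency exactly; a hypothetical Debian control stanza (package names and versions invented for illustration):

```text
Package: adfarm-frontend
Version: 2.4.1
Architecture: amd64
Depends: libssl1.1 (= 1.1.1f-1ubuntu2), adfarm-common (= 2.4.1)
Description: Frontend service; dependency versions locked at CI build time
```

With `=` constraints, the package manager refuses to install against anything but the exact versions the build was tested with.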


Can I run netboot images anywhere easily from my command line?


The few companies I dealt with that used them had their own tooling, but it was nothing special.

CI picks up your image build, then commits a record to a DB from which the DHCP server looks up the boot image for a given MAC; then it either SSHes a reboot command or uses an IPMI reboot. Voila.

Today, some server motherboard firmware can boot images over HTTP, on a VLAN, and can talk to the boot server itself over HTTP.
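The flow described above can be sketched roughly like this; the table layout, image names, and the `ipmitool` invocation are illustrative, not a real system:

```python
import sqlite3
import shlex

# CI writes "this MAC boots this image" records; the DHCP/TFTP side reads them.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE boot_map (mac TEXT PRIMARY KEY, image TEXT)")
db.execute("INSERT INTO boot_map VALUES ('aa:bb:cc:dd:ee:01', 'adfarm-2.4.1.img')")

def boot_image_for(mac: str) -> str:
    """Look up the boot image the DHCP server should hand to this MAC."""
    row = db.execute("SELECT image FROM boot_map WHERE mac = ?", (mac,)).fetchone()
    return row[0] if row else "rescue.img"  # fall back to a rescue image

def ipmi_reboot_cmd(bmc_host: str) -> str:
    """Build (not run) the out-of-band power-cycle command for a node's BMC."""
    return shlex.join(["ipmitool", "-H", bmc_host, "-U", "admin",
                       "chassis", "power", "cycle"])

print(boot_image_for("aa:bb:cc:dd:ee:01"))  # adfarm-2.4.1.img
print(ipmi_reboot_cmd("node01-bmc.internal"))
```

After CI updates the row, the next power cycle picks up the new image; there is no in-place deploy step at all.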


Much more straightforward than docker!


You could have, if HashiCorp had beaten Docker to market with a simple VagrantHub and build tool...

It could have easily been VMs instead of containers. IMHO, it was all the packaging...


~$ qemu-system-x86_64 diskimage.img


That sounds like a pretty solid system, but it’s not at all what the author is advocating. The author states that Docker is destined to be replaced by other containerization frameworks. The functionality differences won’t be as stark as what you’ve described, so this isn’t really a good comparison.

That being said, I’d agree that netboot + tmpfs has a lot of advantages; unfortunately, it’s not always practical, especially at smaller scales. Containers are convenient and don’t require any special hardware accommodations.


One reason I see as an open source developer is low-touch adoption and environment control. Instead of maintaining build support on many Linux flavors (Red Hat from the past, Arch Linux from the ghetto :) ), I point users at a dockerized version instead. This might be my narrow corner of the world, but users have driven me to dockerize much of what I code, and my support hassle has dropped, a lot.


Exactly. We POCed Docker and realised we already have a solution for every aspect it is trying to address. I found one and exactly one use case for Docker: running CI/CD workloads.


Exactly.

I see Docker as a framework. It's the "get me from nothing to a fully featured solution" for building, running and distributing containerized applications. And it happens to do it for anyone running Windows, macOS or Linux.

It's the same reason why I would want to use a web framework. I don't necessarily want to write and string together a bunch of super low level things. I want to start at a higher level of abstraction so I can focus on writing the applications I care about.

As an end user, I don't necessarily care too much about OCI or the runtime spec. I care about running docker-compose up and having everything work.
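That end-user experience boils down to a file like this (service names and images are just an example):

```yaml
# docker-compose.yml -- `docker-compose up` pulls/builds and starts everything.
version: "3.7"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
```

One command, and the app plus its database are running, on any OS with Docker installed.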


The article is about the company, not the product. Which bits of what Docker provides would you pay for, given free alternatives exist now for basically all of it?


I don't think I would pay for any of the core features, but that's mainly because I've been using Docker for 5 years with the current strategy, where just about everything is open source. You pay if you want support and other enterprise features. Private hub repos are also reasonable to buy for most people because they're priced pretty competitively with rolling your own private registry on a cloud provider (i.e. $7/month for 5 repos isn't much more than $5/month for the cheapest DO droplet, but now you need to manage that machine yourself, which is not worth the hassle for $2 less).


I would pay for an ecosystem of images that are vetted for security issues. They could even build an industry around security and privacy compliance and certification, which risk-averse companies (and those who want to do business with them) would pay for.


To be fair, the article is about both. To quote "in my humble opinion the days for Docker as a company are numbered and maybe also a technology as well."


> Why would I want to switch to something else?

The core argument is that you wouldn't care if you were using something else. I use Kubernetes. That's what I care about. If I'm on AKS, GKE, EKS, any other managed Kubernetes service, the only time I ever even see the word Docker is in the filename of the Dockerfile. They could switch their default runtime to anything else and I might not notice.

> Are these other solutions really so superior that they warrant the significant time investment that it would take for me to learn how to use them?

This is the exact same problem that Oracle has with Java, and MongoDB now has with their product. You can't make money off of an API, and implementations are very rarely a differentiator. Another product will come along (actually, several already exist) that says "we support Dockerfiles, we emulate the Docker API, swap us out". There's nothing new to learn. The only thing that product needs is a benefit over normal Docker, and if you think Amazon, Google, or Microsoft can't add value through more native integration with their cloud services, you're crazy.

Docker isn't going anywhere, but they're becoming commoditized far more quickly than their valuation would suggest.


"Why would that be the case?"

One argument could be that their moat is mostly brand. And, increasingly, other brands are working around them. Like Kubernetes. They already offer non-docker runtimes. They could, at some point, offer and promote their own packaging tool too. People might use a K8S tool solely because it's the "default setting".


I have a ton of docker images already that work perfectly fine. Why would I switch to something else?


There will be more created in the future than were created in the past.


The OCI standards make it so that it doesn't matter if you switch or not. Your images will work anywhere with any tool (aka Cloud Providers, Kubernetes, Podman, Buildah, etc.) Everybody has OCI compliant tools, even Docker - Docker founded the OCI.


Same point though. If docker works, why switch to anything else?


"The Kubernetes project is excited to announce kocker, a new docker compatible image management tool. Kocker will support features you've been asking for, like non-priv-by-default containers, enhanced performance in k8s..."

Made up, but I imagine something like that.


That's Podman you are describing, and it already exists.


These container images will continue to work.


Docker is drifting towards being a standardized deployment unit. This is especially apparent with the advent of Alpine Linux based images, or even more so with distroless.

With these changes going on, Docker becomes just a fancy zip file, and since there is now a committee to design a standardized container, there really is nothing left for the Docker company.

Kubernetes IMO just used Docker's popularity to gain its own, but at this point Docker itself is no longer essential.


> What’s wrong with Docker? Why would I want to switch to something else?

Security, performance, reliability and following best engineering practices. Everything else is fine.

https://arxiv.org/pdf/1804.05039.pdf

https://hackernoon.com/another-reason-why-your-docker-contai...

https://www.scylladb.com/2018/08/09/cost-containerization-sc...

https://thehftguy.com/2017/02/23/docker-in-production-an-upd...

http://catern.com/posts/docker.html


It's kinda useless when it's not compared against alternatives.

A crappy working solution is infinitely times better than an ideal non-working one (like Linux vs Hurd).



Yep, you just gave me an idea of a blog post.


The alternative solution has been working for decades: software packaging.


Except it hasn't been working: with few exceptions we still deal with dependency hell and isolation issues in system package managers. Surely, if it worked as well as you posit, we would have long ago had a Kubernetes built upon apt or yum?


I am not sure if you have experience working with one of the leading tech companies, but most of the problems you mention were solved a long time ago, before containers were a thing. Apt and yum are not the right set of tools for this job; Amazon for example had an internal build and deployment system that solved dependency problems as well as environment variables, configuration management and additional aspects that even Kubernetes does not solve today.


> I am not sure if you have experience working with one of the leading tech companies, but most of the problems you mention were solved a long time ago, before containers were a thing.

I'm not sure what you're advocating for exactly, but the systems that the "leading tech companies" originally built were largely VM orchestration systems that required humans to pack software packages into VM images. This was suboptimal so Google et al pioneered containers and others followed suit. This is exactly what gave rise to Docker and Kubernetes. This seems a lot like saying, "ships solved the transportation problem so planes are not useful".

Never mind that most people don't have the resources to build, maintain, and operate their own orchestration systems.

> Amazon for example had an internal build and deployment system that solved dependency problems as well as environment variables, configuration management and additional aspects that even Kubernetes does not solve today.

Build tooling is an important but orthogonal concern (you can use Bazel and friends to build container images). I'm not sure what you mean when you say Kubernetes doesn't solve those problems today.

To be clear, I'm not a Docker/K8s fanatic; they can be frustrating at times, and something better will surely come along and replace them eventually. However, they're a lot better than what existed before (which, to be clear, were not off-the-shelf solutions but rather patterns that large, wealthy, competent organizations could use to build their own solutions).


> I'm not sure what you're advocating for exactly

I am not advocating for anything, just pointing out that the features attributed to Docker existed before.

> but the systems that the "leading tech companies" originally built were largely VM orchestration systems that required humans to pack software packages into VM images.

Absolutely not. Amazon never had such a system. You are probably confusing Google with all leading tech companies.

> Build tooling is an important but orthogonal concern

Is this why the same feature set is claimed by both things?

Again, what is the problem you are trying to solve with Docker/K8s?


> I am not advocating for anything, just pointing out that the features attributed to Docker existed before.

No one contends that they existed before in isolation or even as a proprietary assemblage; the contention is whether or not they were available as an off-the-shelf tool or a simpler assemblage that is affordable to companies who are not "leading tech companies".

> Absolutely not. Amazon never had such a system. You are probably confusing Google with all leading tech companies.

Why do you think "Amazon never had such a system" is a definitive rebuttal to "most leading tech companies had a system like X"? In any case, tell me about the systems that existed at leading tech companies that were comparable to Docker and accessible/practical for non-industry leaders...

> Is this why the same feature set is claimed by both things?

I'm not sure how to parse this, but it sounds like you might be implying something like "because Docker has a naive image-building story, its primary purpose is to be a software build system".

> Again, what is the problem you are trying to solve with Docker/K8s?

Docker solves for standard software packaging and distribution; kubernetes solves for standard, efficient software orchestration. Of course you can throw really smart, well-paid engineers at the problem to devise, build, maintain, and operate a bespoke system that accomplishes a similar end.


I used the Amazon system a lot and saying that "Apt and yum are not the right set of tools for this job" is incorrect.

APT and yum simply install packages assuming the dependencies resolve correctly. The [unnamed ;)] system could be modified to generate a set of .deb and .rpm files (with locked versions) to deploy a product.


It's still working very well for Amazon, and the company evaluated alternatives and chose to stick with packages because they have the right granularity.

Not even remotely accessible to end users like containers are today.


A shame, really, but not a huge surprise that products like Swarm fell by the wayside. I feel it could've occupied a nice middleground for teams that didn't need the full capabilities (or overhead of supporting) Kubernetes, even though I think K8s is an exceptional project.

I chose Swarm as a pragmatic choice in an enterprise environment that didn't have any prior experience with Docker at all, really, at the operational level. We had to support financial models, often many different versions of the code at the same time, on top of the usual stack of applications to go with that. The choice was a "success", albeit one muted by the wacky networking layer they use. Compound that with RHEL's older kernels, and we had to deal with oddball issues like iptables/arp table getting out of sync with what's actually running, resulting in connectivity issues. And don't get me started on removing and redeploying a stack; that would occasionally wedge things so badly we had to cycle the docker daemon.

Still, a shame. The gap between "Look, I wrote a compose file" and running something on a small cluster is tiny, and that was its main strength, even if it did suffer from some serious heisenbugs. Why they decided to add and remove features between versions and do their damnedest not to make a compose file 100% "forwards compatible" with Swarm is another mystery.


I've got a customer that uses docker to make binary distribution easier. All runs on a single machine and a docker-compose file is all they need. I don't think they'll ever need kubernetes unless they change the way they sell their products.

Docker is useful for developers because we pull containers and run them on Ubuntu and OSX without having to install anything. Much easier than Vagrant.
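For what it's worth, a compose file for that kind of single-machine setup stays tiny; a sketch with made-up service names and image tags:

```shell
# Minimal single-host docker-compose setup (service names/image tags illustrative).
set -eu
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: example/myapp:1.0   # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
EOF
cat docker-compose.yml
# With Docker installed, `docker-compose up -d` brings the whole thing up.
```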


Swarm was a pretty big mistake. I think based on just relative resource investment compared to Kube, it ought to have been obvious that it’d never be relevant if it wasn’t extremely specialised.


That's true. However, consider this environment:

- The ops team are in a different country, and are wedded to very old-fashioned views of administration ("Automation? But I like manually running commands from a runbook!")

- You work with a team of people who are quants/actuaries/scientists/engineers but not professional developers, but you want them to have a turn-key environment so they can Get On With It. When they need new python packages or god forbid upgrade pandas or something else, there's a full CI chain that'll make sure that what they do here also works there.

- Swarm is (from personal experience) easy enough to teach people who don't know anything about Docker. You can show them how to query the state, modify it, look at logs, etc. all without the hassle and overhead of configuring and running K8s, even though it will always be my #1 choice for tech-literate orgs. Swarm, for many, including myself, was a pragmatic choice -- 80% of the immediate benefit of container orchestration with 20% of the cognitive overhead for the chaps in another country who had to maintain it if things went south.


Sure, and then Google releases “kubelite” or some guy writes a convenience script, and there goes your competitive advantage for that use case :)

Docker didn’t have enough of an advantage even though Swarm came shipped with it; so de facto, if you had Kube you would have had Swarm first...


I made a tradeoff; keep in mind, it's not always the case that the best technology wins. Running <Technology X> is all well and good, but if you cannot keep it running perfectly, or it results in unacceptable downtime due to operator error, then that reflects poorly on the architect/lead in charge of picking the tools.

I am more of a mind to make sure that I can solve the task(s) that I am given, such as it is, with the resources available (people, knowledge, time, etc.). That inevitably means tradeoffs. In a parallel universe I would have used K8s instead, as I think it's exceptional and far superior to Swarm. However, with the limited resources available, I chose Swarm, and for all its faults it's running fine.


I agree with your pragmatism (and admire it). I would only urge you to add a couple of tools to your toolbelt for analysis:

1. Open source politics, aka: is the project viable?

2. Where's the money coming from? Aka, what products/companies build solutions off the tech?

3. Look for growth, not survival. If a company is not growing, it is dying.

These extra three test "gates" help me select what technology I will use, learn, and bet my career on....


Keep in mind, when this solution was adopted Docker were still wedded to Swarm. Even if they stop caring about it -- as they pretty much have now -- we have a system that works at rest. Two years on, the team(s) that handle the production support and operations are more comfortable with Docker & co, because of this. Not to forget, this is a very large (and risk-averse) enterprise. You don't always get to pick whatever you like!


FWIW it appears K3s is starting to become the kubelite you are referring to.


For the curious:

https://github.com/rancher/k3s

> k3s - 5 less than k8s

> Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.


Rancher K3s is really targeting embedded, IoT and edge computing use cases.

Not really as a simpler Kubernetes installation model.


Swarm had, and still has, a huge advantage over Kubernetes: its simplicity. I wouldn't call Swarm "simple" but it's "tolerably complex".

Unlike Kubernetes, Swarm deployments are actually maintainable by a small team (or even a solo sysadmin). If there is a bug (there always are), you can actually diagnose or even debug it without the feeling that you're wrestling with an 800-lb gorilla.


Similar story for me. Pretty much the only time I am disappointed with Docker Swarm is during monthly host OS patching and the networking gets messed up and we have to restart daemons and undeploy/redeploy stacks.


I use Docker Compose for running all the things for my dev environment (RabbitMQ, Postgres, a few others), and it works really well. Well, now it does - I had no end of networking troubles with it in the past, on both Linux and Windows.

I love how easy it is to spin up something new with a simple YAML file, but I've no idea what it would be like in production - how are rolling updates handled, for example?


I agree, there was a gap, especially for about 2 years. But even when that gap existed, it was pretty clear that the cloud providers (and other software companies) would build/enable one-click Kube distros.

As the cloud providers adopted Kube, I knew Swarm would lose. Now the answer is "use an API"; don't run it locally if you don't have to.


Not a containers expert and I never used docker swarm, but I will still go with Nomad for container orchestration over Kubernetes any time I can; it gets the job done with minimal fuss.


See my three rules above. Engineering is a zero-sum game, even in open source. Nomad has a very dark future: single vendor, no community, very little usage, no competitors adopting the tech. If your competitors don't adopt your tech, it's dead (basic open source 101)...


Would you still do this even if you had K8s available as a managed service?


Was it RHEL 6 by any chance? RHEL 6 was never supported by Docker.


Nope - RHEL 7.x. Various incarnations, too.


I never understood why "modern" tools like Docker have to provide everything: networking, firewall, repository, you name it...

I understand somebody wanting to type "docker run xxx" and have everything set up automatically, but if you're running anything but default networking and actually care where the xxx image comes from, it's gonna fail miserably. Coming from the VM world, I found it much easier to work with the macvlan interfaces that lxd supports, for example - the container gets its own interface and IP address, and all networking can be prepared and set up on the host instead of some daemon thinking it knows my firewall better than me...


Three words: "Ease of use".

Containerization has obviously been around for ages before Docker came on the scene, yet its adoption has dwarfed that of other similar solutions, why is that?

People have to start with a technology somewhere, if they get frustrated with the process, often they'll discount it and move on, that first moment of success of "ohh that was easy" is really important.

Docker has that. Sure, it hides a wide variety of complexity under the skin that, in more complex deployments, can come back and bite you, but for people getting started it's much easier than the alternatives.

The "App store"-like nature of Docker Hub is another part of that; the ability to easily find base images that you can use to prototype solutions is super-useful as a beginner.

Of course once you've been using it a while, you might have questions about image provenance, vulnerability management etc, but those typically aren't part of the initial evaluation.


>>> Containerization has obviously been around for ages before Docker came on the scene, yet its adoption has dwarfed that of other similar solutions, why is that?

Containerization was mostly limited to Solaris and BSD. It took a while to get to Linux, in the form of Docker.


Well Linux had containerization before Docker came along. The initial release of LXC was in 2008, a decent distance after Jails/Zones but still 5 years before Docker. OpenVZ was even earlier than that, starting in 2005


Even earlier was Linux VServer, which was 2001 :) So yeah, Linux containers have been around in one form or another for over a decade before Docker helped popularise things.

This makes some interesting reading: https://blog.aquasec.com/a-brief-history-of-containers-from-...


Lxc predates docker, and other lesser forms of containerization (or perhaps "precursors") like simple chroot have been around much longer.


Docker was originally written to use LXC for containerization. It wasn't until right before 1.0 that Docker was written to use its own go based library for container handling. https://blog.docker.com/2014/03/docker-0-9-introducing-execu...


What about OpenVZ (2005)? It’s not containerizing single applications but whole VPSes, of course, but still.

I’ve used it for more than a decade myself and I liked the simplicity of it.


Doesn't that require out of tree kernel patches?


It sure did back then, that was its biggest downside (not sure if it's still required today).

We used it extensively at my work in a Web company to package and deploy all our stack as far back as 2006.


Yeah, it feels too monolithic. Just to showcase it can run Hello World in one "docker run" command I guess?

Another thing people use Docker for but shouldn't is application packaging. Using Docker you build one fossilized fat package with both all OS and app dependencies baked in. Then some day, after years of using that Docker image, you need to upgrade the OS version in the image, but you can't replicate the app build because you didn't pin exact library versions and the later version now in the global app repository (pip, npm) is no longer compatible with your app.

Application packaging is better done in proper packaging systems like rpm or deb or other proprietary ones, and stored in an organization's package repository. Then you can install these rpm packages in your Docker images and deploy them into the cloud.

The difference between OS dependencies and app dependencies is clear when looking at the contents of actual dockerfiles. OS dependencies are installed leveraging the rpm or deb ecosystem. Apps are cobbled together using a bunch of bash glue and remote commands to fetch dependencies. Why not use proper packaging for both OS and apps and then just assemble the Docker image using that?
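If the app is already packaged as an rpm in an internal repo, the Dockerfile collapses to a few lines; a sketch (the repo URL, package name and paths are placeholders, not from a real project):

```shell
# Sketch: assemble the image from a pre-packaged app rpm instead of ad-hoc bash glue.
set -eu
cat > Dockerfile <<'EOF'
FROM centos:7
# Register the organization's internal yum repository (placeholder URL).
RUN yum install -y yum-utils \
 && yum-config-manager --add-repo https://repo.example.com/internal.repo \
 && yum install -y my_app_exe-2.0 \
 && yum clean all
CMD ["/opt/my_app/bin/run"]
EOF
cat Dockerfile
# `docker build -t my_app:2.0 .` would then produce the image.
```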


> Using Docker you build one fossilized fat package with both all OS and app dependencies baked in.

Exactly. Most uses of Docker are like a junk drawer: neat on the outside, a total mess on the inside.

People stuff their python 2 app in there and forget what their dependencies are, or where they got them from.

Good luck upgrading that 2-3 years from now.


To be fair, this is how people build applications. "Oh there is this library let me just pull that in"


I LOVE the junk drawer analogy. I will credit you on my next blog ;-)


Well, mostly because some of us are just devs and we don't know about it? How about a blog writeup, I'd read it :)


I don't think I ever came across a developer who made a RPM/DEB package. I'm not sure I came across a devops/sysadmin who made a RPM/DEB package in recent years.

I wouldn't waste my time learning that if I were you. Software like python needs pip to build, they can't be built with the OS package manager alone.


> Software like python needs pip to build, they can't be built with the OS package manager alone.

Well yeah, OS packaging formats are for putting together your build artifacts. You can use whatever to build the software.

If you want to do it 'right' you would use pip to build packages for any missing runtime dependencies from your OS repos and then package your application. I swear it's much easier than it sounds.

But nothing says you have do it like an OS maintainer though. You can also just vendor all your dependencies, slurp them all up in an RPM and install in /opt.


+1. Back in the day, we used to mirror all of CPAN, because even as a young buck, I saw the gap in rebuilding apps without the source copied...


Well my team does that for one. We use Python packaging ecosystem, specify Python dependencies using standard tools like setup.py, requirements.txt and pip. All Python dependencies are baked into a fat Python package using PEX format[1]. Also tried Facebook's xar format[2], without success yet. What matters is to have a statically linked dependencies packaged in one executable file. Like a Windows exe file.

Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run the FPM [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, to /opt or /usr or elsewhere.

The way to bundle it properly is to actually use two rpm packages linked together by an rpm dependency: one for the app and the other for the deployment configuration. The reason is that you only have one executable, but many possible versions of deploying it, i.e. depending on environment (dev, staging, prod), or you simply want to run the same app in parallel with different configuration.

e.g. one rpm package for the app executable and static files

my_app_exe-2.0-1.noarch.rpm

and many other related config rpms

my_app_dev-1.2-1.noarch.rpm (depends on my_app_exe > 2.0)

my_app_prod-3.0-1.noarch.rpm (depends on my_app_exe == 2.0)

You then install only the deployment packages and rpm system fetches the dependencies for you automatically.

There are other mature places who use similar techniques for deployments, for example [4].

All of this then can be wrapped in even higher level of packaging, namely Docker.

[1] https://github.com/pantsbuild/pex [2] https://code.fb.com/data-infrastructure/xars-a-more-efficien... [3] https://github.com/jordansissel/fpm [4] https://hynek.me/articles/python-app-deployment-with-native-...
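To make the two-package layout concrete, here is roughly what the trees and fpm invocations look like (paths and versions are illustrative; the fpm lines are commented out since fpm itself has to be installed to run them):

```shell
set -eu
# Directory trees backing the two rpms: one for the executable, one per deployment.
mkdir -p exe_tree/opt/my_app/bin cfg_tree/etc/my_app
printf '#!/bin/sh\necho running\n' > exe_tree/opt/my_app/bin/run
printf 'env=prod\n' > cfg_tree/etc/my_app/app.conf
ls -R exe_tree cfg_tree
# With fpm installed, each tree becomes an rpm; the config package pins the
# executable package via --depends:
#   fpm -s dir -t rpm -n my_app_exe  -v 2.0 -C exe_tree .
#   fpm -s dir -t rpm -n my_app_prod -v 3.0 --depends 'my_app_exe = 2.0' -C cfg_tree .
```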


Everything at Amazon has been packaged and deployed without containers for decades.

All engineers have to learn how to package their stuff.


Our team still uses debs. In fact I made a tool to greatly simplify the process: https://github.com/hoffa/debpack


I fail to see how RPM/Deb would be better at this (and I say this as someone who has a lot of experience in both). You will still need to pin dependencies with RPM/Deb, you still have to deal with OS release updates, and in the end it's just a matter of ensuring you frequently update and test your upstream dependencies.


You want to separate app and OS dependencies because in the future you may need to update the OS, which means rebuilding the Docker image. By then it can happen that you are no longer able to reproduce your app build. But when you have the app package separate in some repository, you can just create a new Docker image and reuse the old app package without rebuilding it.


Because then you depend on your OS to upgrade your application dependencies.


But you can have a macvlan setup in Docker.

https://docs.docker.com/network/macvlan/

And you can tell also Docker not to touch your iptables rules.

https://docs.docker.com/network/iptables/

I don't think I've ever run into a container in the wild that is dependent on a specific network plug-in.
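For reference, the whole macvlan setup is a couple of commands. The subnet, gateway and parent interface below are examples, and they obviously need a real Docker host, so this sketch just writes them to a script rather than executing them:

```shell
# Sketch only: running these requires a Docker daemon and a real parent interface.
set -eu
cat > macvlan-setup.sh <<'EOF'
#!/bin/sh
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet
docker run --rm --network=macnet alpine ip addr
EOF
cat macvlan-setup.sh
```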


> I understand somebody wanting to type "docker run xxx" and have everything setup automatically

Well there's your answer. Simple things gain traction.


You can use macvlan (or ipvlan) with Docker, it's built in.

But yes I agree (as a former Docker Inc employee and current maintainer of moby which is what Docker is built from), the default networking in Docker is often problematic... firewall handling is annoying, at least from a sysadmin perspective.


That's because Docker was really designed to replace Vagrant, and that shows.

The use case is an individual developer who wants to get a test environment up and running quickly without needing to understand how IP routing works. It's great for that use, not so much for workloads in production.


Yeah exactly...and those developers who just want to run something quickly are not really your ideal customers. People using it for workloads in production are...


Because most people don’t know anything about networking or firewalls and don’t want to.


Most people know nothing about technology and don't want to. Not sure where that argument goes, but I am guessing jobless...


Docker provides NAT, but what do you mean it provides a firewall? I quite often see people deploy docker without a firewall, and at some point notice that they exposed services to the internet that they didn't want to.


They mean it pokes holes in the firewall which are difficult to filter.


> When people understand that they can easily make the choice to swap out the container runtime, and the knowledge is out there and easily and readily available, I do not think there is any reason for us to user docker any more and therefore Docker as a technology and as a company will slowly vanish

How about MySQL as a counter argument? It's always been feature-weak compared to PostgreSQL, with a less business-friendly license, and now owned by one of the software companies most despised by technologists. But it's probably still the default relational database people pick up. Habit, defaults and massive installed base can go a long, long way.


> It's always been feature-weak compared to PostgreSQL,

That's not really a meaningful statement to make. A singular "better" seldom exists outside children's discussions. MySQL for a long time had a more compelling multi-master situation, for example, and was simpler to operate as there was no need to schedule vacuums and fewer reasons to stop the database during normal operation.

But none of that is really relevant for choosing a database. It's an enabler technology, just like an operating system. You seldom set out to use a certain database, you set out to use a certain application and then choose a database and an operating system that fits the application's use case.

None of that is true for containerization technology. Any requirements come instead from the infrastructure layer, and the application is made to fit those instead of the other way around. Other than that, you are free to choose whatever has the least complex operational requirements. If I was an investor in Docker Inc. I'd carefully consider what that means.


A container engine is just a wrapper around a couple of Linux APIs, mainly cgroups, namespaces and iptables/BPF; that's where most of the hard work is already done. A good example of how trivially it can be implemented is this: https://news.ycombinator.com/item?id=9925896

Now, implementing a DBMS with even a basic SQL query language is not a trivial job.
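Those kernel primitives are visible directly from userspace; on any modern Linux box (assuming /proc is mounted) you can poke at the same APIs a container engine wraps:

```shell
set -eu
# Each entry here is a namespace the current process belongs to; a container
# runtime simply puts processes into fresh copies of these.
ls /proc/self/ns
# Resource accounting and limits come from cgroups, visible the same way:
head -3 /proc/self/cgroup
# util-linux can combine the pieces with no container engine at all (needs root):
#   sudo unshare --pid --fork --mount-proc sh -c 'ps ax'
```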


That's funny because getting these things right (without security issues) is quite difficult.


It's very likely that most of the challenging security stuff will continue to move down into the kernel itself. It's important to remember that a container wasn't really a single unified concept in the kernel until very recently, as they've gained popularity; instead, containers were an amalgamation of a few different capabilities in the kernel.


Perhaps you caught news of something I haven't seen, but AFAIK, "container" is still defined in user space. Talking to Eric Biederman, that's what the kernel team wants - people to experiment in user space, remixing kernel tech together...


There's a new-ish (from Feb) LWN article about containers as an object in the kernel: https://lwn.net/Articles/780364/

Reception still doesn't seem great.


Until a few years ago PostgreSQL was dramatically slower than MySQL. As for features, it depends how you look at it. In terms of what it can do in queries it was more powerful than MySQL, because MySQL just stopped at SQL-92. PostgreSQL also lacked operational functionality; for example, in older versions it was your responsibility to take care of shipping WAL files to standby nodes. PostgreSQL, unlike MySQL, has a rule that if they implement a feature they want to do it right: it is better to not have a feature than to have one that could cause data corruption. MySQL on the other hand starts with an MVP so they can put a check mark next to it, then later works on improving it and fixing bugs.

In recent years, PostgreSQL not only caught up but surpassed MySQL in performance (I think MySQL might still be faster if you mostly read the data, but I'm not sure) and operational features. At this point you get a database that is as fast (and actually faster for write workloads) but also has more features and much, much better reliability. So it is a no-brainer to use it right now, but earlier you had to choose what was more important to you.


> It's always been feature-weak compared to PostgreSQL

For many years MySQL had a much clearer multi-master replication story than PostgreSQL. For most use cases, other features are secondary to replication.


For most use cases, other features are secondary to replication.

I'd say the opposite - for most use cases, replication doesn't even figure into it. I guess we both need to find more solid evidence than our own opinions.


I suppose if you reason from some other criteria than avoiding downtime and data loss, a different conclusion was possible at the time.

To each their own.


Replication is not the best way to avoid data loss. And it's not the only way to avoid downtime (e.g., the AWS Aurora PostgreSQL servers I run don't use the normal pg replication).

And there are a lot of systems for which downtime really is acceptable, at least for the amount of time it takes to restore a backup. Not everything is a front-end, mission critical system. I manage plenty of systems that are only used in a back-end batch context.

So I think it's crazy to suggest that how well a db engine's replication work outweighs all other factors. Depending on your situation it may well be the most important thing, but there's a lot of room for others to have well-architected systems that have different priorities.


You are right, but it is hard to see how a for-profit Docker Inc benefits from that. MySQL had a business model that was similarly challenging, which led to where it is today.


MySQL has a strong ecosystem model and it's a pain to switch databases. Young developers used to get a LAMP stack (Linux, Apache, MySQL, PHP), and that comes with MySQL. There are many products, one could say even languages, that require MySQL.

The database is the hardest component to switch and even if the company were to disappear today, it would take decades to migrate off.

Docker has none of this. It only took off a few years ago; it can be gone as quickly. It doesn't even have a strong brand name: people want Kubernetes, not Docker; Docker is merely a dependency that can be replaced.


Yeah, I agree. MySQL was in a more robust position, and even then the outcome was disappointing. Open source business is tough.


MySQL always had a really good, free GUI; Postgres didn't. There is pgAdmin now, but a lot of people really don't like it (I've had lots of problems with it myself).

MySQL also had (relatively) straightforward replication features since way back, something that is much newer to Postgres.

For info, I used MySQL for over a decade, and am a relatively recent convert to Postgres - and I love it!


I have an honest question- is docker even useful for most projects or is it just a preparation for solving imaginary scaling issues that most won't even reach?

I ask because I'm wondering if I should care about docker in the beginning of my project which will have very few concurrent users/requests and would run fine on a single machine.

EDIT: Thank you to all the kind answers!


> is docker even useful for most projects or is it just a preparation for solving imaginary scaling issues that most won't even reach?

I use docker for a small server at work, where I know there will only ever be one container per image.

Where docker is incredibly useful is with non-free software which works on Linux. Usually, they rely on a specific version of Debian or Ubuntu. Yes, this can be achieved with VMs, but that requires running 5 or 6 kernels. So for us it's a huge resource win over VMs.

For me, Docker sits between chroot and full-blown VMs. If you drop privileges you actually do have some security advantages over chroots and separate processes – although I don't think Docker emphasises this enough. The way Docker is used and the way it is documented seem to have a disconnect.

> I ask because I'm wondering if I should care about docker in the beginning of my project which will have very few concurrent users/requests and would run fine on a single machine.

If it doesn't solve any problems you have I personally wouldn't bother – you can always "dockerize" after, which should be a relatively straightforward process. At this stage it's just a distraction. Good luck.


Docker doesn't have anything to do with scaling. All it does is to bundle shared libs, executables, etc. of your app in a container image. If the world could agree on a common Linux userland (same builds, features and names/locations of common userland libs and language runtimes), then Docker wouldn't exist. So you could say Docker just solves a problem of our own making - that of too many Linux distros. Of course, now that those libs are contained in the image, and often invisibly so, the problem of updating those dependencies for e.g. security updates is unsolved. That is, the entire reason why these dependencies are provided as shared libs in the first place - that of removing vulnerabilities without having to rebuild entire apps - is lost.

The other thing that Docker does is run in a namespaced environment such that apps don't have access to the local file systems on your host by default. This is also a problem that's been solved since the dawn of Unix by e.g. using file permissions (though admittedly not every scenario is served well). Docker apps don't have access to the host's access control infrastructure (/etc/passwd, pam, etc.) though, so Dockerized apps need to invent their own ad-hoc auth.

The only problem that Docker is really solving is that you can pack more apps on a single host compared to full-blown VMs. Basically, Docker is infrastructure for cloud providers to sell you lots of pods cheaply (for them, that is). Since many images are based off Debian (pulling in deps via apt on first start), it could also be argued that Docker is acting as a GPL circumvention device of sorts.


+1 tannhaeuser. I live in a more heterogeneous environment than what docker can provide, and it seems to me that the ground docker covers is, like you said, application packaging. This is frustrating since in a lot of cases, "applications" are only being provided as docker images. What the heck can I do with a docker image on something that doesn't provide docker? How do I have assurance that when teams pull docker images in from the wild, they're actively maintained and not full of 0-day vulnerabilities?

I've switched to pkgsrc and highly recommend it for solarish (and solaris), *bsd, centos, debian, mingw+windows, osx and whatever else one would feel like building binaries and dependencies for. The key to me is separating the operating system package manager and the application dependencies, such that when operations runs their mandatory apt/pkg/yum/dnf/whatever updates it doesn't break application dependencies. And on the flip side, when applications want to screw with things that aren't in apt/yum/etc, those custom needs can be met. This approach also doesn't preclude using the respective os container mechanism (zones/containerd/vmware/hyperv/chroot/etc). We package custom internal packages in our own internal pkgsrc repo, alongside the main repository that provides north of 10,000 packages.

More people should become hip to pkgsrc.


So only to get on the same page as you and not to be a smartass: you're saying Docker is like an AppImage without access to host files (I didn't get the last part about GPL circumvention) whose purpose is to provide non-emulated VMs?

If all I use is Debian on my server with a file that always installs the required packages and I don't need isolation between services, should I care about docker as an "end-developer"?


Difficult to tell. I'd probably stay within a plain dev environment as long as possible. Just keep in mind you can't rely on host auth within Docker if you ever wanted to use it. Besides, it's never wrong to learn something new, and Docker isn't difficult to pick up.


Often times you want to run a newer version of a package but it has new dependency requirements that conflict with other packages. Most solutions around this involve tweaking the package, or using a chroot. A container lets you avoid this problem, as each container has its own rootfs, can be a different OS, and can run different versions of software however you want without worrying about conflicts.

It's somewhat similar to static linking, but at a higher level.


It depends on what you deploy to. If it’s AWS or Azure web-app type stuff, then maybe not, but if it’s your own infrastructure in any way then fuck yes.

Ten years ago we built things in C#, used MSSQL and deployed to IIS. All Microsoft tech, all pretty straightforward, except it wasn’t. We never kept track of the “it works in dev, but it explodes in prod and I don’t know why” hours, but I wish we did. Because it’s in the thousands, and those thousands of hours are exactly why we use docker.

We also use docker because it lets us build things that our IT crew isn’t certified in running infrastructure for (and the lovely security issues that brings to the table) but we mainly do it because it works.

Docker might not have a monopoly on that, but they have enough of a brand that the word “docker” is to containers what “google” is to search. At least in my circles.


I use Docker for development stuff too. I (and team members) have spent countless hours battling with setting up things like RabbitMQ, and I only wish I'd done it sooner - being able to instantly spin up a working, consistent development environment is amazing.

I know you can do this with VMs too, with tools like Vagrant, but containers start in a couple of seconds - great for integration tests.


> We also use docker because it lets us build things that our IT crew isn’t certified in running infrastructure for (and the lovely security issues that brings to the table) but we mainly do it because it works.

It is amazing how many technologies get traction simply because "It lets us bypass IT."


In our case it’s the compromise between developers and an operations department which has 5 technicians to support the infrastructure of a municipality with 7000 employees and around 300 IT-systems.

To manage, our IT had to make certain infrastructure decisions and build their competency around those. This clashes with a lot of modern development, but we manage with docker and an increasing Azure presence. It’s not optimal, but sometimes it’s just necessary. Don’t get me wrong, we’re trying to improve and build Devops that isn’t handing off a container, but it’s a challenge in sectors where digitisation and IT aren’t priorities despite being an inherent part of any business process in 2019.

There is a real danger in there of course, but we have strategic choices for our development platforms as well. They just need to move a little faster than IT.


What are some docker alternatives? And how does Docker on windows work? I know it uses Hyper-V underneath but does MS include a base linux kernel image?


Docker on Windows is interesting. For windows containers, it can do both process based isolation (similar to Docker on Linux) and Hyper-V based isolation, which uses a very cut down VM as a base. Compared to Linux containers the base images are large, but there are only a few commonly used base images, so once you've got them it's not too horrible due to the overlay filesystem in use.

For Linux containers there is LCoW, which uses a Linuxkit VM to host the containers.

There is also some chance that in the future Linux containers will run natively on Windows without a VM, via WSL. This is very hacky at the moment but there have been reports of people being able to run Docker engine in WSL with no VM.


I always thought WSL was only meant to be a dev-tool. Do you have any links where there is a mention of a plan for them to be prod? (I know that's not what you're mentioning but wondering if that's how you guessed...)


It's mainly in GH issue comments; things like https://github.com/Microsoft/WSL/issues/2291#issuecomment-38... show that people have got it working.

Whether MS ever use that tech. for production linux containers remains to be seen. It sounds kind of cool and might be good enough for dev/test but I imagine sorting out all the wrinkles to make it prod. ready could be tricky...


Docker is immensely useful when starting projects that use web services because it lets developer download and run instances of any service on demand with a single CLI command, and lets any teammember do the same with your work.

Then with docker-compose you are able to instantiate custom deployments of your projects, including DBMSs and gateways and message buses and IAM services, with a single command (docker-compose up).

So yes, it's very useful.
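As an aside, a minimal sketch of such a docker-compose.yml (service names and images are illustrative assumptions, not from the comment):

```yaml
version: "3"
services:
  app:
    build: .                 # your project's own Dockerfile
    ports: ["8080:8080"]
    depends_on: [db, broker]
  db:
    image: postgres:11       # the DBMS dependency
    environment:
      POSTGRES_PASSWORD: example
  broker:
    image: rabbitmq:3-management   # the message bus dependency
```

With that in place, `docker-compose up` brings the whole stack up together.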

However, the point of the article is that Docker is no longer the only option, and although right now it's the default go-to container tool, there is nothing but inertia keeping the world using Docker.


I don't even think Docker is useful for production, but it's indispensable as a development tool since I don't want to sully my Linux install with a bunch of databases and dependencies I'll inevitably forget about.


Clearly, you haven't had outages during app deployments because somebody forgot to update the accompanying packages from the OS.


It makes it trivial to package, build, and configure your environment exactly how it's supposed to be set up. Drastically reduces the amount of "well, this works great locally" BS.

Then it's trivially fast to pull and run the image on any machine with Docker or K8s installed.


Docker is not a tech for scaling, it's a tech for repeatable environments (prod = dev). It's very useful for this. Container orchestration (k8s) is for scaling. Docker is also very useful when you have team members with different computers.


I've found it really useful on embedded systems (e.g. Raspberry Pi). It takes a long time to install anything complicated, and you occasionally need to make kludges. Having a Dockerfile means (a) I can remember what I did to make everything work, and in what order (b) using a hub, I can easily duplicate the environment to another Pi without waiting overnight for all the applications to build.

Yes, you can clone the SD card, but I think it's cleaner to use a version-controlled Dockerfile. Otherwise you always need some master SD card to clone from (and keep track of a multi-GB image file), and you have to faff with resizing images if the new card is smaller.

A fresh system install is then: flash Raspbian, update system, setup some init scripts, install docker, pull the image and clone the latest version of the code from github.
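For illustration, such a version-controlled Dockerfile might look like this; the base image and package list are assumptions on my part, not the commenter's actual setup:

```dockerfile
# Illustrative only: a Pi image built on a Raspbian-compatible base.
FROM balenalib/raspberry-pi-debian:stretch

# Record the slow/finicky setup steps, in order, so they are reproducible
# and you never have to remember them again.
RUN apt-get update && apt-get install -y \
        build-essential \
        python3-dev \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical kludges captured as versioned scripts rather than
# undocumented tweaks on a golden SD card.
COPY kludges/ /opt/kludges/
RUN /opt/kludges/install.sh
```

Push the built image to a hub once, and every other Pi just pulls it instead of rebuilding overnight.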

This is also the approach embraced by Balena, who (conveniently) provide Docker base images for a bunch of common embedded systems.

Another reasonably big user-base is machine learning.


What stuff are you putting on your Pi through docker? Just getting started and interested to hear how others leverage it.


It's for a drone-based system. Python, OpenCV, ROS are the main parts, plus some machine vision camera SDKs. I've also put in some optimised machine learning libraries which are a bit finnicky to setup. ROS is an absolute mule to get right and I like having it in a closed off place so it can't mess around with the rest of the system. I have the actual ROS workspace in a shared volume so they're persistent if I fiddle with the Dockerfile.

None of it really requires docker, but it's nice to have the whole environment encapsulated, and having a record of what I had to do to get some of these things to install is invaluable.

I have a shell script which launches the container (I just run a new one) every time the Pi boots.


Think about this: Wouldn't it be nice if installing apps on your desktop was as simple as just a folder? And everything lived in that folder instead of sprawling out into the OS everywhere? Copy that folder to run more, or replace it to update, or delete it to remove everything.

That's what Docker does. It lets you wrap everything into a single isolated package (a container) that can run whatever you want without affecting anything else on the system, and then cleans up perfectly. You can connect it to networks and disks as needed. There were APIs for most of these things in Linux already, but Docker brought them all together into a simple interface.

The other big innovation is the Docker registry that makes images easily available over HTTP. No more complicated downloads or package systems, instead you could just point to a simple host/image:label address and download whatever you need. That simplicity and flexibility is what made it take off.
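For example (the registry host and image name here are placeholders):

```shell
# Address an image as host/image:label and pull it over HTTP(S) --
# no package manager or manual download dance required.
docker pull registry.example.com/myteam/myapp:1.2.3

# Run it, removing the container when it exits.
docker run --rm registry.example.com/myteam/myapp:1.2.3
```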


As others are saying, Docker is a tool for repeatable environments.

I have an application with a long, tedious setup process involving dozens of apt dependencies that I don’t maintain. In the past, I’ve had issues with things not updating properly, inconsistencies between versions, spontaneous breakages... Using Docker, I built an image that contains all the dependencies that are unrelated to my code or configuration, then another image relies on this and contains all of my stuff. My deployment process only needs to be concerned with updating this second image and all of the frightening dependencies are guaranteed to be consistent and reliable. This second image is deployed to each box. All of the production systems are identical and if I need a new production box, I can have it ready to go in minutes and be confident that it will work and behave reliably.


> [...] I built an image that contains all the dependencies [...]

How, may I ask? I've been looking for a good way to do this for a long time; nowadays I'm using a checklist and AMIs on AWS.

Ansible is too damn slow, does not know how config files work and clutters the terminal/my home directory/my known_hosts.

Dockerfile is a badly written shell script using && instead of errexit.


I'm not sure if mine is a good way of doing it but it's working for me. I took the relevant portion of my own badly written shell script, moved it into its own Dockerfile that has its own CI project and its own private repo on Quay, and I rebuild/republish as needed. The remainder of my original shell script went into the Dockerfile for the project that holds my code. The first line of this Dockerfile just starts with my base image.

    FROM quay.io/my-acct/my-image:latest


The point of the && in a Dockerfile is that every single RUN directive adds another layer to the final image. Fewer layers = slimmer images = faster and lighter deploys.
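A quick sketch of the difference (packages are illustrative):

```dockerfile
# Three RUN directives = three layers, and the apt package lists
# get baked into an intermediate layer forever:
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# One chained RUN directive = one layer, with the cleanup actually
# shrinking the image because it happens in the same layer:
RUN apt-get update \
 && apt-get install -y curl \
 && rm -rf /var/lib/apt/lists/*
```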


Yes. Also, the only thing that survives between RUN directives is the disk image so && saves state:

start-daemon && configure-daemon-while-running && stop-daemon

The point is, I've seen far more && than RUN directives so the decision was backwards.


It makes deployment really easy so long as you have the infrastructure to deploy to. I have most experience with Amazon ECS but you could also manage it with Kubernetes on AWS, Azure or GCP.

My company now deploys every app that can run on Linux as a Docker container, regardless of whether it needs a single instance or 50.

It's not as though it doesn't introduce its own problems. For example, you might patch software on the host but you still need to patch it in the container, and it can be difficult to get an inventory of that, say when there's a zero-day found in nginx. You still require organisational discipline; that never goes away.


Even if you don't use Docker in production, I think it's still a very useful tool for local development, combined with docker-compose.

When I'm working on one service, I often depend on a few other services too. I usually need a storage layer, perhaps API calls to related services; it may even need to talk to some Amazon services like S3 or SQS. Having a simple way to spin up every dependency locally, even AWS (lots of great AWS API-compatible images out there), is really useful if only for local development.


I use it a lot for ML work. Nvidia-docker makes it easy to setup CUDA and partition access to GPUs (if you have more than one). Docker makes it easy to setup tensorflow, Keras, python, etc. which I find to be a royal pain otherwise.
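For example, with nvidia-docker2 installed you can pick the NVIDIA runtime and restrict which GPUs a container sees (the image tag is illustrative):

```shell
# Select the NVIDIA runtime and expose only GPU 0 to this container,
# effectively partitioning a multi-GPU box between experiments.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=0 \
  tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.__version__)"
```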


Anytime a potential new hire submits their sample project in a docker container and I can stand it up and everything 'just works' I am happy. No need to mess with any dependencies and it should work as they intended it.


The Docker CLI will be around for a long time. This post is just FUD.

There isn't another runtime that is as easy to use as Docker.


Part of the article calls out how the alternatives (like CRI-O) are shipping Docker-compatible CLIs, so that users can keep running `docker build` but with a completely different runtime.


CRI-O isn't comparable to Docker for developers, it's a replacement for Docker when used as part of a Kubernetes cluster.

The thing is that whilst various operations are shipping bits of the functionality that Docker provides in various different tools, for developers working with containers none of them (that I've seen) have an easy to use, cross-platform setup.


The post was primarily about the company. The CLI can be around for decades, but that doesn't necessarily mean Docker the company can profit from it.


It's fair to say that in the hype cycle Docker (the technology) has passed its peak, but I don't think the conclusion that as a technology, it's finished, is warranted.

Containerization is still on the rise and Docker is part of that. The toolset is still a good, fairly easy to use, place for individual developers to use containers on the client-side whilst creating containerized applications.

For smaller deployments (without orchestration) a single Docker engine with things like compose, can still work well.

Obviously on the orchestration side, Kubernetes has won, although it too will face the inevitable trough of disillusionment when people realise that all technologies have downsides.

Personally, for simpler workloads, I think Docker swarm can still be a good answer, as it's a lot less complex than Kubernetes to set-up and maintain.

The idea that the Redhat container stack (podman, CRI-O, et al) necessarily mean the end of Docker, doesn't really follow at all to me.

If anything the increased use of containerd directly is a bigger threat to Docker's market share.


I think the main point to consider is, if Kubernetes is the technology for large deployments, who is left to buy enterprise solutions from Docker the company?

And without enterprise sales, what is going to fund docker development for the small and simple docker workloads?


Well Docker the company is a different game. They have gone quite heavily for the enterprise market and support Kubernetes as part of their Docker EE product.

Of course, whether that will give them enough success to justify their valuation, is another matter. Personally, I had thought they would get bought out by one of the large tech. players who is heavily investing in containerization (e.g. Microsoft) but that doesn't seem to have happened so far.


Amazon might have a say in this, both their ECS and EKS solutions are based on docker.


The Docker company may not survive, but Docker will.

Especially considering the work the Docker company is doing to support OCI. The dockerd internally uses containerd now, which supports both Docker images and OCI images.


I always tend to think that the reason Docker developers decided to split the Docker products codebase (CE/EE) into the Moby components 2 years ago was that they foresaw the inevitable death of Docker in the long-term.


>>> It's fair to say that in the hype cycle Docker (the technology) has passed its peak, but I don't think the conclusion that as a technology, it's finished, is warranted.

That's confusing the product and the technology.

The technology is containerization, not Docker. Containerization is on the rise and will continue.

Docker as a product AND a company, is doomed. They have no business model and no sales. All of Google/Amazon/RedHat/Linux are working on a substitute so they don't have to depend on it.

The hype is in Kubernetes now and in managed cloud solutions. Docker is an accidental dependency of Kubernetes, until it is finally gotten rid of.


That's a view for sure, and as a company I'm not sure what Docker's endgame is now.

However, I think it's premature to write off Docker the technology. There's a load of scripts/tooling/mindshare baked into the Docker model for executing containers, and I don't think that's going to switch to another model overnight.

As many people have commented on this thread, the killer app for Docker the technology isn't running in production, it's running in development, where you're unlikely to want a full k8s cluster.


Docker got over $100M in VC funding; they can continue to operate without revenue for years.

I hate to break it to you, but the model already switched overnight.

The job market is the biggest indicator for sure. Any experience with Docker is mostly worthless at this point. No company will hire somebody with Docker experience alone, without Kubernetes.

Kubernetes has replaced most of what you mention with its own idioms and tools. All the way from the lips of the CEO to the engineers in a F50, it's talking Kubernetes, not a mention of Docker.


Well you've got your experiences and I've got mine, I don't think one set necessarily invalidates the other :)

My experiences are that every k8s cluster I've seen uses Docker as the underlying CRI and in development land, Docker as a product is very prevalent.


I don't intend to deny your experience at all. Docker is everywhere about now, same for me.

There is another thread discussing that RHEL 8 is replacing the docker command with its own command line tool. Looks like it's happening sooner rather than later. There is a lot of work going on industry wide to replace Docker, can't deny that either.

I personally find it very ironic that Kubernetes only supports Docker (more or less) while it's trying hard to never use the word Docker and has a whole concept of replaceable container engines.


That Redhat don't like Docker is not surprising :) Whether their tooling will supplant Docker is more debatable.

From what I've seen they don't have huge traction outside of Redhat for their stuff, so all the other major players in the space (Google, Microsoft, Amazon etc) are not shifting to use Redhat's stack.

So the challenge is, can they convert developers from using the easy-to-use Docker to their newer, more complex setup? I'm not sure I'd bet on that. Anecdotally, it doesn't feel to me like OpenShift is winning the "Kubernetes distro" fight; if what I see in reviews is anything to go by, it's mostly kubeadm, kops, Rancher or GKE/AKS/EKS.

So Redhat replace bits with their tooling, Google do the same (kaniko, GVisor et al) Amazon continue to push Fargate and serverless, there's not a unified "replace docker" effort, more a splintering of tooling/services.

My personal view is that on Dev. desktops Docker still provides the best experience. That will continue to keep them relevant until someone else can replicate/exceed that experience.


Well, actually it's clear that Docker is bad at building images; that's why a lot of solutions are coming around to build them. I mean, Google has over 3 different ways of building containers: Kaniko, Jib, and even a way in Bazel.

Also, security: you can't run shared environments with Docker or plain container runtimes; that's why they invented gVisor. However, Docker will still live on as a runtime, and probably a lot of companies will keep Docker as the runtime. I mean, containerd will probably be the default one in k8s sooner or later, but that will only happen because containerd was designed just for that: being an awesome container runtime with a good API.

Docker has basically lost because they focused on too much. Docker can do all the things, but way worse than all the special tooling.


What is wrong with Docker? This does not address any technical or other shortcoming and only seeks to replace one set of over-engineered tools with another with the exact same problems. [1]

This is yet more of the ecosystem continuing to push over-engineered tooling and 'winners' breathlessly, without basic technical scrutiny, leaving end users dealing with needless complexity and debt.

Containers can be useful as a lightweight, efficient alternative to VMs and those who want containers untouched by questionable ideas should try the LXC project on which all this was based.

Any additional layer on top of this, be it a non-standard OS environment or extra layers, should meet technical scrutiny for end-user benefits, and most users will be surprised by the results.

[1] https://www.flockport.com/guides/say-yes-to-containers


It's not clear to me that the author has a solid understanding of how Docker makes its money. You don't make money just by giving away software, and you also don't make money by providing support for software no one uses. Docker has been pretty smart to build a well-known brand around supporting a specific set of technologies -- some theirs, and some not. When it became clear they were unable to own every part of the container ecosystem, they made smart decisions around supporting k8s and engaging the open container standard.

Docker's got lots of runway providing enterprise support contracts, so I'm not worried about them.

I think the author's also not noticing that orchestration was really a bit of a stumbling block that would eventually be removed. Sure, you've still got to use k8s in self-hosted, GCP, and Azure, but those of us on AWS have the option to use ECS with Fargate and have many of the core features of something like Kubernetes fully managed.

So anyway, this post is a bit dramatic, and maybe has a few blinders on.


This post links to podman. But the podman website is completely useless, because it does not tell me what it does differently/better than docker, only that

    What is Podman? Simply put: `alias docker=podman`
So, for someone who occasionally uses Docker for running services or creating specific build environments (manylinux1): what are the benefits of podman over Docker?


Podman doesn't have a daemon like Docker does. It also more tightly integrates with buildah, which the article doesn't expand on. Have a look at this (very brief) overview to get a bit better idea of their relationship: https://github.com/containers/buildah#buildah-and-podman-rel...

Podman also uses the same notion of pods, and it doesn't support docker-compose syntax/files, because RedHat strongly believes that Kubernetes has already won. Basically, podman/libpod give you an easy migration path from your local computer to a k8s cluster, with the same images and same concepts. Have a look here: https://github.com/containers/buildah#buildah-and-podman-rel...


> Podman also uses the same notion of pods, and it doesn't support docker-compose syntax/files, because RedHat strongly believes that Kubernetes has already won.

Could you expand on that please? Almost everything I run locally (be it a self-hosted service or app devel) with docker is a docker-compose stack. It allows me to easily manage/monitor services via CLI or Portainer. How does Podman and other modern tools offer to solve this case, or is it proposed now to use K8s locally?

I got enthusiastic about Podman not having a daemon and running Podman containers as a non-root user[1].

[1] https://opensource.com/article/18/10/podman-more-secure-way-...


The second link is the same as the first, did you mean to put a different one?


For me, the problem with the RH container landscape is that it's way more complex for new users than Docker.

In docker-land, `apt install docker.io` gets you a working container system with image management, network management and container operation.

To get the same functionality on the RH side it is (from what I can see)

skopeo+buildah+podman+CNI for networking, then if you want to run that under Kubernetes add in CRI-O

Sure for more advanced use cases breaking that all apart could well have benefits, but for ease of getting started, Docker seems better.


Dev computer? Use podman.

Server? Use CRI-O + kube.

CI/CD (that needs to build a container)? Use buildah.

And use skopeo for copying images around when needed

You could argue this makes it more difficult to run containers on the server without kube, and you'd be right. Whether or not that is a bad move by Red Hat, I'll leave up to others.
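Roughly, the docker-to-RH-tools mapping looks like this (a sketch with most flags elided; `bud` is buildah's "build-using-dockerfile" subcommand):

```shell
# Build an image from a Dockerfile:
buildah bud -t myimg .        # roughly: docker build -t myimg .

# Run a container:
podman run --rm myimg         # roughly: docker run --rm myimg

# Copy an image between registries, with no daemon and no local pull/push:
skopeo copy docker://registry-a.example.com/myimg \
            docker://registry-b.example.com/myimg
```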


On your dev computer though you might want to ..

build containers to confirm that your build steps all work ok, so Podman + buildah, then you might want to

... push them to a repository to collaborate with a colleague without setting up CI/CD, so Podman + Buildah + skopeo.

Then you might want to execute the containers to test to see whether your code is running ok....

The point I was making didn't relate to running containers on servers, where typically things like CI/CD will be part of the process, but to new developers and devops people getting started with containers.


Podman and Buildah have registry push/pull built in, so technically you don't need Skopeo except in CI. (Honestly I'm still a little unsure what Skopeo's ideal use case is.) Podman also has a `build` subcommand that takes a Dockerfile, so I'd argue if you're on a development workstation where you only use Dockerfiles to build containers, all you need is Podman. (Buildah supports more interesting build pipelines that can be driven without a Dockerfile.)


I just looked at https://github.com/containers/libpod/blob/master/docs/tutori... .

For Debian/Ubuntu users (of whom there are a fair number), whilst podman's install process looks like that, I don't see it taking over from Docker any time soon.

Not to mention Windows/OSX users.

Ease of use should not be overlooked when it comes to developer tooling. Podman can be the most technologically advanced solution, but if it's a pain for developers to get going with it, it's not going to replace Docker any time soon...


I agree that developer experience is paramount. I've actively made choices to choose tooling with better developer experience (Rust and Elixir over C++ and Erlang, for instance), and Podman will need to have Debian/Ubuntu packages eventually. If you're on Fedora, openSUSE, or RHEL, though, Podman is a yum/dnf/zypper install away and works quite well (better than Docker in my experience wrt SELinux issues).


>"CI/CD (that needs to build a container)? Use buildah."

Is buildah an alternative to Kaniko then?


This article contains really no good reasons and spends many words to say that unnamed alternatives exists.

One argument is that there are no new big features. But that is completely normal when things become stable.


Agreed. I came for rational but saw none. There seems to be a lot of hate for Docker I don’t understand. Is it community management? Is it some esoteric tech concern? Can someone more knowledgeable pipe in?


I agree.

Why do people hate Docker, the CLI. Anyone? I mean, it's slowly going OCI, so no vendor lock-in.


The HN crowd likes to gaze at FANG-like corps and treat them like gods. FANGs use Kubernetes? Well then, for sure Docker is passé :) It doesn't matter that for some use cases swift and straightforward solutions like Docker and its ecosystem are better.


For me, Docker's value is being able to easily run some Linux services on Windows. They've got a lot of ready-to-use recipes on their website, so it's really easy to run e.g. WordPress with MySQL. I would spend at least an hour manually setting up a VM with Ubuntu and installing that stuff there. Also I'm using it to run PostgreSQL for development. While technically I can do that just from Windows, I feel safer concentrating all that stuff inside a disposable VM, and it's easy to share onboarding scripts with colleagues. I don't see how RHEL's tools would help me with that. Actually I could replace all my Docker usage with a few shell scripts, but they'd have to be written, and with Docker they are already written, many by experienced software maintainers. Docker will die when popular software discontinues their Docker images.


For me docker is a really convenient way to self host multiple services on one host. In the past I did it without docker and almost every service requires you to add a repo and after you do this a few times you almost always end up breaking your system and having to start again. Docker keeps things clean and easy to manage.


How's docker on Windows working out for you so far?


No problems at all. There was a weird problem when I tried to output a binary file to stdout inside Docker and redirect it inside cmd, which resulted in garbage. But it wasn't appropriate usage, I guess.


Are there any container filesystems that support multiple inheritance, and create diff layers? It would be really nice if I could build a few different things independently, and then merge the final images together. Also if I could only include files that have changed in a new layer, and ignore duplicate files (even if the file was touched, or the timestamp has changed.)

Those are my biggest pain points with Docker at the moment. I have a complex build script that uses multi-stage builds and rsync to achieve this [1], but it's still a bit slow and inefficient. Would be nice if something supported this out of the box.

I've worked on a lot of projects where people just reinstall (and recompile) their entire list of dependencies (Ruby gems or NPM packages), and you have to jump through hoops to set up a caching layer, or maybe install them into a volume as a run-time step, instead of at build time. There should be a much better native solution for this, instead of needing to invent your own thing or read random blog posts.

[1] https://formapi.io/blog/posts/fast-docker-builds-for-rails-a...
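A hedged sketch of that caching pattern for a Node app (file and stage names are hypothetical): copying only the dependency manifest first means the slow install layer stays cached across code changes:

```dockerfile
# Stage 1: install dependencies. This layer is only rebuilt when
# package.json / package-lock.json actually change.
FROM node:10 AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: reuse the cached node_modules, then layer the frequently
# changing application code on top.
FROM node:10
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]
```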


I would say check out buildkit, which is the tech behind "docker build"'s new builder.

I don't know if the Dockerfile format is really suitable for this, but you can now build your own format and Docker can just build it.

Basically buildkit breaks things down into a frontend format (like Dockerfile) and a frontend parser, which gets specified as an image at the top of your file (`#syntax=<some image>`); the parser converts the frontend format into an intermediate language (called LLB), and buildkit passes the LLB to a backend worker.

This all happens behind the scenes with `DOCKER_BUILDKIT=1 docker build -t myImage .`

Docker actually ships new Dockerfile features that aren't tied to a docker version this way.

Actually there are a number of new Dockerfile features that might get you what you need, even if the format isn't all that great, at least it's relatively natural to reason about. Things like cache mounts, secrets, mounting (not copying) images into a build stage's "RUN" directive, lots of great stuff.

This is all officially supported stuff.
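For example, the cache-mount feature looks roughly like this (a sketch; at the time of writing it needs `DOCKER_BUILDKIT=1` and the experimental syntax line):

```dockerfile
# syntax=docker/dockerfile:experimental
FROM python:3.7
WORKDIR /app
COPY requirements.txt .
# The pip download cache persists across builds via a cache mount,
# without being baked into any image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```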

Here's a demo of "docker build" building from a buildpack spec instead of Dockerfile: https://github.com/tonistiigi/buildkit-pack

- buildkit: https://github.com/moby/buildkit

- official Docker docs: https://docs.docker.com/develop/develop-images/build_enhance...

- buildkit Dockerfile docs: https://github.com/moby/buildkit/blob/master/frontend/docker...


Nix[0] does this, sort of. It doesn't merge everything into one tree at runtime, but instead builds every package to a unique path, which gets embedded into downstream dependees. It assembles the dependency tree into a DAG, and uses this to automatically parallelize builds (where safe). Builds automatically run in ephemeral containers, to ensure that all dependencies are accounted for. There are also importers for many languages, and an exporter for Docker images (where each Nix package becomes one Docker layer, up until a limit that you specify).

[0]: https://nixos.org/nix/


Buildah gives you access to the workdir for the build so you can do interesting things like copy or rsync your build artifacts into your container. Check out the example here - https://github.com/umohnani8/Demos/blob/commons/security/sec... and use of dnf install to root dir (which could be replaced by debootstrap or equivalent) https://github.com/umohnani8/Demos/blob/commons/security/sec....


That sounds somewhat like my current project [0]. Currently it's in the prototyping phase, so it won't be released for a while, but it's coming along nicely.

Your use case is one I've been planning to support all along, although I think about it a bit differently - in terms of composition of (partial) images rather than multiple inheritance.

[0]: Dev blog: https://www.narrowband.org.uk/


This is a similar discussion to vinyl vs cassette vs cd vs streaming. All will keep existing in some form or another. Admittedly, some will die out completely (DCC, Minidisc) I’m pretty skeptical this going to be Docker though.

Even more, the argument that Docker is dead in the water because RHEL 8 no longer has a yum repo for it is a bit far-fetched. According to Wikipedia, RHEL is a fairly small % of the server market compared to Ubuntu, Debian and Windows for that matter. https://en.m.wikipedia.org/wiki/Usage_share_of_operating_sys...


RHEL also means Fedora and CentOS, the latter being extremely popular for large-scale deployments. Those stats cover web servers only; I think the real market share of RHEL and CentOS is more like 30-40%.

Podman just makes more sense because it doesn't require a big fat daemon. You launch containers like any other service: with systemd. And you can run podman without root permissions as well, which is a huge win for security.


Well, having worked quite a bit with RHEL and Centos in the past, I found their day to day usage quite different. Maybe that has changed.

More important, I only saw RHEL at traditional Fortune 500 companies. Not sure how their market in FANG type companies is. Probably negligible. F500 is still big of course.


FANG are making their own OS at this point so they're not really relevant.

There are two distributions left in Linux, RedHat derivatives and Debian derivatives. In terms of install base, it's maybe 1/3 and 2/3. In terms of money, expect the other way around, because it's F500 who pay the most for software and they are on RedHat.

RedHat is actively trying to kill docker (along Google and Amazon). RedHat removing "docker" CLI and replacing it with its own tools is a major step toward that. Docker will be de facto dead in enterprise as soon as it stops being supported by RedHat.


Unless these orgs can replace it with something as feature complete and stable as Docker I suspect their customers will ultimately have the last word.


I'll pass on the joke of calling Docker either stable or feature-complete.

There is nothing preventing Red Hat from shipping an alias to their own tools by default, just like java => openjdk.
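In fact, something along these lines already exists; a sketch (the package name below is the one used on RHEL 8 / Fedora, to the best of my knowledge):

```shell
# Per-user shim: make "docker" invoke podman
alias docker=podman

# Or system-wide on RHEL 8 / Fedora via the packaged wrapper
sudo dnf install podman-docker   # installs a /usr/bin/docker script that calls podman
```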

It's not the responsibility of RHEL to maintain or support third-party software. In case you didn't know, Docker stopped shipping with Debian years ago.


Nothing is ever feature complete or stable, I know. All software is shit, right?

The day RHEL customers stop using the docker runtime is when RHEL will stop supporting it - until then, they'll support it. Case in point: java -> openjdk.

Anyhow, this article is just clickbait, and it's been done at least twice a year since Docker's inception. I'm disappointed that this community finds shitposts like this more compelling than the NSA open-sourcing a decompiler, but I digress.

Have fun in your bubble.


Docker carries too much technical debt. They might never recover from that. And if they do, it will be quite a different product in the end.

There are just too many little details that have not been fixed, year after year.


Agreed. I doubt docker is going to die anytime soon. They simply have a much bigger footprint than one distro family.


Nope. Docker's value isn't just its software. It's the support built around it: tutorials, familiarity and common usage, Dockerfiles, the huge Docker Hub, existing setups relying on it, and so on. Articles like this tend to overlook the value of entrenched technology that works well enough.


Funnily enough, Kubernetes ignores pretty much all of that in favor of its own tooling, making experience in running Docker alone mostly worthless.


Based on this, I've spent most of today trying to make minikube work on macOS. But as I'm using DNSCrypt-Proxy, I had major issues making it work without manual steps.

So far, Docker for Mac is a solution that just works.

If anyone has experience with this, I'd gladly like to know how you made it work flawlessly.


The article mentions that Kubernetes using containerd and the OCI is the future, but fails to mention that containerd was developed by Docker, and that the OCI is largely supported by the Docker company.

As of the latest versions of Docker, dockerd now uses containerd under the hood.

I'm not sure how the Docker CLI will exactly die. The post seems to focus on the CLI only, and even calls out that "the viability of the company Docker is outside the scope of the post", but it fails to mention my previous points.


Containerd is a project within the Cloud Native Computing Foundation, which in turn is part of the Linux Foundation. Docker hasn't been directly involved since 2015, and even then, it's arguable whether they ever were (you may be thinking of runc, though, which was donated in 2015).

It's true that Docker uses containerd under the hood, but that's actually part of what the author is arguing. Docker as a technology is a wrapper platform around core industry technologies that they neither own nor control. That means they have to compete as a tooling company, and they have already lost ground there. The more things like Kubernetes and podman join the market, the less required Docker becomes, which means they're going to be more and more at risk of failing.

https://en.wikipedia.org/wiki/Linux_Foundation#Containerd

edit: added link for reference


> Docker haven't been directly involved since 2015, and even then, it's arguable that they've never been

A large majority of even the recent commits to containerd are made by Docker employees.

https://github.com/containerd/containerd/commits/master

> It's true that Docker uses containerd under to hood, but that's actually part of what the author is arguing. Docker as a technology is a wrapper platform around core industry technologies that they neither own or control.

I concede your point, but it's irrelevant and isn't what the author is implying (even indirectly).

From the article: "I do not think there is any reason for us to user docker any more and therefore Docker as a technology and as a company will slowly vanish."

The end users do not care how cgroups are set up or mount points are built. The guts may be standardized, but the Docker toolchain (Dockerfile, docker-compose, docker run) will continue to exist. The "runtime" is irrelevant, and there just isn't a competitor in the "toolchain" arena.
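That toolchain surface is what any replacement would have to match. A sketch of what an end user actually touches (file contents, image name, and port are purely illustrative):

```shell
# An illustrative Dockerfile -- the de facto standard build recipe
cat > Dockerfile <<'EOF'
FROM python:3-alpine
COPY app.py /app.py
CMD ["python", "/app.py"]
EOF

docker build -t myapp .              # Dockerfile -> image
docker run --rm -p 8000:8000 myapp   # image -> running container
```

Notably, alternatives like podman and buildah chose to consume the same Dockerfile format rather than replace it.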

Docker Swarm is the only thing that will vanish.

> The more things like Kubernetes and podman join the market, the less required Docker becomes, which means they're going to be more and more at risk of failing.

Kubernetes is an entirely different use-case. Nobody is arguing Docker Swarm will beat it.

You could say the Docker CLI isn't required with the advent of other tools, but those are incredibly big shoes to fill. Think of all that entails (Dockerfile, docker-compose, the CLI, cross-platform(-ish) support for Windows/macOS).

Also, competition leads to better tooling. Did anyone ever say "Unix isn't required any more, because we have Linux"?

The tooling of the Docker CLI is in a very good spot as-is. The guts are being opened up, which I think should relieve the pressure some may feel to jump from Docker.


containerd is a project started entirely by Docker and contributed to the CNCF by Docker, and Docker is still a primary contributor and maintainer on the project.

Where do you get this information from?

