I’m failing to see the argument here. The author suggests that the advent of viable Docker competitors will inevitably bring about Docker’s death. Why would that be the case? Competition is great, but it’ll be a while before others can match Docker’s maturity and ubiquity. Even then, there’s no guarantee that any of them will be better than Docker, never mind good enough to warrant switching.
What’s wrong with Docker? Why would I want to switch to something else? Are these other solutions really so superior that they warrant the significant time investment that it would take for me to learn how to use them?
The author doesn’t actually answer any of these questions. No arguments against Docker are made, nor are any arguments made in favor of competitors.
You're missing the point of the author. It's not so much that Docker as a technology is bad; there's nothing wrong with it. The point is that Docker, the company, cannot monetize upon the technology, because the moment they start charging for their containerization technology, people will switch to an alternative container technology, especially now that Kubernetes has won the orchestration "wars" and made it so easy to switch the underlying technology.
Personally, I have to agree with the author; it's difficult to see a future for the company bright enough to live up to its $1.3B valuation.
Docker Inc is fine. It would be silly to charge for the actual containerization because that's not really the value of their platform.
Things Docker can and does charge for, and could do very well with:
- Support: Businesses are happy to pay maintenance contracts for fixed LTS versions of Docker which is their EE product.
- Kube: Docker has pivoted their UCP product into a turnkey on-prem Kubernetes distribution. Plenty of room in that space to grow.
- Registry: You wouldn't run random images from DockerHub in production, right? Similar to Red Hat's product in this space, there's a lot of money to be made in having officially supported images. Images with a pedigree, audit trail, CVE reporting, yada yada. Partner with Canonical, since it's way easier to do this when you already have distro maintainers, and you have a solid RH competitor that devs will like more.
- Security: Audit your images that have been sitting around and not updated in ages.
- Hosting: They'll be one of many but there's plenty of space in providing some ergonomics compared to Google/AWS's offerings.
Also: container orchestration is _far_ from a solved problem. Kubernetes is a nice try, but it screams for a better layer above it that would allow devs to deploy services _easily_.
There are plenty of ways Docker Inc. can create and capture value in this space. Their unique value is that whatever product they make, the world will check it out... They "just" need to make it good and figure out the business behind it.
Dockerhub doesn’t even support MFA, not to mention their IAM is all or nothing (there’s only 2 roles). I don’t think they’re focusing on the things that they should be.
Yes. As a response to a comment talking about enterprise support contracts as a main source of revenue for Docker, I think enterprise checklist items are entirely valid items to focus on.
Well this is the inherent push-pull of the industry, isn't it? The ones with money generally have insane compliance requirements (because they have money, they have more risk); the places without money care more about getting things done (since they're trying to make money...).
If it weren't so, we'd all just sell our hot new WhateverAsAService to banks straight away, right?
Ubiquity does not equal value. Docker doesn't do anything you can't do with lxc tooling and an object store (registry). That's the benefit: containers are free, the orchestration is free, and the registry tooling is free and easily replicated.
Mongo equally so (JSON in PostgreSQL). These valuations are fantasy.
You're spending those scaling dollars regardless (developers with devops knowledge, infrastructure/SRE folks). Better to spend it on competent staff than a company lighting VC money on fire, no?
> Better to spend it on competent staff than a company lighting VC money on fire, no?
False dichotomy though. You either acknowledge that getting a vendor to do that for you also implies competent staff, or you acknowledge that hiring "competent staff" is not like flipping a switch and requires training, ramp-up time, a few wrong hires, getting managers to guide the teams, etc.
I'm not sure which way is better, but I don't think the industry has decided either :)
I think it's a bit more complicated than that. Very skilled developers who know this tech won't jump ship to borderline-broke startups, which could easily just get started with Docker with less-than-'rockstar' devs.
Microsoft's containerization strategy heavily depends on Docker (they've got a licensing deal so that Windows Server can use Docker EE for free), so they might well pay good money to buy Docker.
Didn't IBM just buy Red Hat for $34 billion? Can you imagine a future where Amazon acquires Docker to optimize it to run on AWS, and perhaps perform a bit worse on GCP? Any small advantage you could get in this space would translate into large gains, given how ubiquitous Docker is and how large AWS is.
Fun fact: if you use Red Hat, you have to use Docker EE. You don't even have the option of getting CE and taking responsibility for Docker support yourself.
"Red Hat" as in RHEL specifically, or are you meaning any of the family (eg CentOS)?
Asking because I've not hit any problems using Docker CE with CentOS 7. Well, aside from general bugs (etc). But nothing seems to force the use of EE instead of CE.
Docker CE is officially supported on CentOS (well, it's not supported because it's CE, but it's packaged, with repos, an install guide, etc.), but only EE is packaged for RHEL. Red Hat also has their own docker package like many other distros, so you can still get it for free easily, but you have to get it through them and wait on them for updates.
If you're using RHEL why wouldn't you default to using Red Hat's build of Docker? I definitely prefer it to the official builds. Red Hat in general seems very annoyed at Docker and carries quite a few QoL patches like being able to add registries to search.
Understood, and I'm fairly certain that a large corporation that uses and markets Docker extensively will look to acquire it sometime in the future. Docker's ubiquity is exactly what makes it valuable.
Could be any of the FAANG companies but I posit Microsoft since they noticeably try to make Docker on Windows a seamless experience and also push Docker within their cloud offerings to modernize legacy apps towards Azure. They've already purchased a large community-oriented company (GitHub) and have acquired companies at a surprisingly high price (LinkedIn).
And considering almost all Docker images are based on and continuously updated from a GitHub repo, a Docker acquisition makes a lot of sense. GitLab already has its own container registry.
> The point is that Docker, the company, cannot monetize upon the technology, because the moment they start charging for their containerization technology, people will switch to an alternative container technology...
I see the thought process but disagree with the premise that they need to charge for their container technology directly.
For example, GitHub does not charge for git, but instead for the convenient layers that they add on top of it.
Another example: Google does not charge for Kubernetes, but you can buy support, which every enterprise company wants. And since GKE happens to be the most convenient way to get a K8s cluster rolling in the cloud (for most circumstances), and now also on a box with GKE On-Prem, many will choose that path, so Google still gets their money, just from product / services / support on top of the free core.
They can also monetize the "fallout paths" — turns out you're in over your head running all of that K8s stuff yourself? Come pay us more and use our K8s PaaS instead! Not sure if they actually have one yet, but consider something like GKE Serverless here.
Enterprise features and support around running containers in production are worth a lot of money.
Docker can continue charging for all of its porcelain layers on top like Docker Hub and Docker Enterprise and make plenty of money off of big fish.
They could also monetize by being acquired by Microsoft, Google, or Amazon.
1. GitHub was struggling badly. They had lay-offs, etc.
2. Google still makes the lion's share of its money from search/ads. Google Compute is struggling and being subsidized by search/ads.
IMHO, Docker's only exit strategy is to sell. They don't have enough time left to grow an organic business. Silicon Valley money always wants to be paid back, sooner rather than later.
The layoffs and new CFO at Docker, IMHO, are all about cleaning up the finances so that a sale is possible at a good price.
Their last round of investment barely averted a down round. The value bump was minimal and the investors are mostly from a bank in Brazil that likely doesn't understand exactly what they bought into (a lot of uneducated money out there right now because there is no place to put it). None of the original investors participated which shows they don't believe in the company anymore.
Their most valuable asset is Windows support. Hence, the most likely acquirer would be Microsoft. I'll bet a paycheck they sell for somewhere between $1.5B and $2.5B. Keep this thread for posterity :-)
It's painfully obvious to anybody that has been in this game for a while...
1 - Having too many people is more an indicator that they hired too fast or more than they could sustainably afford rather than indicating anything about their inherent monetization strategy and approach. Every company has a ceiling somewhere, even if it's getting acquired by Microsoft for $7.5B. That's a pretty nice ceiling to have.
2 - That seems fine and besides the point... Google is massive. Google Cloud (excluding the half that comes from G Suite) brings in $2B per year in revenue. That might only be a few percent of Google's overall, but a few percent of $80B is still a massive number.
There are actually a lot of issues with Docker around breakage and stability. That's alluded to in the article.
Docker itself is in trouble because they got the wrong use case from the start. They completely missed out about orchestration.
Containerization requires both having containers (Docker images) and deploying them on a fleet of servers (Kubernetes orchestration). The latter is where the value is.
> Docker itself is in trouble because they got the wrong use case from the start. They completely missed out about orchestration.
That's actually how they started -- they were a PaaS called dotCloud before they were Docker!
This was before the term "orchestration" became common, but the point was that they were running apps in multiple languages. They were like Heroku except they supported more than one language (back when Heroku was basically Rails apps).
They didn't get much traction and had to pivot to Docker. I think the underlying reason is that writing a PaaS / distributed OS like Kubernetes is extremely difficult, and they were spread too thin. They didn't really nail a single use case like Heroku nailed Rails apps. They were trying to run everything and ended up running nothing.
----
This "rhyming" of history makes me smile, and makes me think how important timing is. dotCloud was the right product, done too early and done poorly. Docker was arguably the wrong product at the right time! I say "wrong" because I never thought it was a sustainable product or business, but it did come about at a time when people really needed a quick and dirty solution.
And now the surrounding ecosystem has changed drastically and I agree with the OP that Docker may be left out of it.
----
Another thing I find funny is that App Engine was already "serverless" 10+ years ago... you wrote Python apps and uploaded them as .zip files (web or batch processing). You didn't manage virtual machines. So somehow they were also a bit early, and maybe the product was lacking in some ways as a result.
I'm not sure that dotCloud and App Engine were the right products, though.
They have shown that a fully managed cloud solution is not the answer. Barely any company could use it, let alone migrate existing software to a whole new platform. It's limited to a very small niche.
Docker and Kubernetes are standalone products. Companies can use them internally and make it work with what they already have.
I read the finer point of the argument to be that Docker got the layer of abstraction wrong for monetization. That Kubernetes (which orchestrates containers) is the tool at the layer of abstraction where a company could provide a product that has enterprise demand similar to Red Hat. And that Docker is a replaceable piece of technology in the market of managing cloud architecture.
The story of Docker, Inc should be read as a cautionary tale to those who think that every random bit of open source should be “monetized” by seeking rent for its continued use and support.
Exactly, when will Silicon valley investors understand that Open Source:
1. Doesn't grow revenue at the speed they want for their returns
2. Isn't "free marketing" (dear God I have seen this one too many times, CTOs even write it out in blogs sometimes)
3. Only really makes sense when you get out more than you put in, aka is community driven. Community-driven open source is really the only profitable open source.
What's wrong with not using Docker? Outside of the cloud/SOA/webscale sector, people have a hard time simply comprehending the drama.
I have not seen any justification for using containers on any scalable system on real hardware, for example.
In "ad-farm" world, running ad farms on pure hardware with netboot provisioning has been the "gold standard" since time immemorial.
A few points:
1. Netboot with root at tmpfs takes less time to boot, seconds over 10G (most of the time is spent on POST and DHCP actually.)
2. Netboot images can be stripped to the barest minimum of life-sustenance. Less software running means fewer problems.
3. Less attack surface for hackers.
4. Hard lockup proof - hardware watchdogs are more or less foolproof.
5. Linkup to the real network is as simple as it gets, you can use broadcasts for self-configuration and service discovery on the network.
6. If you deal with RDMA, well, you will already be spending most of your effort just getting software to work as advertised on the bare system, and the additional trouble of containerisation will not be worth dealing with.
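For the curious, the provisioning side can be as small as a dnsmasq config serving DHCP + TFTP + PXE; a rough sketch (addresses and paths are made up, and the stripped kernel/initramfs that puts root on tmpfs is assumed to exist already):

```bash
# hand out leases and point machines at a PXE loader served over TFTP
cat >> /etc/dnsmasq.conf <<'EOF'
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
EOF
systemctl restart dnsmasq

# /srv/tftp holds pxelinux.0 plus the stripped kernel/initramfs;
# boxes PXE-boot, pull everything over the network and run with root on tmpfs.
```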
Former Amazon engineer here. Despite the downvotes, you are making an excellent point.
Amazon, among other FAANG companies, never needed containers shipping entire bundled OSes.
Deploying a "base OS" (using PXE or netboot) can be done very reliably on both hypervisors and VMs.
A simple build system can then generate traditional packages and push them into VMs allocated for a specific product.
It provides better security isolation and the ability to receive security updates on OSes and also on dynamically linked libraries.
Building and deploying new versions of your product is also much faster, especially at large scale: only your application package is rebuilt and deployed.
That's technically what companies with refined processes do by using "scratch" docker images that only contain the binaries needed for the application and nothing else.
The fact that many other containers run an entire OS is because it's easier to get started, or there are runtimes needed, or people just build and deploy in the same image.
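For anyone who hasn't seen it, a "scratch" image is typically built in two stages so the final image holds nothing but the binary; a minimal sketch (image and path names are illustrative):

```bash
cat > Dockerfile <<'EOF'
# build stage: produce a static binary
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# final stage: ship only the binary, no OS at all
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t myorg/myapp:latest .
```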
Even when you write a lot of software, environments always drag in existing tools and infrastructure (aka other open source). 99% of the time it's easier to consume pre-built SME knowledge from a Linux distro. #HailTheMaintainers
Containerization is a cost effective alternative to building and running immutable VM images, which is a best practice in terms of CD. Another way to put it is: don't build in production, instead build in CI, test your image and then just deploy it on your environments (you probably should have at least two: staging and production).
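In CI terms that amounts to something like the following (a sketch; the registry name and the $GIT_COMMIT variable are assumptions about your setup):

```bash
# build the image once, tagged with the commit that produced it
docker build -t registry.example.com/myapp:"$GIT_COMMIT" .

# test the exact artifact you intend to ship
docker run --rm registry.example.com/myapp:"$GIT_COMMIT" ./run-tests.sh

# push the immutable image; staging and production deploy this same tag
docker push registry.example.com/myapp:"$GIT_COMMIT"
```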
You kinda missed the point that Docker is more about packaging applications than running them. Running apps is easy. Packaging them in such a way that they can be run without hassle is hard.
The few companies I dealt with that used this had their own tooling, but it was nothing special.
CI picks up your image build, then commits a record to a DB from which the DHCP server looks up a boot image for a given MAC; then it either SSHes a reboot command or uses an IPMI reboot. Voila.
Today, some server motherboard firmware can boot images off HTTP or a VLAN, and can talk to the boot server itself over HTTP.
That sounds like a pretty solid system, but it’s not at all what the author is advocating. The author states that Docker is destined to be replaced by other containerization frameworks. The functionality differences won’t be as stark as what you’ve described, so this isn’t really a good comparison.
That being said, I’d agree that netboot + tmpfs has a lot of advantages; unfortunately, it’s not always practical, especially at smaller scales. Containers are convenient and don’t require any special hardware accommodations.
One reason I see as an open source developer is low-touch adoption and environment control. I'm done with maintaining build support on many Linux flavors (Red Hat from the past, Arch Linux from the ghetto :) ); use the dockerized version instead. This might be my narrow corner of the world, but users have driven me to dockerize much of what I code, and my support hassle has dropped, a lot.
Exactly, we have POCed Docker and realised we have a solution for every aspect that it is trying to address. I found one and exactly one use case for Docker, running CI/CD workload.
I see Docker as a framework. It's the "get me from nothing" to a fully featured solution for building, running and distributing containerized applications. And it happens to do it for anyone running Windows, MacOS or Linux.
It's the same reason why I would want to use a web framework. I don't necessarily want to write and string together a bunch of super low level things. I want to start at a higher level of abstraction so I can focus on writing the applications I care about.
As an end user, I don't necessarily care too much about OCI or the runtime spec. I care about running docker-compose up and having everything work.
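That whole workflow really is just this (a toy example; the images and ports are arbitrary):

```bash
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
EOF

docker-compose up
```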
The article is about the company, not the product. Which bits of what Docker provides would you pay for, given free alternatives exist now for basically all of it?
I don't think I would pay for any of the core features, but that's mainly because I've been using Docker for 5 years with the current strategy of just about everything is open source. You pay if you want support and other enterprise features. Private hub repos are also reasonable to buy for most people because it's priced pretty competitively with rolling your own private registry on a cloud provider (ie. $7 / month for 5 repos isn't much more than $5 / month for the cheapest DO droplet but now you need to manage that machine yourself which is not worth the hassle for $2 less).
I would pay for an ecosystem of images that are vetted for security issues. They could even build an industry around security and privacy compliance and certification, which risk-averse companies (and those who want to do business with them) would pay for.
To be fair, the article is about both. To quote "in my humble opinion the days for Docker as a company are numbered and maybe also a technology as well."
The core argument is that you wouldn't care if you were using something else. I use Kubernetes. That's what I care about. If I'm on AKS, GKE, EKS, any other managed Kubernetes service, the only time I ever even see the word Docker is in the filename of the Dockerfile. They could switch their default runtime to anything else and I might not notice.
> Are these other solutions really so superior that they warrant the significant time investment that it would take for me to learn how to use them?
This is the exact same problem that Oracle has with Java, and MongoDB now has with their product. You can't make money off of an API, and implementations are very rarely a differentiator. Another product will come along (actually, several already exist) that says "we support dockerfiles, we emulate the docker api, swap us out". There's nothing new to learn. The only thing that product needs is a benefit over normal docker, and if you think Amazon, Google, or Microsoft can't add value through more native integration with their cloud services, you're crazy.
Docker isn't going anywhere, but they've become commoditized far more quickly than their valuation would suggest.
One argument could be that their moat is mostly brand. And, increasingly, other brands are working around them. Like Kubernetes. They already offer non-docker runtimes. They could, at some point, offer and promote their own packaging tool too. People might use a K8S tool solely because it's the "default setting".
The OCI standards make it so that it doesn't matter if you switch or not. Your images will work anywhere with any tool (aka Cloud Providers, Kubernetes, Podman, Buildah, etc.) Everybody has OCI compliant tools, even Docker - Docker founded the OCI.
"The Kubernetes project is excited to announce kocker, a new docker compatible image management tool. Kocker will support features you've been asking for, like non-priv-by-default containers, enhanced performance in k8s..."
Docker is drifting towards being a standardized deployment unit. This is especially apparent with the advent of Alpine Linux-based images, or even more so with distroless.
With these changes going on, docker becomes just a fancy zip file, and since there is now a committee to design a standardized container, there really is nothing left for the Docker company.
Kubernetes IMO just used Docker's popularity to gain its own, but at this point the actual docker itself is no longer essential.
Except it hasn't been working; with few exceptions we still deal with dependency hell / isolation issues in system package managers. Surely if it worked as well as you posit we would have long ago had a Kubernetes built upon apt or yum?
I am not sure if you have experience working with one of the leading tech companies but most of the problems you mention were solved long time ago, before containers were a thing. Apt and yum are not the right set of tools for this job, Amazon for example had an internal build and deployment system that solved dependency problems as well as environment variables, configuration management and additional aspects that even Kubernetes does not solve today.
> I am not sure if you have experience working with one of the leading tech companies but most of the problems you mention were solved long time ago, before containers were a thing.
I'm not sure what you're advocating for exactly, but the systems that the "leading tech companies" originally built were largely VM orchestration systems that required humans to pack software packages into VM images. This was suboptimal so Google et al pioneered containers and others followed suit. This is exactly what gave rise to Docker and Kubernetes. This seems a lot like saying, "ships solved the transportation problem so planes are not useful".
Never mind that most people don't have the resources to build, maintain, and operate their own orchestration systems.
> Amazon for example had an internal build and deployment system that solved dependency problems as well as environment variables, configuration management and additional aspects that even Kubernetes does not solve today.
Build tooling is an important but orthogonal concern (you can use Bazel and friends to build container images). I'm not sure what you mean when you say Kubernetes doesn't solve those problems today.
To be clear, I'm not a Docker/K8s fanatic; they can be frustrating at times and something better will surely come along and replace them eventually. However, they're a lot better than what existed before (which to be clear were not off-the-shelf solutions but rather patterns that a large, wealthy, competent organizations could use to build their own solution).
I am not advocating for anything, just pointing out that the features attributed to Docker existed before.
> but the systems that the "leading tech companies" originally built were largely VM orchestration systems that required humans to pack software packages into VM images.
Absolutely not. Amazon never had such system. You are confusing Google with all leading tech companies probably.
> Build tooling is an important but orthogonal concern
Is this why the same features set claimed by both things?
Again, what is the problem you are trying to solve with Docker/K8?
> I am not advocating for anything, just pointing out that the features attributed to Docker existed before.
No one contends that they existed before in isolation or even as a proprietary assemblage; the contention is whether or not they were available as an off-the-shelf tool or a simpler assemblage that is affordable to companies who are not "leading tech companies".
> Absolutely not. Amazon never had such system. You are confusing Google with all leading tech companies probably.
Why do you think "Amazon never had such a system" is a definitive rebuttal to "most leading tech companies had a system like X"? In any case, tell me about the systems that existed at leading tech companies that were comparable to Docker and accessible/practical for non-industry leaders...
> Is this why the same features set claimed by both things?
I'm not sure how to parse this, but it sounds like you might be implying something like "because Docker has a naive image building story, its primary purpose is to be a software build system".
> Again, what is the problem you are trying to solve with Docker/K8?
Docker solves for standard software packaging and distribution; kubernetes solves for standard, efficient software orchestration. Of course you can throw really smart, well-paid engineers at the problem to devise, build, maintain, and operate a bespoke system that accomplishes a similar end.
I used the Amazon system a lot and saying that "Apt and yum are not the right set of tools for this job" is incorrect.
APT and yum simply install packages assuming the dependencies resolve correctly. The [unnamed ;)] system could be modified to generate a set of .deb and .rpm files (with locked versions) to deploy a product.
It's still working very well for Amazon, and the company evaluated alternatives and chose to stick with packages because they have the right granularity.
A shame, really, but not a huge surprise that products like Swarm fell by the wayside. I feel it could've occupied a nice middle ground for teams that didn't need the full capabilities of Kubernetes (or the overhead of supporting it), even though I think K8s is an exceptional project.
I chose Swarm as a pragmatic choice in an enterprise environment that didn't have any prior experience with Docker at all, really, at the operational level. We had to support financial models, often many different versions of the code at the same time, on top of the usual stack of applications to go with that. The choice was a "success", albeit one muted by the wacky networking layer they use. Compound that with RHEL's older kernels, and we had to deal with oddball issues like iptables/arp table getting out of sync with what's actually running, resulting in connectivity issues. And don't get me started on removing and redeploying a stack; that would occasionally wedge things so badly we had to cycle the docker daemon.
Still, a shame. The gap between "Look, I wrote a compose file" and running something on a small cluster is tiny, and that was its main strength, even if it did suffer from some serious heisenbugs. Why they decided to add and remove features between versions and do their damnedest not to make a compose file 100% "forwards compatible" with Swarm is another mystery.
I've got a customer that uses docker to make binary distribution easier. All runs on a single machine and a docker-compose file is all they need. I don't think they'll ever need kubernetes unless they change the way they sell their products.
Docker is useful for developers because we pull containers and run them on Ubuntu and OSX without having to install anything. Much easier than Vagrant.
Swarm was a pretty big mistake. I think based on just relative resource investment compared to Kube, it ought to have been obvious that it’d never be relevant if it wasn’t extremely specialised.
- The ops team are in a different country, and are wedded to very old-fashioned views of administration ("Automation? But I like manually running commands from a runbook!")
- You work with a team of people who are quants/actuaries/scientists/engineers but not professional developers, but you want them to have a turn-key environment so they can Get On With It. When they need new python packages or god forbid upgrade pandas or something else, there's a full CI chain that'll make sure that what they do here also works there.
- Swarm is (from personal experience) easy enough to teach people who don't know anything about Docker. You can show them how to query the state, modify it, look at logs, etc. (the handful of day-to-day commands is sketched below), all without the hassle and overhead of configuring and running K8s, even though it will always be my #1 choice for tech-literate orgs. Swarm, for many, including myself, was a pragmatic choice -- 80% of the immediate benefit of container orchestration with 20% of the cognitive overhead for the chaps in another country who had to maintain it if things went south.
I made a tradeoff; keep in mind, it's not always the case that the best technology wins. Running <Technology X> is all well and good, but if you cannot keep it running perfectly, or it results in unacceptable downtime due to operator error, then that reflects poorly on the architect/lead in charge of picking the tools.
I am more of a mind to make sure that I can solve the task(s) that I am given, such as it is, with the resources available (people, knowledge, time, etc.). That inevitably means tradeoffs. In a parallel universe I would have used K8s instead, as I think it's exceptional and far superior to Swarm. However, with the limited resources available, I chose Swarm, and for all its faults it's running fine.
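For reference, the day-to-day Swarm operations I mentioned above boil down to a handful of commands (a sketch; "myapp" is a placeholder stack name):

```bash
docker stack deploy -c docker-compose.yml myapp   # deploy or update the stack
docker stack services myapp                       # what's running, and how many replicas
docker service ps myapp_web                       # where tasks landed and their state
docker service logs -f myapp_web                  # tail a service's logs
docker service scale myapp_web=4                  # change the replica count
docker stack rm myapp                             # tear it all down
```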
I agree with your pragmatism (and admire it). I would only urge you to add a couple of tools to your toolbelt for analysis:
1. Open source politics aka is the project viable?
2. Where's the money coming from? Aka, what products/companies build solutions off the tech?
3. Look for growth, not survival. If a company is not growing, it is dying.
These extra three test "gates" help me select what technology I will use, learn, and bet my career on....
Keep in mind, when this solution was adopted Docker were still wedded to Swarm. Even if they stop caring about it -- as they pretty much have now -- we have a system that works at rest. Two years on, the team(s) that handle the production support and operations are more comfortable with Docker & co, because of this. Not to forget, this is a very large (and risk-averse) enterprise. You don't always get to pick whatever you like!
Swarm had and still has a huge advantage over Kubernetes: its simplicity. I wouldn't call Swarm "simple" but it's "tolerably complex".
Unlike Kubernetes, Swarm deployments are actually maintainable by a small team (or even a solo sysadmin). If there is a bug (there always are), you can actually diagnose or even debug it without the feeling that you're wrestling with an 800-lb gorilla.
Similar story for me. Pretty much the only time I am disappointed with Docker Swarm is during monthly host OS patching and the networking gets messed up and we have to restart daemons and undeploy/redeploy stacks.
I use Docker Compose for running all the things for my dev environment (RabbitMQ, Postgres, a few others), and it works really well. Well, now it does - I had no end of networking troubles with it in the past, on both Linux and Windows.
I love how easy it is to spin up something new with a simple YAML file, but I've no idea what it would be like in production - how are rolling updates handled, for example?
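From what I gather, when the same compose file is deployed to a swarm, rolling updates are driven by a `deploy:` section; a rough, untested sketch:

```bash
cat > docker-compose.yml <<'EOF'
version: "3.4"
services:
  web:
    image: myapp:v1
    deploy:
      replicas: 3
      update_config:
        parallelism: 1       # replace one task at a time
        delay: 10s           # pause between batches
        order: start-first   # start the new task before stopping the old one
EOF

docker stack deploy -c docker-compose.yml mystack
# a rolling update is then just bumping the image:
docker service update --image myapp:v2 mystack_web
```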
I agree, there was a gap, especially for about 2 years. But, even when that gap existed, it was pretty clear that the cloud providers (and other software companies) would build/enable a one click Kube distros.
As the cloud providers adopted Kube, I knew Swarm would lose. Now the answer is "use an API"; don't run it locally if you don't have to.
Not a containers expert and I never used Docker Swarm, but I will still go with Nomad for container orchestration over Kubernetes anytime I can; it gets the job done with minimal fuss.
See my three rules above. Engineering is a zero-sum game, even in open source. Nomad has a very dark future. Single vendor, no community, very little usage, no competitors adopting the tech. If your competitors don't adopt your tech, it's dead (basic open source 101)...
I never understood why "modern" tools like Docker have to provide everything: networking, firewall, repository, you name it...
I understand somebody wanting to type "docker run xxx" and have everything set up automatically, but if you're running anything but default networking and actually care where the xxx image comes from, it's gonna fail miserably. Coming from the VM world, I found it much easier to work with the macvlan interfaces that lxd supports, for example - the container gets its own interface and IP address and all networking can be prepared and set up on the host instead of some daemon thinking it knows my firewall better than me...
Containerization has obviously been around for ages before Docker came on the scene, yet its adoption has dwarfed that of other similar solutions. Why is that?
People have to start with a technology somewhere, if they get frustrated with the process, often they'll discount it and move on, that first moment of success of "ohh that was easy" is really important.
Docker has that. Sure it hides a wide variety of complexity under the skin that, in more complex deployments, can come back and bite you, but for people getting started it's much easier than the alternatives.
The "App store" like nature of Docker Hub is another part of that, the ability to easily find base images that you can use to prototype solutions is super-useful as a beginner.
Of course once you've been using it a while, you might have questions about image provenance, vulnerability management etc, but those typically aren't part of the initial evaluation.
>>> Containerization has obviously been around for ages before Docker came on the scene, yet its adoption has dwarfed that of other similar solutions. Why is that?
Containerization was mostly limited to Solaris and BSD. It took a while to get to Linux, in the form of Docker.
Well Linux had containerization before Docker came along. The initial release of LXC was in 2008, a decent distance after Jails/Zones but still 5 years before Docker. OpenVZ was even earlier than that, starting in 2005
Even earlier was Linux VServer, which was 2001 :) So yeah, Linux containers have been around in one form or another for over a decade before Docker helped popularise things.
Yeah, it feels too monolithic. Just to showcase it can run Hello World in one "docker run" command I guess?
Another thing people use Docker for but shouldn't is application packaging. Using Docker you build one fossilized fat package with all the OS and app dependencies baked in. Then one day, after years of using that Docker image, you need to upgrade the OS version in the image, but you can't replicate the app build because you didn't pin exact library versions and the global app repository's (pip, npm) later version of a package is no longer compatible with your app.
Application packaging is better done in proper packaging systems like rpm or deb (or other proprietary ones) and stored in the organization's package repository. Then you can install these rpm packages in your Docker images and deploy them into the cloud.
The difference between OS dependencies and app dependencies is clear when looking at the contents of actual Dockerfiles. OS dependencies are installed leveraging the rpm or deb ecosystem. Apps are cobbled together using a bunch of bash glue and remote commands to fetch dependencies. Why not use proper packaging for both OS and apps and then just assemble the Docker image from that?
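Concretely, the Dockerfile then shrinks to "install the pre-built package"; a sketch with made-up package, repo and path names:

```bash
cat > Dockerfile <<'EOF'
FROM centos:7
# point at the organisation's internal package repository (made-up URL)
RUN curl -o /etc/yum.repos.d/internal.repo https://repo.example.com/internal.repo \
 && yum install -y my-app-2.0-1 \
 && yum clean all
CMD ["/opt/my_app/bin/run"]
EOF

docker build -t registry.example.com/my-app:2.0 .
```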
I don't think I ever came across a developer who made a RPM/DEB package. I'm not sure I came across a devops/sysadmin who made a RPM/DEB package in recent years.
I wouldn't waste my time learning that if I were you. Software like python needs pip to build, they can't be built with the OS package manager alone.
> Software like python needs pip to build, they can't be built with the OS package manager alone.
Well yeah, OS packaging formats are for putting together your build artifacts. You can use whatever to build the software.
If you want to do it 'right', you would use pip to build packages for any runtime dependencies missing from your OS repos and then package your application. I swear it's much easier than it sounds.
But nothing says you have to do it like an OS maintainer, though. You can also just vendor all your dependencies, slurp them all up in an RPM and install it in /opt.
Well my team does that for one. We use Python packaging ecosystem, specify Python dependencies using standard tools like setup.py, requirements.txt and pip. All Python dependencies are baked into a fat Python package using PEX format[1]. Also tried Facebook's xar format[2], without success yet. What matters is to have a statically linked dependencies packaged in one executable file. Like a Windows exe file.
Then you proceed with bundling in higher-level OS dependencies, because each app is not just a Python package but also a collection of shell scripts, configs, configuration of paths for caches, output directories, system variables, etc. For this we throw everything into one directory tree and run the FPM [3] command on it, which turns the directory tree into an RPM package. We use an fpm parameter to specify the installation location of that tree, to /opt or /usr or elsewhere.
The way to bundle it properly is to actually use two rpm packages linked together by rpm dependency. One for the app and the other for the deployment configuration. The reason is you only have one executable, but many possible versions of deploying it. Ie. depending on environment (dev, staging, prod) or you just simply want to run the same app in parallel with different configuration.
e.g. one rpm package for the app executable and static files
my_app_exe-2.0-1.noarch.rpm
and many other related config rpms
my_app_dev-1.2-1.noarch.rpm (depends on my_app_exe > 2.0)
my_app_prod-3.0-1.noarch.rpm (depends on my_app_exe == 2.0)
You then install only the deployment packages and rpm system fetches the dependencies for you automatically.
There are other mature places who use similar techniques for deployments, for example [4].
All of this then can be wrapped in even higher level of packaging, namely Docker.
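To make that concrete, the build boils down to roughly the following (a sketch; names, versions and paths are made up for illustration, and exact flags vary by tool version):

```bash
# 1. freeze the app plus its Python deps into one executable PEX file
pex -r requirements.txt . -e my_app.main -o build/opt/my_app/bin/my_app.pex

# 2. turn the directory tree into the "exe" rpm
fpm -s dir -t rpm -n my_app_exe -v 2.0 --prefix /opt/my_app -C build/opt/my_app .

# 3. a per-environment config rpm that depends on the exe rpm
fpm -s dir -t rpm -n my_app_prod -v 3.0 -d 'my_app_exe = 2.0' \
    --prefix /etc/my_app -C config/prod .

# installing the deployment rpm pulls in the right executable automatically
yum install -y my_app_prod
```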
I fail to see how RPM/Deb would be better at this (and I say this as someone who has a lot of experience in both). You will still need to pin dependencies with RPM/Deb, you still have to deal with OS release updates, and in the end it's just a matter of ensuring you frequently update and test your upstream dependencies.
You want to separate app and OS dependencies because in the future you could need to update the system, which would mean rebuilding the Docker image. By then it can happen that you are no longer able to reproduce your app build. But when you have the app package separate in some repository, you can just create a new Docker image and reuse the old app package without rebuilding it.
You can use macvlan (or ipvlan) with Docker, it's built in.
But yes I agree (as a former Docker Inc employee and current maintainer of moby which is what Docker is built from), the default networking in Docker is often problematic... firewall handling is annoying, at least from a sysadmin perspective.
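For anyone who hasn't tried it, the macvlan setup looks like this (a sketch; the subnet, interface and image are just examples):

```bash
# create a macvlan network bound to the host NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan

# the container then gets its own address directly on the physical LAN
docker run -d --network lan --ip 192.168.1.50 nginx
```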
That's because Docker was really designed to replace Vagrant, and that shows.
The use case is an individual developer who wants to get a test environment up and running quickly without needing to understand how IP routing works. It's great for that use, not so much for workloads in production.
Yeah exactly...and those developers who just want to run something quickly are not really your ideal customers. People using it for workloads in production are...
Docker provides NAT, but what do you mean it provides a firewall? Quite often I see people deploy Docker without a firewall, and at some point they notice that they have exposed services to the internet that they didn't want to.
> When people understand that they can easily make the choice to swap out the container runtime, and the knowledge is out there and easily and readily available, I do not think there is any reason for us to user docker any more and therefore Docker as a technology and as a company will slowly vanish
How about MySQL as a counter argument? It's always been feature-weak compared to PostgreSQL, with a less business-friendly license, and it's now owned by one of the software companies most despised by technologists. But it's probably still the default relational database people pick up. Habit, defaults and a massive installed base can go a long, long way.
> It's always been feature-weak compared to PostgreSQL,
That's not really a meaningful statement to make. A singular "better" seldom exists outside children's discussions. MySQL for a long time had a more compelling multi-master situation, for example, and was simpler to operate as there was no need to schedule vacuums and fewer reasons to stop the database during normal operation.
But none of that is really relevant for choosing a database. It's an enabler technology, just like an operating system. You seldom set out to use a certain database, you set out to use a certain application and then choose a database and an operating system that fits the application's use case.
None of that is true for containerization technology. Any requirements come instead from the infrastructure layer, and the application is made to fit those instead of the other way around. Other than that, you are free to choose whatever has the least complex operational requirements. If I was an investor in Docker Inc. I'd carefully consider what that means.
A container engine is just a wrapper around a couple of Linux APIs, mainly cgroups, namespaces and iptables/BPF; that's where most of the hard work is done. A good example of how trivially it can be implemented is this: https://news.ycombinator.com/item?id=9925896
Now, implementing a DBMS with even a basic SQL query language is not a trivial job.
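To illustrate how thin the wrapper is, you can get a crude "container" with nothing but stock util-linux tools (a sketch; ./rootfs is any extracted image filesystem):

```bash
# grab a root filesystem from an existing image
mkdir rootfs && docker export "$(docker create alpine)" | tar -C rootfs -x

# new PID/mount/UTS/network namespaces plus chroot = a bare-bones container
sudo unshare --fork --pid --mount-proc --uts --net chroot rootfs /bin/sh

# resource limits are just cgroups underneath, e.g. (libcgroup tools):
#   cgcreate -g memory:demo && cgset -r memory.limit_in_bytes=256M demo
```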
It's very likely that most of the challenging security stuff will continue to move down into the kernel itself. It's important to remember that a container wasn't really a single unified concept in the kernel until very recently, as they've gained popularity; instead, they were an amalgamation of a few different capabilities in the kernel.
Perhaps you caught news of something I haven't seen, but AFAIK, "container" is still defined in user space. Talking to Eric Biederman, that's what the kernel team wants: people to experiment in user space, remixing kernel tech together...
Until a few years ago PostgreSQL was dramatically slower than MySQL. As for features, it depends how you look at it. In terms of what it can do with queries it was more powerful than MySQL, because MySQL just stopped at SQL-92. PostgreSQL also lacked operational functionality. For example, in older versions it was your responsibility to take care of shipping WAL files to standby nodes. PostgreSQL, unlike MySQL, has a rule that if they implement a feature they want to do it right. It is better to not have the feature than to have it and risk data corruption. MySQL on the other hand starts with an MVP so they can put a check mark next to it and then later work on improving it and fixing bugs.
In the past years, PostgreSQL not only caught up but also surpassed MySQL in terms of performance (I think MySQL might still be faster if you mostly read the data, but I'm not sure) and operational features. At this point you not only have something as fast (and actually faster for write workloads) but also with more features and much, much better reliability. So it is a no-brainer to use it right now, but earlier you had to choose what was more important to you.
> It's always been feature-weak compared to PostgreSQL
For many years MySQL had a much clearer multi-master replication story than PostgreSQL. For most use cases, other features are secondary to replication.
For most use cases, other features are secondary to replication.
I'd say the opposite - for most use cases, replication doesn't even figure into it. I guess we both need to find more solid evidence than our own opinions.
Replication is not the best way to avoid data loss. And it's not the only way to avoid downtime (e.g., the AWS Aurora PostgreSQL servers I run don't use the normal pg replication).
And there are a lot of systems for which downtime really is acceptable, at least for the amount of time it takes to restore a backup. Not everything is a front-end, mission critical system. I manage plenty of systems that are only used in a back-end batch context.
So I think it's crazy to suggest that how well a db engine's replication works outweighs all other factors. Depending on your situation it may well be the most important thing, but there's a lot of room for others to have well-architected systems that have different priorities.
You are right, but it is hard to see how a for-profit Docker Inc benefits from that. MySQL had a business model that was similarly challenging, which led to where it is today.
MySQL has a strong ecosystem model and it's a pain to switch databases. Young developers used to get a LAMP stack, Linux Apache MySql Php and that comes with MySQL. There are many products, one could say even languages, that require MySQL.
The database is the hardest component to switch and even if the company were to disappear today, it would take decades to migrate off.
Docker has none of this. It only happened a few years ago, it will be gone as quickly. It doesn't even have a strong brand name, people want kubernetes not docker, docker is merely a dependency that can be replaced.
MySQL always had a really good, free GUI, Postgres didn't. There is pgAdmin now, but a lot of people really don't like it (I've had lots of problems with it myself).
MySQL also had (relatively) straightforward replication features since way back, something that is much newer to Postgres.
For info, I used MySQL for over a decade, and am a relatively recent convert to Postgres - and I love it!
It's fair to say that in the hype cycle Docker (the technology) has passed its peak, but I don't think the conclusion that as a technology, it's finished, is warranted.
Containerization is still on the rise and Docker is part of that. The toolset is still a good, fairly easy to use, place for individual developers to use containers on the client-side whilst creating containerized applications.
For smaller deployments (without orchestration) a single Docker engine with things like compose, can still work well.
Obviously on the orchestration side, Kubernetes has won, although it too will face the inevitable trough of disillusionment when people realise that all technologies have downsides.
Personally, for simpler workloads, I think Docker swarm can still be a good answer, as it's a lot less complex than Kubernetes to set-up and maintain.
The idea that the Red Hat container stack (podman, CRI-O, et al.) necessarily means the end of Docker doesn't really follow at all to me.
If anything the increased use of containerd directly is a bigger threat to Docker's market share.
I think the main point to consider is, if Kubernetes is the technology for large deployments, who is left to buy enterprise solutions from Docker the company?
And without enterprise sales, what is going to fund docker development for the small and simple docker workloads?
Well Docker the company is a different game. They have gone quite heavily for the enterprise market and support Kubernetes as part of their Docker EE product.
Of course, whether that will give them enough success to justify their valuation, is another matter. Personally, I had thought they would get bought out by one of the large tech. players who is heavily investing in containerization (e.g. Microsoft) but that doesn't seem to have happened so far.
The Docker company may not survive, but Docker will.
Especially considering the work the Docker company is doing to support OCI. The dockerd internally uses containerd now, which supports both Docker images and OCI images.
I always tend to think that the reason Docker developers decided to split the Docker products codebase (CE/EE) into the Moby components 2 years ago was that they foresaw the inevitable death of Docker in the long-term.
>>> It's fair to say that in the hype cycle Docker (the technology) has passed its peak, but I don't think the conclusion that as a technology, it's finished, is warranted.
That's confusing the product and the technology.
The technology is containerization, not Docker. Containerization is on the rise and will continue.
Docker as a product AND a company, is doomed. They have no business model and no sales. All of Google/Amazon/RedHat/Linux are working on a substitute so they don't have to depend on it.
The hype is in Kubernetes now and in managed cloud solutions. Docker is an accidental dependency of Kubernetes, until it is finally gotten rid of.
That's a view for sure, and as a company I'm not sure what Docker's endgame is now.
However, I think it's premature to write off Docker the technology. There's a load of scripts/tooling/mindshare baked into the Docker model for executing containers, and I don't think that's going to switch to another model overnight.
As many people have commented on this thread, the killer app. for Docker the technology isn't running in production, it's running in development, where you're unlikely to want a full k8s cluster.
Docker got over $100M in VC funding; they can continue to operate without revenue for years.
I hate to break it to you, but the model already switched overnight.
The job market is the biggest indicator for sure. Any experience with Docker is mostly worthless at this point. No company will hire somebody with Docker experience alone, without Kubernetes.
Kubernetes has replaced most of what you mention with its own idioms and tools. All the way from the lips of the CEO to the engineers in an F50, everyone is talking Kubernetes, with not a mention of Docker.
I don't intend to deny your experience at all. Docker is everywhere right now; same for me.
There is another thread discussing that RHEL 8 is replacing the docker command with its own command line tool. Looks like it's happening sooner rather than later. There is a lot of work going on industry wide to replace Docker, can't deny that either.
I personally find it very ironic that Kubernetes only supports docker (more or less) while it's trying hard to never use the word docker and it has a whole concept of replaceable container engines.
That Redhat don't like Docker is not surprising :) Whether their tooling will supplant Docker is more debatable.
From what I've seen they don't have huge traction outside of Redhat for their stuff, so all the other major players in the space (Google, Microsoft, Amazon etc) are not shifting to use Redhat's stack.
So the challenge is, can they convert developers from the easy-to-use Docker to their newer, more complex setup? I'm not sure I'd bet on that. Anecdotally, it doesn't feel to me like OpenShift is winning the "Kubernetes distro" fight; if what I see in reviews is anything to go by, it's mostly kubeadm, kops, Rancher or GKE/AKS/EKS.
So Redhat replace bits with their tooling, Google do the same (kaniko, GVisor et al) Amazon continue to push Fargate and serverless, there's not a unified "replace docker" effort, more a splintering of tooling/services.
My personal view is that on Dev. desktops Docker still provides the best experience. That will continue to keep them relevant until someone else can replicate/exceed that experience.
Well, actually it's clear that Docker is bad at building images; that's why a lot of solutions are coming around to build them.
I mean, Google has over three different ways of building containers: kaniko, jib, and even a way in Bazel.
Also, on security: you can't run shared environments with Docker or plain container runtimes; that's why they invented gVisor. However, Docker will still live on as a runtime, and probably a lot of companies will still keep Docker as the runtime. I mean, containerd will probably be the default one in k8s sooner or later, but that will only happen because containerd was designed just for that: being an awesome container runtime designed to have a good API.
Docker basically lost because they focused on too much. Docker can do all the things, but way worse than all the special tooling.
What is wrong with Docker? This does not address any technical or other shortcoming and only seeks to replace one set of over engineered tools with another with the exact same problems. [1]
This is yet more of the ecosystem breathlessly pushing over-engineered tooling and "winners" without basic technical scrutiny, which leaves end users dealing with needless complexity and debt.
Containers can be useful as a lightweight, efficient alternative to VMs and those who want containers untouched by questionable ideas should try the LXC project on which all this was based.
Any additional layer on top of this, be it a non-standard OS environment or extra layers, should face technical scrutiny for end-user benefit, and most users will be surprised by the results.
This post links to podman. But the podman website is completely useless, because it does not tell me what it does differently/better than docker, only that
What is Podman? Simply put: `alias docker=podman`
So, for someone who occasionally uses Docker for running services or creating specific build environments (manylinux1): what are the benefits of Podman over Docker?
Podman doesn't have a daemon like Docker does. It also more tightly integrates with buildah, which the article doesn't expand on. Have a look at this (very brief) overview to get a bit better idea of their relationship: https://github.com/containers/buildah#buildah-and-podman-rel...
Podman also uses the same notion of pods, and it doesn't support docker-compose syntax/files, because RedHat strongly believes that Kubernetes has already won. Basically, podman/podlib allow you an easy migration path from your local computer to a k8s cluster, with the same images and same concepts. Have a look here: https://github.com/containers/buildah#buildah-and-podman-rel...
> Podman also uses the same notion of pods, and it doesn't support docker-compose syntax/files, because RedHat strongly believes that Kubernetes has already won.
Could you expand on that please? Almost everything I run locally (be it a self-hosted service or app devel) with docker is a docker-compose stack. It allows me to easily manage/monitor services via CLI or Portainer. How does Podman and other modern tools offer to solve this case, or is it proposed now to use K8s locally?
I got enthusiastic about Podman not having a daemon and running Podman containers as a non-root user[1].
CI/CD (that needs to build a container)? Use buildah.
And use skopeo for copying images around when needed
You could argue this makes it more difficult to run containers on the server without Kube, and you'd be right. Whether or not that is a bad call by Red Hat, I'll leave up to others.
Build containers to confirm that your build steps all work OK, so Podman + Buildah; then you might want to
... push them to a repository to collaborate with a colleague without setting up CI/CD, so Podman + Buildah + Skopeo.
Then you might want to execute the containers to test whether your code is running OK...
the point I was making didn't relate to running containers on servers where typically things like CI/CD will be part of the process, but to new developers and devops people getting started with containers.
Podman and Buildah have registry push/pull built in, so technically you don't need Skopeo except in CI. (Honestly I'm still a little unsure what Skopeo's ideal use case is.) Podman also has a `build` subcommand that takes a Dockerfile, so I'd argue if you're on a development workstation where you only use Dockerfiles to build containers, all you need is Podman. (Buildah supports more interesting build pipelines that can be driven without a Dockerfile.)
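In practice the daemonless workflow looks almost identical to the Docker one (a sketch; the registry name is made up):

```bash
# build from an ordinary Dockerfile and run it, rootless, no daemon involved
podman build -t myapp .
podman run --rm -p 8080:8080 myapp

# Buildah covers builds too (with or without a Dockerfile)
buildah bud -t myapp .

# Skopeo moves images between local storage and registries without a daemon
skopeo copy containers-storage:localhost/myapp \
    docker://registry.example.com/myapp:latest
```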
For Debian/Ubuntu users (of whom there are a fair number), whilst Podman's install process looks like that, I don't see it taking over from Docker any time soon.
Not to mention Windows/OSX users.
Ease of use should not be overlooked when it comes to developer tooling. Podman can be the most technologically advanced solution, but if it's a pain for developers to get going with it, it's not going to replace Docker any time soon...
I agree that developer experience is paramount. I've actively made choices to choose tooling with better developer experience (Rust and Elixir over C++ and Erlang, for instance), and Podman will need to have Debian/Ubuntu packages eventually. If you're on Fedora, openSUSE, or RHEL, though, Podman is a yum/dnf/zypper install away and works quite well (better than Docker in my experience wrt SELinux issues).
It's not clear to me that the author has a solid understanding of how Docker makes its money. You don't make money just by giving away software, and you also don't make money by providing support for software no one uses. Docker has been pretty smart to build a well-known brand around supporting a specific set of technologies -- some theirs, and some not. When it became clear they were unable to own every part of the container ecosystem, they made smart decisions around supporting k8s and engaging with the open container standard.
Docker's got lots of runway providing enterprise support contracts, so I'm not worried about them.
I think the author's also not noticing that orchestration was really a bit of a stumbling block that would eventually be removed. Sure, you've still got to use k8s in self-hosted, GCP, and Azure, but those of us on AWS have the option to use ECS with Fargate and have many of the core features of something like Kubernetes fully managed.
So anyway, this post is a bit dramatic, and maybe has a few blinders on.
Agreed. I came for a rationale but saw none. There seems to be a lot of hate for Docker I don't understand. Is it community management? Is it some esoteric tech concern? Can someone more knowledgeable pipe in?
The HN crowd likes to gaze at FANG-like corps and treat them like gods. FANGs use Kubernetes? Well then, for sure Docker is passé :) It doesn't matter that for some use cases swift and straightforward solutions like Docker and its ecosystem are better.
For me, Docker's value is being able to easily run some Linux services with Windows. They have a lot of ready-to-use recipes on their website, so it's really easy to run e.g. WordPress with MySQL. I would spend at least an hour manually setting up a VM with Ubuntu and installing that stuff there. I'm also using it to run PostgreSQL for development. While technically I could do that straight from Windows, I feel safer concentrating all that stuff inside a disposable VM, and it's easy to share onboarding scripts with colleagues. I don't see how RHEL's tools would help me with that. Actually I could replace all my Docker usage with a few shell scripts, but they have to be written, and with Docker they are already written, many by experienced software maintainers. Docker will die when popular software discontinues its Docker images.
For me, Docker is a really convenient way to self-host multiple services on one host. In the past I did it without Docker, and almost every service required you to add a repo; after you do this a few times you almost always end up breaking your system and having to start again. Docker keeps things clean and easy to manage.
No problems at all. There was one weird problem when I tried to output a binary file to stdout inside Docker and redirect it inside cmd, which resulted in garbage. But that wasn't appropriate usage, I guess.
Are there any container filesystems that support multiple inheritance and create diff layers? It would be really nice if I could build a few different things independently and then merge the final images together. It would also be nice if a new layer could include only files that have actually changed, ignoring duplicate files (even if the file was touched or the timestamp has changed).
Those are my biggest pain points with Docker at the moment. I have a complex build script that uses multi-stage builds and rsync to achieve this [1], but it's still a bit slow and inefficient. Would be nice if something supported this out of the box.
I've worked on a lot of projects where people just reinstall (and recompile) their entire list of dependencies (Ruby gems or NPM packages), and you have to jump through hoops to set up a caching layer, or maybe install them into a volume as a run-time step, instead of at build time. There should be a much better native solution for this, instead of needing to invent your own thing or read random blog posts.
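For what it's worth, the usual workaround today is ordering the Dockerfile so the dependency manifest is copied and installed before the rest of the source, so that layer only rebuilds when the manifest changes. A rough sketch for a Ruby app (file names and base image are illustrative):

    cat > Dockerfile <<'EOF'
    FROM ruby:2.6-alpine
    WORKDIR /app
    # copy only the dependency manifests first, so this layer stays cached
    # until the Gemfile actually changes
    COPY Gemfile Gemfile.lock ./
    RUN bundle install --deployment
    # code edits below this line no longer trigger a full bundle install
    COPY . .
    EOF
    docker build -t myapp .

It's still layer-ordering gymnastics rather than real content-addressed dedup, which is exactly the pain point you're describing.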
I would say check out buildkit, which is the tech behind "docker build"'s new builder.
I don't know if the Dockerfile format is really suitable for this, but you can now build your own format and Docker can just build it.
Basically, BuildKit breaks things down into a frontend format (like Dockerfile) and a frontend parser, which gets specified as an image at the top of your file (`#syntax=<some image>`). The parser converts the frontend format into an intermediate language (called LLB), and BuildKit takes the LLB and passes it to a backend worker.
This all happens behind the scenes with `DOCKER_BUILDKIT=1 docker build -t myImage .`
Docker actually ships new Dockerfile features that aren't tied to a docker version this way.
Actually there are a number of new Dockerfile features that might get you what you need; even if the format isn't all that great, at least it's relatively natural to reason about.
Things like cache mounts, secrets, mounting (not copying) images into a build stage's "RUN" directive, lots of great stuff.
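As a hedged sketch of what that looks like in practice (the experimental syntax-image tag and the cache target have shifted between releases, so treat the exact strings as illustrative):

    cat > Dockerfile <<'EOF'
    # syntax=docker/dockerfile:experimental
    FROM node:10-alpine
    WORKDIR /app
    COPY package.json package-lock.json ./
    # cache mount: the npm cache persists across builds without ending up in any layer
    RUN --mount=type=cache,target=/root/.npm npm ci
    COPY . .
    EOF
    DOCKER_BUILDKIT=1 docker build -t myimage .

Build secrets work the same way: `RUN --mount=type=secret,...` in the Dockerfile plus `--secret id=...,src=...` on the build command line, so the secret is available during that RUN only and never baked into a layer.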
Nix[0] does this, sort of. It doesn't merge everything into one tree at runtime, but instead builds every package to a unique path, which gets embedded into downstream dependees. It assembles the dependency tree into a DAG, and uses this to automatically parallelize builds (where safe). Builds automatically run in ephemeral containers, to ensure that all dependencies are accounted for. There are also importers for many languages, and an exporter for Docker images (where each Nix package becomes one Docker layer, up until a limit that you specify).
That sounds somewhat like my current project [0]. Currently it's in the prototyping phase, so it won't be released for a while, but it's coming along nicely.
Your use case is one I've been planning to support all along, although I think about it a bit differently - in terms of composition of (partial) images rather than multiple inheritance.
This is a similar discussion to vinyl vs cassette vs CD vs streaming. All will keep existing in some form or another. Admittedly, some will die out completely (DCC, MiniDisc), but I'm pretty skeptical that this is going to be Docker.
Even more, the argument that Docker is dead in the water because RHEL 8 no longer has a yum repo for it is a bit far-fetched. According to Wikipedia, RHEL is a fairly small % of the server market compared to Ubuntu, Debian and Windows for that matter. https://en.m.wikipedia.org/wiki/Usage_share_of_operating_sys...
RHEL also means Fedora and CentOS; the latter is extremely popular for large-scale deployments. That article covers web servers only; I think the real market share of RHEL and CentOS is more like 30-40%.
Podman just makes more sense because it doesn't require a big fat daemon. You launch containers like any other service: with systemd. And you can run podman without root permissions as well, which is a huge win for security.
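Something like this, assuming a Podman new enough to have the `generate systemd` subcommand (container and unit names are made up):

    # as a regular, unprivileged user - no daemon anywhere
    podman run -d --name webapp -p 8080:80 nginx:alpine

    # turn it into a user-level systemd unit and manage it like any other service
    mkdir -p ~/.config/systemd/user
    podman generate systemd --name webapp > ~/.config/systemd/user/webapp.service
    systemctl --user daemon-reload
    systemctl --user start webapp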
Well, having worked quite a bit with RHEL and Centos in the past, I found their day to day usage quite different. Maybe that has changed.
More important, I only saw RHEL at traditional Fortune 500 companies. Not sure how their market in FANG type companies is. Probably negligible. F500 is still big of course.
FANG are making their own OS at this point so they're not really relevant.
There are two distribution families left in Linux, Red Hat derivatives and Debian derivatives. In terms of install base, it's maybe 1/3 and 2/3. In terms of money, expect it the other way around, because it's the F500 who pay the most for software and they are on Red Hat.
Red Hat is actively trying to kill Docker (along with Google and Amazon). Red Hat removing the "docker" CLI and replacing it with its own tools is a major step toward that. Docker will be de facto dead in the enterprise as soon as it stops being supported by Red Hat.
I'll pass on the joke of calling Docker either stable or feature-complete.
There is nothing preventing Red Hat from shipping an alias to their own tools by default, just like java => openjdk.
It's not the responsibility of RHEL to maintain or support third-party software. In case you didn't know, Docker stopped shipping with Debian years ago.
Nothing is ever feature complete or stable, I know. All software is shit, right?
The day RHEL customers stop using the docker runtime is when RHEL will stop supporting it - until then, they'll support it. Case in point: java -> openjdk.
Anyhow, this article is just clickbait, and it's been done at least twice a year since Docker's inception. I'm disappointed that this community finds shitposts like this more compelling than the NSA open-sourcing a decompiler, but I digress.
Nope. Docker's value isn't just its software. It's the support built around it, tutorials, familiarity and common usage, Dockerfiles, huge Docker Hub, existing setups relying on it and so on. Articles like this tend to overlook the value of entrenched technology that works well enough.
Based on this, I've spent most of today trying to make minikube work on macOS. But as I'm using DNSCrypt-Proxy, I had major issues making it work without manual steps.
So far, Docker for Mac is the solution that just works.
If anyone has experience with this, I'd gladly like to hear how you made it work flawlessly.
The article mentions that Kubernetes using containerd and the OCI is the future, but it fails to mention that containerd is developed by Docker, which supports OCI, which is itself largely supported by the Docker company.
As of the latest versions of Docker, dockerd now uses containerd under the hood.
I'm not sure how the Docker CLI will exactly die. The post seems to focus on the CLI only, and even calls out that "the viability of the company Docker is outside the scope of the post", but it fails to mention my previous points.
Containerd is a project within the Cloud Native Computing Foundation, which in turn is part of the Linux Foundation. Docker haven't been directly involved since 2015, and even then, it's arguable that they've never been (you may be thinking of runc though, which was donated in 2015).
It's true that Docker uses containerd under the hood, but that's actually part of what the author is arguing. Docker as a technology is a wrapper platform around core industry technologies that they neither own nor control. That means they have to compete as a tooling company, and they have already lost ground there. The more things like Kubernetes and podman join the market, the less required Docker becomes, which means they're going to be more and more at risk of failing.
> It's true that Docker uses containerd under the hood, but that's actually part of what the author is arguing. Docker as a technology is a wrapper platform around core industry technologies that they neither own nor control.
I cede your point, but it's irrelevant and isn't what the author is implying (even directly).
From the article: "I do not think there is any reason for us to user docker any more and therefore Docker as a technology and as a company will slowly vanish."
The end users do not care how cgroups are set up or mount points are built. The guts may be standardized, but the Docker toolchain (Dockerfile, docker-compose, docker run) will continue to exist. The "runtime" is irrelevant and there just isn't a competitor in the "toolchain" arena.
Docker Swarm is the only thing that will vanish.
> The more things like Kubernetes and podman join the market, the less required Docker becomes, which means they're going to be more and more at risk of failing.
Kubernetes is an entirely different use-case. Nobody is arguing Docker Swarm will beat it.
You could say the Docker CLI isn't required with the advent of other tools, but those are incredibly big shoes to fill. Think of all that entails (Dockerfile, docker-compose, the CLI, cross-platform(-ish) support for Windows/OSX).
Also, competition leads to better tooling. Did anyone ever say "Unix isn't required any more, because we have Linux"?
The tooling of the Docker CLI is in a very good spot as-is. The guts are being opened, which I think would relieve the pressure some may feel to jump from Docker.
containerd is a project started entirely by Docker and contributed to the CNCF by Docker, and Docker is still a primary contributor and maintainer on the project.
Honest question from someone who has only used Docker on several occasions to deploy 3rd-party software but wants to invest in containerization. Is it worth going deep into Docker right now? Is the knowledge transferable to Kubernetes and whatever else is going to replace it? I want easy deployment that properly deploys containers and their dependencies, but many of my scenarios don't need redundancy, scalability, etc. My use cases are in-house, so managed solutions won't be considered. On the other hand, I don't want much worse operational complexity than with Docker (and that's what I've heard is the biggest drawback of running local Kubernetes). Also, many open source solutions provide Dockerfiles without caring about other containerization solutions. Please help; I'd rather avoid investing in an already obsolete solution if possible.
So many times have we thought that a technology would prevail or disappear based on its validity or adequacy, and so many times have we seen the most popular one survive.
JavaScript comes to mind. If it’s widespread and easy to learn, it might not matter how much technically better other alternatives might be.
Well, I can see Kubernetes being the dominant container orchestration, but what about the use cases for Docker that simply don't need any orchestration? I mean those containers that are run on a single machine, perhaps manually, for purposes other than deployment?
I've always disliked articles that use this tone. It's meant to be informal, but comes off as lazy and spoken in muted tones as if the author were in a pub; Docker sitting in a booth a few meters away blithely sipping on a pint of bitter. The substance isn't much more enlightening than typical bar banter either.
Docker played a pivotal role in creating our modern understanding of containers and we should be thankful for that. I don't understand the value this article provided beyond pointing out a few container runtimes I hadn't heard of.
I don't get why more people aren't using Swarm. It's simple and it gets the job done for many people who don't need a full Kubernetes setup. Also, for instance, it encrypts secrets, as opposed to Kubernetes, which only Base64-encodes them.
I do understand the market dynamic but still there are niches for other simpler products.
The death of Docker as a technology is laughable. Every cloud provider is racing to add container orchestration features, and they all use Docker under the hood [1, 2, 3, 4]. Cloud providers are dumping hundreds of millions into this ecosystem through training, credits, new features, etc. I have not seen anyone step off the gas. They are all seeing massive growth in container adoption. My guess is this is only the start. Go look at any DevOps/SRE job posting and you will see Docker, Containers, Kubernetes. This post is wrong when it comes to the technology going away anytime soon.
Container orchestration is light years better than anything that we were doing on the operations side before. This is why I find it extremely hard to believe we are going back. We would need something much better to jump to and that has not been invented yet. So, Orchestration/Docker/Containers are here to stay for the foreseeable future.
The rkt image format tried early on to subvert the Docker Image format and was unsuccessful. Today, there is much more depending on the existing image standard. You would totally fragment the existing integrations if you attempted to change. I am not saying that people will not try. But, I just cannot see it happening anytime soon. You have all these Clouds betting on it, you have millions of Dockerfiles out there now, and tons of people trained on it. All these CI/CD platforms supporting it. You would need some massive reason to change. Why would they? You might see people under the hood take the Docker Image and wrap it somehow, like gvisor [5], but they will still likely accept the Docker Image.
Go look at these charts from Datadog about their customers Docker adoption [6]. Every single chart is hockey stick growth. This is a sampling of 10,000 companies running 700 million containers in real-world use. I am just offering a counter to the point that the image format is coming to its death. I do not see that.
I think the article agrees with you that Kubernetes is the de facto orchestration tool, and that Amazon, Google and Microsoft all offer k8s as a service.
But the point of the article is that the part that Docker provides in that stack is not secure. There will be and already are alternative container engines underneath k8s.
One day a simpler, quicker alternative will gain enough traction with Kubernetes (and others) and Docker will wither away. As a company.
That happens all the time in tech. Websphere -> Tomcat -> Jetty, Apache httpd -> Nginx, VMs -> containers, etc.
And it will also happen to Kubernetes one day. Yes, the helmsman will also be replaced by a nimbler modern replacement.
The article goes on to elaborate on that statement, claiming that Docker as a container runtime is going to get elbowed out of the market by faster, safer, more feature-rich container runtimes.
It's funny that you quote that, because your OP genuinely felt like that was as far into the article as you got. He is in no way saying that containers are dying; quite the opposite. But he thinks that Docker itself is going to lose relevance in the container ecosystem.
> One (temp) image per line in the Dockerfile might have some practical uses, but most of the time it doesn't.
I completely disagree. Cached builds are one reason, and viewing the intermediate layers is just a huge benefit. Why don't you like it? You could always build with "--no-cache", which all CI should be using anyway.
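i.e. roughly (image name is just a placeholder):

    # local iteration: reuse every cached layer that hasn't changed
    docker build -t myapp .

    # CI: rebuild every step from scratch so nothing stale sneaks in
    docker build --no-cache -t myapp .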
> I can't know what 'docker rm' does without reading the docs. Oh and there is 'docker rmi' is that for images?
I think you're using the old CLI. A couple of years ago, they added new syntax and "encouraged" its use by default [1] (though IME people still use the original).
This is definitely more discoverable and logical; the syntax would be `docker image rm` [2] or `docker container rm` [3], etc
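For example, the object/verb form looks like this (names are placeholders):

    docker container ls -a          # instead of: docker ps -a
    docker container rm myservice   # instead of: docker rm myservice
    docker image ls                 # instead of: docker images
    docker image rm myimage:old     # instead of: docker rmi myimage:old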
This is sad, because what made Docker useful was a full feature set.
We could already build chroot environments. We could already keep remote images. We could already set up app-specific networks. Having the tools wasn't what we needed: it was the confluence of all the features.
If the alternative to Docker becomes learning 50 new tools, we'll have failed as an industry, and shouldn't call ourselves engineers. You don't replace a car with a wheel barrow-lawn mower-hand held radio-portable fan. This obsession with churning out a new tool every year has to stop.
I basically agree with the technological argument made in the post. Orchestration plays the role of an OS, where resources are defined, managed and protected. Containers are something analogous to device drivers, presenting the underlying resource as narrowly and efficiently as possible.
But that's the life I expect, not the one I live today. In the present day I'm much more reliant on Docker than on Kubernetes, and I cannot live without being able to freely mix and match Go, Python, or Java, really old and really new, in any host configuration I want.
All 3 big cloud providers now have a managed Kubernetes solution that they offer to their customers (and as a result will eventually sunset their own home-made solutions that they built over the years - because there can be only one).
I haven't heard anything about this from AWS. Certainly they've introduced EKS, but I haven't heard anything about setting their ECS orchestration out to pasture. Does anybody have information about this?
Yea there are alternative Docker runtimes and Kubernetes won the orchestration war but Docker images are here to stay. As far as I know there aren’t better alternatives for Docker images and even Kubernetes uses Docker images. Docker as company currently has a Red Hat-like business model and that’s certainly hard to sustain so agree on that front.
I'm using Swarm on a small cluster (3-5 machines). Is there any reliable Kubernetes alternative for that? Portainer as a Swarm UI was buggy for me - a usable and stable UI would be nice, but I'm not sure if I should do the full Kubernetes dance :/
Easy to debug and lightweight would also be great. I haven't really found anything good.
It's not like Kubernetes has a better UI. You could take a look at OpenShift. It's a Kubernetes-based open source distro by Red Hat and it includes a quite nice UI, both for users and admins.
I'd be really interested to know more about your setup and processes if you're running Swarm in production? For example, how do you handle rolling updates?
We are not a SaaS company; we just run some simple Node.js-based services on Swarm, and my goal is to use Swarm as a testing hub for Docker - mostly the CI should push something there for the devs. So uptime doesn't matter. However, Swarm crashed badly on me a few times, and at the moment everything important is well configured outside of it on your typical LTS Linux distribution, so I'm hesitant to move our basic services to it. Basically it has to work 9-5; I can just run apt-get update && reboot earlier or later. I'm really not sure if I should move all the stuff into Swarm / k8s, also because the bus factor would be one and there is some experience with the old Linux way.
For everything that resembles a microservice/service -> Docker, everything else (ldap, mail) -> oldschool. So far so good.
I'm looking at k3s or minikube with rancher as ui - maybe that's better, or we just ignore the hype and run docker-compose.yml files by hand but that's also pretty shitty.
Providing base technology doesn't seem to lead to financial success. You have to provide more concrete solutions for companies and people instead of the abstract infrastructure. I wonder if protection of the base technology through software patents could have averted this looming financial disaster.
I do like the fact that docker now works for non-Linux cases, like Docker on Windows with Windows containers. Would be nice if they would also support MacOS containers, FreeBSD etc. That would certainly entrench themselves as a "universal" container technology.
Who is providing a container registry that can replace the offering from Docker (the company)? That seems to be the big contribution of Docker the company these days: a huge library of container images.
Still using LXC where it makes sense, and I've never felt the call of Docker... or Kubernetes, or any of this developer-popularized opaque tooling layered over sophisticated meddling with my systems.
Docker should just merge with Npmjs. Then they might get big enough to get bought up by Microsoft, Oracle or IBM (which I think is their end game anyway).
I think I've seen one person use docker once, back in about 2013, to create and freeze a very particular Latex setup. Everybody sees a tiny slice of the world, but to each individual, that slice looks like all of it.
The death of Docker could potentially happen, but not for the reasons the author lays out. The last argument in the article is interesting. The author implies that because RHEL dropped Docker, it's game over. But if we analyze this, it isn't because Docker lost a technology battle; it's because RHEL is protecting their ecosystem. The article is correct in making the case that there is choice. However, the author doesn't put two and two together with regard to Red Hat. Just because you can alias podman to docker doesn't really mean anything. Someone seems to have dipped into the Docker-is-a-bad-word non-reality pushed by author Dan Walsh of Red Hat. Also keep in mind containerd was contributed by Docker and is the graduated runtime in CNCF. You definitely don't need Docker to run containers. But it's easy to use Docker to run and build. And Docker is still a one-line installation on all Linux platforms (https://get.docker.com)
Next is Docker Enterprise. Can they sell it? This is where Docker is making mistake after mistake IMO. Docker does have a leg up on Windows. But does it matter? Today Swarm is your only orchestration option there, and while I understand k8s support is coming, it remains to be seen if GMSA is included in the first release (doubtful). If you're a Windows developer deep in the Microsoft ecosystem, my guess is you won't be able to productively use Windows Server and k8s this year, given the timelines.
Regardless, Docker has done a poor job capitalizing on Windows. Maybe it's that people just don't need it as much as Docker would have you believe? Or maybe it's too jarring of a workflow change. Either way, it isn't working.
Next is the products you get for the money: Docker Universal Control Plane (UCP), Docker Trusted Registry (DTR) and Docker Enterprise Engine (EE).
They're all OK. But the sell is, again, tough. UCP is basically a dashboard for centralizing RBAC for Swarm and UCP in the enterprise, and don't get me wrong - the enterprise needs this. But UCP feels lacking. The UI is distracting and has a lot of oddities. Managing it is a decent amount of work. It's just OK. Also Docker UCP still doesn't support k8s PaaS while competitors do. Remember that Docker hired Kal De away from VMW as CTO. In his time at Docker not much has changed externally, but Docker needs to move faster. Much faster.
DTR is interesting because everyone who's doing containers in the enterprise needs an image repository with, again, those enterprise requirements. And DTR delivers this, but Docker doesn't want to sell you DTR only. You can't buy DTR by itself like its competitors. Docker wants you to use only their bits, and it's dumb because Docker claims choice as a core pillar, then tries to lock you into UCP+DTR+EE. Oh I see, Steve... choice as in what DOCKER defines for me as choice. Choice of bare metal or hypervisor, but not how I manage containers and orchestration? Back to DTR: it's decent, but again there are other choices and a lot of them are better standalone offerings.
Finally EE. There are about three reasons you really need EE. The first is enterprise support and long term releases. The second is runtime enforcement of signed images at the engine level. The third is oddities like FIPS support. But that's it. And, again, Docker doesn't want you to buy just Engine. They want to sell you the 'platform'.
At the end of the day Docker has good products. They're not exceptional when taken in the context of other enterprise software. They're just OK. Docker doesn't realize this yet from what I've gathered. They have recently changed pricing to very VMW-esque CPU-count pricing, and Docker doesn't have the market capital to make that leap, unfortunately. At the end of the day my opinion is that Docker is marred by Docker, not by the competitive landscape. Yes, that contributes to Docker overall, but Docker doesn't seem to make sure they have the best product above everything else. Docker's CEO, Steve Singh, is the wrong guy for the job. He is constantly stating they'll be cash-flow positive by the end of this fiscal year, yet doesn't seem to care about how good or bad the product actually is or why people wouldn't buy Docker over something else.
So yeah... Docker has done great things. Those great things are all of the contributions Docker has made to the community and continues to do so. But Docker on the enterprise side feels weak, lackluster and very confused. Docker is marred by Docker management and executive ranks from within. I'll give them 2 years make or break and I think, as it stands today it's a coin flip. The acquisition of RedHat by IBM was a good thing for Docker, but may be the nail in the coffin for RHEL long term. If Docker does flounder any longer look for them to be acquired by Microsoft. And then... Docker will be set to sail off into the ether.
I have an honest question- is docker even useful for most projects or is it just a preparation for solving imaginary scaling issues that most won't even reach?
I ask because I'm wondering if I should care about docker in the beginning of my project which will have very few concurrent users/requests and would run fine on a single machine.
> is docker even useful for most projects or is it just a preparation for solving imaginary scaling issues that most won't even reach?
I use docker for a small server at work, where I know there will only ever be one container per image.
Where docker is incredibly useful is with non-free software which works on Linux. Usually, they rely on a specific version of Debian or Ubuntu. Yes, this can be achieved with VMs, but that requires running 5 or 6 kernels. So for us it's a huge resource win over VMs.
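e.g. something along these lines, with the tag pinned to whatever release the vendor certified (paths and image tag are illustrative):

    # run the vendor tool inside the exact distro release it expects,
    # sharing the host kernel instead of booting a whole VM per distro
    docker run --rm -it -v "$PWD":/work -w /work ubuntu:16.04 bash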
For me, Docker sits between chroot and full-blown VMs. If you drop privileges you actually do have some security advantages over chroots and separate processes – although I don't think Docker emphasises this enough. The way Docker is used and the way it is documented seem to have a disconnect.
> I ask because I'm wondering if I should care about docker in the beginning of my project which will have very few concurrent users/requests and would run fine on a single machine.
If it doesn't solve any problems you have I personally wouldn't bother – you can always "dockerize" after, which should be a relatively straightforward process. At this stage it's just a distraction. Good luck.
Docker doesn't have anything to do with scaling. All it does is bundle the shared libs, executables, etc. of your app into a container image. If the world could agree on a common Linux userland (same builds, features and names/locations of common userland libs and language runtimes), then Docker wouldn't exist. So you could say Docker just solves a problem of our own making: that of too many Linux distros. Of course, now that those libs are contained in the image, and often invisibly so, the problem of updating those dependencies for e.g. security updates is unsolved. That is, the entire reason why these dependencies are provided as shared libs in the first place - removing vulnerabilities without having to rebuild entire apps - is lost.
The other thing that Docker does is run in a namespaced environment such that apps don't have access to the local file systems on your host by default. This is also a problem that's been solved since the dawn of Unix by e.g. using file permissions (though admittedly not every scenario is served well). Dockerized apps don't have access to the host's access control infrastructure (/etc/passwd, PAM, etc.) though, so they need to invent their own ad-hoc auth.
The only problem that Docker is really solving is that you can pack more apps onto a single host compared to full-blown VMs. Basically, Docker is infrastructure for cloud providers to sell you lots of pods cheaply (for them, that is). Since many images are based off Debian (pulling in deps via apt on first start), it could also be argued that Docker is acting as a GPL circumvention device of sorts.
+1 tannhaeuser. I live in a more heterogeneous environment than what Docker can provide, and it seems to me that the ground Docker covers is, like you said, application packaging. This is frustrating since in a lot of cases "applications" are only being provided as Docker images. What the heck can I do with a Docker image on something that doesn't provide Docker? How do I have assurance that when teams pull Docker images in from the wild, they're actively maintained and not full of 0-day vulnerabilities?
I've switched to pkgsrc and highly recommend it for solarish (and Solaris), *BSD, CentOS, Debian, MinGW+Windows, OSX and whatever else one would feel like building binaries and dependencies for. The key to me is separating the operating system package manager and the application dependencies, such that when operations runs their mandatory apt/pkg/yum/dnf/whatever updates it doesn't break application dependencies. And on the flip side, when applications want to screw with things that aren't in apt/yum/etc, those custom needs can be met. This approach also doesn't preclude using the respective OS container mechanism (zones/containerd/vmware/hyperv/chroot/etc). We package custom internal packages in our own internal pkgsrc repo, alongside the main repository that provides north of 10,000 packages.
So just to get on the same page as you, and not to be a smartass: you're saying Docker is like an AppImage without access to host files (I didn't get the last part about GPL circumvention) whose purpose is to provide non-emulated VMs?
If all I use is Debian on my server with a file that always installs the required packages and I don't need isolation between services, should I care about docker as an "end-developer"?
Difficult to tell. I'd probably stay within a plain dev environment as long as possible. Just keep in mind you can't rely on host auth within Docker if you ever wanted to use it. Besides, it's never wrong to learn something new, and Docker isn't difficult to pick up.
Oftentimes you want to run a newer version of a package, but it has new dependency requirements that conflict with other packages. Most solutions to this involve tweaking the package or using a chroot. A container lets you avoid this problem: each container has its own rootfs, can be a different OS, and can run different versions of software however you want without worrying about conflicts.
It's somewhat similar to static linking, but at a higher level.
It depends on what you deploy to. If it's AWS or Azure web-app type stuff, then maybe not, but if it's your own infrastructure in any way, then fuck yes.
Ten years ago we built things in C#, used MSSQL and deployed to IIS. All Microsoft tech, all pretty straightforward, except it wasn't. We never kept track of the "it works in dev, but it explodes in prod and I don't know why" hours, but I wish we had. Because it's in the thousands, and those thousands of hours are exactly why we use Docker.
We also use docker because it lets us build things that our IT crew isn’t certified in running infrastructure for (and the lovely security issues that brings to the table) but we mainly do it because it works.
Docker might not have a monopoly on that, but they have enough of a brand that the word “docker” is to containers what “google” is to search. At least in my circles.
I use Docker for development stuff too. I (and team members) have spent countless hours battling with setting up things like RabbitMQ, and I only wish I'd done it sooner - being able to instantly spin up a working, consistent development environment is amazing.
I know you can do this with VMs too, with tools like Vagrant, but containers start in a couple of seconds - great for integration tests.
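For example, a throwaway RabbitMQ for local development is roughly one command (ports are the image's defaults, container name is made up):

    # broker on 5672, management UI on http://localhost:15672 (default guest/guest login)
    docker run -d --name dev-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management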
> We also use docker because it lets us build things that our IT crew isn’t certified in running infrastructure for (and the lovely security issues that brings to the table) but we mainly do it because it works.
It is amazing how many technologies get traction simply because "It lets us bypass IT."
In our case it's the compromise between developers and an operations department which has 5 technicians to support the infrastructure of a municipality with 7000 employees and around 300 IT systems.
To manage this, our IT had to make certain infrastructure decisions and build their competency around those. This clashes with a lot of modern development, but we manage with Docker and an increasing Azure presence. It's not optimal, but sometimes it's just necessary. Don't get me wrong, we're trying to improve and build DevOps that isn't just handing off a container, but it's a challenge in sectors where digitisation and IT aren't priorities despite being an inherent part of any business process in 2019.
There is a real danger in there of course, but we have strategic choices for our development platforms as well. They just need to move a little faster than IT.
What are some docker alternatives? And how does Docker on windows work? I know it uses Hyper-V underneath but does MS include a base linux kernel image?
Docker on Windows is interesting. For windows containers, it can do both process based isolation (similar to Docker on Linux) and Hyper-V based isolation, which uses a very cut down VM as a base. Compared to Linux containers the base images are large, but there are only a few commonly used base images, so once you've got them it's not too horrible due to the overlay filesystem in use.
For Linux containers there is LCoW, which uses a Linuxkit VM to host the containers.
There is also some chance that in the future Linux containers will run natively on Windows without a VM, via WSL. This is very hacky at the moment but there have been reports of people being able to run Docker engine in WSL with no VM.
I always thought WSL was only meant to be a dev-tool. Do you have any links where there is a mention of a plan for them to be prod? (I know that's not what you're mentioning but wondering if that's how you guessed...)
Whether MS ever use that tech. for production linux containers remains to be seen. It sounds kind of cool and might be good enough for dev/test but I imagine sorting out all the wrinkles to make it prod. ready could be tricky...
Docker is immensely useful when starting projects that use web services, because it lets a developer download and run an instance of any service on demand with a single CLI command, and lets any team member do the same with your work.
Then, with docker-compose, you are able to instantiate custom deployments of your projects, including DBMSs and gateways and message buses and IAM services, with a single command (docker-compose up).
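A minimal sketch of that kind of stack (service names, images and ports are placeholders):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      api:
        build: .
        ports:
          - "8000:8000"
        depends_on:
          - db
      db:
        image: postgres:11-alpine
        environment:
          POSTGRES_PASSWORD: devonly
    EOF
    docker-compose up -d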
So yes, it's very useful.
However, the point of the article is that Docker is no longer the only option, and although right now it's the default go-to container tool, there is nothing but inertia keeping the world using Docker.
I don't even think Docker is useful for production, but it's indispensable as a development tool since I don't want to sully my Linux install with a bunch of databases and dependencies I'll inevitably forget about.
It makes it trivial to package, build, and configure your host machine as a VM exactly how it is supposed to be set up. Drastically reduces the amount of "well, this works great locally" BS.
Then it's trivially fast to pull and run the image on any machine with Docker or K8s installed.
Docker is not a tech for scaling, it's a tech for repeatable environments (prod/dev). It's very useful for this. Container orchestration (k8s) is for scaling. Docker is also very useful when you have more team members with different computers.
I've found it really useful on embedded systems (e.g. Raspberry Pi). It takes a long time to install anything complicated, and you occasionally need to make kludges. Having a Dockerfile means (a) I can remember what I did to make everything work, and in what order (b) using a hub, I can easily duplicate the environment to another Pi without waiting overnight for all the applications to build.
Yes, you can clone the SD card, but I think it's cleaner to use a version-controlled Dockerfile. Otherwise you always need some master SD card to clone from (and keep track of a multi-GB image file), and you have to faff with resizing images if the new card is smaller.
A fresh system install is then: flash Raspbian, update system, setup some init scripts, install docker, pull the image and clone the latest version of the code from github.
This is also the approach embraced by Balena, who (conveniently) provide Docker base images for a bunch of common embedded systems.
Another reasonably big user-base is machine learning.
It's for a drone-based system. Python, OpenCV and ROS are the main parts, plus some machine vision camera SDKs. I've also put in some optimised machine learning libraries which are a bit finicky to set up. ROS is an absolute mule to get right and I like having it in a closed-off place so it can't mess around with the rest of the system. I keep the actual ROS workspace in a shared volume so it persists if I fiddle with the Dockerfile.
None of it really requires docker, but it's nice to have the whole environment encapsulated, and having a record of what I had to do to get some of these things to install is invaluable.
I have a shell script which launches the container (I just run a new one) every time the Pi boots.
Think about this: Wouldn't it be nice if installing apps on your desktop was as simple as just a folder? And everything lived in that folder instead of sprawling out into the OS everywhere? Copy that folder to run more, or replace it to update, or delete it to remove everything.
That's what Docker does. It lets you wrap everything into a single isolated package (a container) that can run whatever you want without affecting anything else on the system, and then cleans up perfectly. You can connect it to networks and disks if you need to. There were APIs for most of these things in Linux already, but Docker brought them all together into a simple interface.
The other big innovation is the Docker registry, which makes images easily available over HTTP. No more complicated downloads or package systems; instead you can just point to a simple host/image:label address and download whatever you need. That simplicity and flexibility is what made it take off.
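In practice that's just (the private registry host is a placeholder):

    # pull by address - no distro package manager involved
    docker pull nginx:1.17

    # retag and push the same image to your own registry
    docker tag nginx:1.17 registry.example.com/infra/nginx:1.17
    docker push registry.example.com/infra/nginx:1.17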
As others are saying, Docker is a tool for repeatable environments.
I have an application with a long, tedious setup process involving dozens of apt dependencies that I don’t maintain. In the past, I’ve had issues with things not updating properly, inconsistencies between versions, spontaneous breakages... Using Docker, I built an image that contains all the dependencies that are unrelated to my code or configuration, then another image relies on this and contains all of my stuff. My deployment process only needs to be concerned with updating this second image and all of the frightening dependencies are guaranteed to be consistent and reliable. This second image is deployed to each box. All of the production systems are identical and if I need a new production box, I can have it ready to go in minutes and be confident that it will work and behave reliably.
I'm not sure if mine is a good way of doing it but it's working for me. I took the relevant portion of my own badly written shell script, moved it into its own Dockerfile that has its own CI project and its own private repo on Quay, and I rebuild/republish as needed. The remainder of my original shell script went into the Dockerfile for the project that holds my code. The first line of this Dockerfile just starts with my base image.
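Roughly the shape of it, with made-up names (the base image holds the slow-moving apt dependencies, the app image just layers code on top):

    cat > Dockerfile.base <<'EOF'
    FROM debian:stretch-slim
    # the slow, rarely-changing system dependencies live here
    RUN apt-get update && apt-get install -y --no-install-recommends \
            imagemagick ffmpeg libxml2 \
        && rm -rf /var/lib/apt/lists/*
    EOF
    docker build -f Dockerfile.base -t quay.io/myorg/app-base:1 .

    cat > Dockerfile <<'EOF'
    FROM quay.io/myorg/app-base:1
    # only the application code changes between deploys
    COPY . /srv/app
    CMD ["/srv/app/run.sh"]
    EOF
    docker build -t quay.io/myorg/app:latest .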
The point of the && in a Dockerfile is that every single RUN directive adds another layer to the final image. Fewer layers = slimmer images = faster and lighter deploys.
It makes deployment really easy so long as you have the infrastructure to deploy to. I have most experience with Amazon ECS but you could also manage it with Kubernetes on AWS, Azure or GCP.
My company now deploys every app that can run on Linux as a Docker container, regardless of whether it needs a single instance or 50.
It's not as though it doesn't introduce its own problems, though. For example, you might patch software on the host, but you still need to patch it in the container, and it can be difficult to get an inventory of that - say there's a zero-day found in nginx. You still require organisational discipline; that never goes away.
Even if you don't use Docker in production, I think it's still very useful tool for local development, combined with docker-compose.
When I'm working on one service, I often depend on a few other services too. I usually need a storage layer, perhaps API calls to related services; it may even need to talk to some Amazon services like S3 or SQS. Having a simple way to spin up every dependency locally, even AWS (lots of great AWS-API-compatible images out there), is really useful, if only for local development.
I use it a lot for ML work. Nvidia-docker makes it easy to setup CUDA and partition access to GPUs (if you have more than one). Docker makes it easy to setup tensorflow, Keras, python, etc. which I find to be a royal pain otherwise.
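e.g., with Docker 19.03+ and the NVIDIA container toolkit installed (older setups used the nvidia-docker wrapper instead), a quick sanity check looks like:

    # confirm the container can see the GPU(s)
    docker run --rm --gpus all nvidia/cuda:10.1-base nvidia-smi

    # the older wrapper form, if you're still on nvidia-docker2
    nvidia-docker run --rm nvidia/cuda:10.1-base nvidia-smi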
Anytime a potential new hire submits their sample project in a docker container and I can stand it up and everything 'just works' I am happy. No need to mess with any dependencies and it should work as they intended it.
Part of the article calls out how the alternatives (like CRI-O) are shipping Docker-compatible CLIs, so that users can keep running `docker build` but with a completely different runtime.
CRI-O isn't comparable to Docker for developers, it's a replacement for Docker when used as part of a Kubernetes cluster.
The thing is that whilst various operations are shipping bits of the functionality Docker provides across various different tools, for developers working with containers none of them (that I've seen) has an easy-to-use, cross-platform setup.
If you are a coder, and you want to implement auto devops yourself, for free, then see this video.
https://youtu.be/Qlj6NiOy5jM
You can do it too. Just remember, the professionals will shout at you for using this method; it makes them obsolete.