Docker (computer.rip)
342 points by hundt on March 25, 2023 | hide | past | favorite | 138 comments



The narrative seems quite clear to me. They released the tooling and the services to become the de facto solution, and then Swarm was supposed to be the cash cow that turned that into cash flow. And then k8s happened.

They've raised a tonne of capital, and it probably looked sane at the time. And now they're grasping at straws trying to figure out how else they can turn this into revenue.

A lot of the recent narrative has been worded like pivoting into a glorified webhost was their evil plan all along. It's not, it's an act of desperation.


I feel like all that VC money was actually their undoing. Instead of working with the community, Docker spent bucketloads of money acquiring a lot of community projects and startups in an obvious attempt to become an end-to-end container solution company. But, they didn't have a plan and ended up killing most of those acquisitions. To me, it felt like Docker was using all the money they got to squash the community instead of working with it.

Every year at DockerCon, there would be flashy announcements that went nowhere. As a developer, those years from 2013 to 2017 were both super exciting and super frustrating. Everything started falling apart when Docker (the project) got split into Moby for open source and the rest went commercial. Docker started to sell Docker Swarm (the original), only to kill it a year later with a new Docker Swarm (what we have today). Then, Kubernetes started growing traction, leapfrogging both Docker Swarms, Mesos, and others in adoption. They never had a cohesive commercial plan. Just lots of empty promises and burned bridges.

When I think of Docker (the company), I feel bitter about all the projects they killed in their attempt to own the market. I love using Docker (the software), but the company's just one big disappointment.


>I feel like all that VC money was actually their undoing.

I don't think you have Docker without VC money. I think at best you have some LXC-lite project that is intertwined with GCP/AWS/Azure, but unfortunately, I don't think you get something quite as polished as Docker in the same timeframe without VCs pouring millions in to hire people to work on it.

Where I am sympathetic to Docker is that Docker wouldn't have worked as your standard open source project; it took a ton of paid engineering hours (and you can argue X% was wasted on projects that went nowhere, but that's a given in any org) to get the software and infrastructure right, and if they had tried to charge developers they would have gotten nowhere. Even now, when people are keenly aware of the value of Docker, trying to monetize it is met with tears and angry blog posts.

I think Swarm was the plan (as the money has always been in providing infrastructure), Google just had more developer sway (which also killed Mesos).

The way I see it, there are probably a ton of services you could build, with the right team and the right number of people, that could make an even greater impact than Docker on productivity, and they would never be built because it would be way too difficult to monetize them and the people with the talent to build them are going to get paid more working at FAANG than by taking any sort of OSS approach.


>there are probably a ton of services you could build, with the right team and the right number of people, that could make an even greater impact than Docker on productivity, and they would never be built because it would be way too difficult to monetize them

There's a big difference between creating value and capturing value. Probably the biggest single issue with capitalism in general.


In all fairness, Docker never worked with the community at all. At times it looked like they almost got their own community going, but it was never a priority to them.

They were always in a race to reinvent everything the Docker way. Which is, sort of, what you need to do if you want to become an enterprise software company, if that's where you think the money is. It's pretty much exactly in the footsteps of VMware. But to make that work, they would have had to work much more closely with Windows, which was probably even harder than working with the Linux community.

Docker couldn't have existed without VC money. Free hosting was what made them hugely popular. Docker was a VC productization of containers.


Moby doesn't seem excellent either. At least in the tickets I've stumbled into, the stewards have not been very receptive.


Speaking personally, the fate of Docker, Inc. was clear to me when they took their $40M Series C round in 2014. I had met with Solomon in April 2014 (after their $15M Series B) and tried to tell him what I had learned at Joyent: that raising a ton of money without having a concrete and repeatable business would almost inevitably lead to poor decision making.

I could see that I was being too abstract, so I distilled it -- and I more or less begged him to not take any more money. (Sadly, Silicon Valley's superlative, must-watch "Sand Hill Shuffle" episode would not air until 2015, or I would have pointed him to it.) When they took the $40M round -- which was an absolutely outrageous amount of capital to take into a company that didn't even have a conception of what they would possibly sell -- the future was clear to me.

I wasn't at all surprised when the SVP washouts from the likes of VMware and IBM landed at Docker -- though still somehow disappointed when they behaved predictably, accelerating the demise of the company. May Docker, Inc. at least become a business school case study to warn future generations of an avoidable fate!

- Bryan Cantrill, https://news.ycombinator.com/item?id=28460504


It always seems to come back to the VCs. Docker did great and started growing a lot. More VCs jumped onto the next possible “unicorn” and the company had even more money. They had to do stuff with it.

But they weren’t paying back fast enough, and K8s was making waves. Better make money fast while you can. Time for the squeeze play so the VCs can win.

If they had been allowed to grow at a more natural rate maybe it would all be fine. If they were allowed to be happy with a 40% share of a big future market, things could be great.

But that’s not the VC-shoot-for-the-moon way.


Yeah, that's the elephant in the room that the article claims to acknowledge, but misses by an order of magnitude.

"Docker is also a company, and companies are expected to produce revenue." That reads like wages, office rent and perhaps a boat for the founders. But they can't be that humble anymore, not after burning through a quarter billion in VC money. (even if those numbers seems surprisingly harmless compared to some more consumer oriented VC bonfires)

Now it's their job to make those investments winning bets or ruin the company trying. All that free image CDN convenience much of the Docker revolution was built on? Too good to be true; it was effectively a loan from VCs betting on getting people hooked.


> then Swarm was supposed to be the cash cow that turned that into cash flow. And then k8s happened.

We are still using Docker Swarm in production. It seems to be working fine, so I always wondered why it never took off. But I am not a DevOps person. Can somebody please give some insight into why Kubernetes took off instead and why Docker Inc. failed with its cloud product?


k8s is basically a standard interface for cloud providers, where you can define load balancers, persistent volumes, certificates, etc. in addition to jobs and services. It has a well thought out declarative language that maps well to how ops runs things at scale.

There are quite a few optional tools built on the k8s model, like service meshes or tools to orchestrate a postgres cluster.

Swarm is a small piece of that, the scheduling system, and you have to run it yourself. It's really not comparable.
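
To make that concrete, a minimal sketch of the declarative model (all names and images are placeholders, not anything from the article): a Deployment and a Service are described as desired state and the cluster's controllers converge the real world toward it.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                   # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25   # placeholder image
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer          # the cloud provider provisions the actual LB
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 80

Apply it with `kubectl apply -f web.yaml` and the same file works on EKS, GKE, AKS or a local cluster.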


"Can sb please give some insight on why kubernetes took off instead and why Dock Inc. failed with its cloud product?"

IMHO, Google did something smart/evil here. Google did not use the docker-compose style YAML files for k8s configurations. This turned into a big deal. If you have a docker-compose file and want to run it in k8s, you cannot. So you need to make a decision: go with docker-compose YAML, which has no place to be hosted except Swarm, OR go with the k8s YAML style and be able to run it on AWS, GCP, etc.
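
For illustration, a rough sketch of the Compose side of that fork (values are placeholders): a file like this deploys directly to Swarm with `docker stack deploy`, but k8s can't consume it; you have to rewrite it as k8s objects or translate it with a tool like Kompose.

    version: "3.8"
    services:
      web:
        image: nginx:1.25        # placeholder image
        ports:
          - "80:80"
        deploy:
          replicas: 3            # honored by `docker stack deploy` on Swarm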


I still use Docker Swarm in production, but the main issue is the lack of support from cloud providers and tooling. For instance, there is no good solution like ArgoCD for continuous deployment and updating your stacks. In addition, if you want to be able to automatically add a new node to the cluster, you basically have to write some infrastructure code yourself. This makes infrastructure as code / configuration as code a pain, and these things are required to be compliant with standards like SOC 2.


Good question, I'm also interested in the answer. Probably has something to do with K8s being backed by Google, which has battle tested it in the form of their internal Borg infrastructure.


My understanding is that k8s was definitely never used internally at Google (at least at the beginning). They may have learned some lessons from Borg, but k8s is very much not Borg, nor a Borg component. I remember lack of battle testing and architectural differences being a key criticism when k8s was released.




Swarm is fine but k8s is a lot more advanced. I've used both and now that I understand k8s, I would choose it over swarm any day. Microk8s is neat if you want a lightweight cluster.


[flagged]


Clearly written by ChatGPT


Same applies to now trying to remake Java and .NET application servers with Kubernetes and WASM containers.

I'd rather deal with WebSphere 6.1 yet again than with a poorer replication of its developer experience and lesser tooling.


I know I will get downvoted for this as off topic, but this is just the latest of many blogs we've seen in the top 30 that show ZERO regard for legibility. Yes, I can zoom my browser, but c'mon.

A 13px font for paragraph text is nearly hostile. It's not that legible to people with perfect eyesight, but then it's not at all legible to anyone with imperfect eyesight. It's like saying you don't care if anyone who reads your blog would struggle doing that. And given how very simple it is to change, it's kind of insulting, specifically given how many years usability has been a thing.

10 years ago I wouldn't have written this comment. But now this isn't how you behave if you have an audience.


This website doesn't even set a font size. The font size is just your browser's default. You have your font size set too small. This website is one of the few examples of doing it right. How can a web developer know what your requirements are? Only you can know that. There are many bad things about the modern web but fortunately being able to set your own font size is still a thing.

Firefox even lets you set a minimum font size. And there is an option to stop websites overriding your choices which helps with sites like HN (but not the one you are complaining about) which explicitly set a small font size.


It was super readable on mobile - I greatly appreciated its easy to read format. Presumably other platforms yielded a different experience…


Not sure what you are talking about; the font seems much bigger than the one on Hacker News and pretty standard-sized for a website or a desktop (which usually has a default font size of 11 or 12px).

Besides:

- Nearly if not all desktops allow scaling. If your eyesight is that bad, you should set it at the desktop level anyway.

- All browsers and terminal emulators allow you to use your own fonts and sizes.

- Nearly if not all browsers and terminal emulators now allow you to zoom dynamically for that odd website and keep that preference.

- Firefox has reader mode; I guess similar extensions exist for most browsers.

> And given how very simple it is to change, it's kind of insulting, specifically given how many years usability has been a thing.

Changing to which size? 16px, 32px, 64px? There is no single form of universality regarding eyesight. And I would argue that if your eyesight is bad, the solution is prescription glasses, not websites with huge fonts.


Hacker News is a terribly designed website and just plain unreadable without zoom. 12px might have been the standard 15 years ago, but most sites use larger sizes. Gmail and old Reddit are using 14px. The New York Times and Washington Post are using 20px (and those are usually longer reads). The fact that you can zoom and override fonts does not give people a right to design unreadable/inaccessible websites.


>does not give people a right to design unreadable/inaccessible websites.

They aren't, as the rendering of a website is ultimately always controlled by the client/visitor.

Any website whose semantics are made only of text is infinitely more accessible than another that uses bigger fonts but includes images or content that only renders if JavaScript is activated.

And ultimately, they have every right to make it the way they want.

> Washington Post are using 20px (and those are usually longer reads)

In the full text maybe; I find much smaller fonts than those on computer.rip in metadata and subtitles.


> They aren't as the rendering of a website is ultimately always controlled by the client/visitor.

This is true in a technical sense, but not in a practical sense. If you want me to stay on your website, you'd better put in the effort to make it usable and readable. It's not the user's job to design the website; it's the job of the damn website designer!


Some users don't know how to use their browser's zoom feature. Most don't know how to change the default font in their browser, and those that do might not want to do this in case some sites break because of bad assumptions. Others don't like zoom, because it makes images look blurry. The default appearance of the page should be usable and accessible by an average user.

And yes, the 20px font applies to the full text. I'm not arguing that smaller fonts should be banned, things like metadata don't need to be as large. But the main content of the page should be larger and comfortably readable.


The average user uses prescription glasses if they have eyesight issues.

I've been wearing glasses since I was 6 or 7 years old.


I’m wearing glasses too, and they are of the correct power for my needs (as measured by an optometrist). That does not make HN any less unusable, nor the posted page any less uncomfortable, at 100% zoom from the recommended viewing distance.


This is why you should do more CSS with rems and respect the user agent's size, so it's easier for folks to get the size they want (this article's author used ems and defaults, though).


At least on my Android in Chrome, it seems to have the same font size as hacker news. Requesting "Desktop Site" yields the same results.

And yes, I do believe that Hacker News needs a bigger default font.

(I'm getting old. You will too!)


My default zoom on HNews is 133%. I too am getting old (or my eyes are at least), and agree with you old sods :))


More annoying to me than the font size is the font selection. Maybe it's because I didn't grow up using monochrome terminals, but for me monospace fonts are generally terrible for legibility (except in places like coding where lining things up vertically is useful). Typographically, the font designers have to make all sorts of readability sacrifices to make all characters the same width.


I agree. At least with Firefox we can use reader mode; in this case it was much better.


FWIW I downvoted you for predicting that your comment would be downvoted.


In its weird death spiral, if Docker Inc. were to be bought out by Microsoft, I shudder to think how much of the dev ecosystem would yet again depend on Microsoft's good graces to shoulder the burden of storage and data transfer costs for building products. They already do npm and GitHub (+ GitHub Container Registry), so they have some standing as stewards in this space.

On the plus side, it would perhaps give enterprises more confidence about their build pipelines remaining dependent on Docker Hub, maybe even being more comfortable paying for it.

On the flip side, far too much of the dev ecosystem would depend on Microsoft, the famed supervillain of open communities. EDIT: With that sense in mind, I am indeed rooting for Docker Inc. to succeed.


> if Docker Inc. were to be bought out by Microsoft...

You can always use Podman. We already have fully OSS solutions in the container space.


On my personal computer and projects I always use Podman. There is even a fancy web app if people desperately want a cool icon on their menu bar, though it pales in comparison to Docker Desktop in features. (For instance, it's not able to search for images, whereas Docker Desktop can).

I do not miss Docker at all considering that I can copy paste almost every docker invocation I see online and have it run flawlessly with Podman. Unfortunately my workplace will probably never even consider trying out Podman as a replacement to Docker. I wonder if someone here has a nice anecdote of using Podman successfully at their workplace.


We are now on the Podman train. We used Docker for a while on some parts of our services. We did a thorough comparison of Podman and Docker when it was time to move over all the rest of our legacy-deployed services. Podman won out on many technical, subjective, and future-looking topics. Feels good, everybody is on board here.


Big ask but I'm sure that would make a fantastic blog post...


Since you seem to miss the GUI, have you tried Podman desktop?


Call me when Podman fully supports the Compose spec (so Docker Compose v2).


Podman speaks the Docker API these days so you can use docker-compose with it.
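
Roughly like this, as a sketch for a rootless setup on a systemd-based distro (paths can differ):

    # Expose Podman's Docker-compatible API socket (rootless):
    systemctl --user enable --now podman.socket
    # Point Docker tooling at it:
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    # docker-compose now talks to Podman instead of dockerd:
    docker-compose up -d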


With no support for starting containers on startup using `restart: always`, or for BuildKit.



Podman runs or builds containers? As far as I understand it, Docker Desktop does 2 or 3 different things, and I haven't managed to untangle that yet because it hasn't fully broken my workflow yet. It's getting more and more tempting to remove it, but I need something for my weird Windows+WSL setup.


> podman runs or builds containers?

It does… in the same sense that docker (the program/tool) does, that is: both are not container runtimes (such as containerd, which docker uses, or runc and crun, which are the options typically used with podman) but container management tools that control a container runtime. So you would indeed use podman to create a container just like you would with docker.

As for building images, buildah is the tool most used in the podman community for that. And yes, both podman and docker can handle containerfiles (what is/was called "dockerfiles" in the docker world).

> need something for my weird windows+wsl setup

oh, well, uhm, my condolences for that. Luckily I never had to use that for containers, but a quick look on the podman homepage tells me that they also offer a virtualized WSLv2-based distribution for Windows users: https://podman.io/getting-started/installation.html#windows …And of course, there is Podman Desktop if you want something more click-UI-based than the command line podman (never really tried it though, so I can't really say if it's good or not): https://podman-desktop.io/


Docker Desktop runs dockerd in WSL and adds a few things to enable working with it from Windows (e.g. installs the docker CLI on the Windows side and exposes the dockerd control socket to it). You can easily get rid of it and replace it with running dockerd in WSL on your own, or with podman-based tools.
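
As a rough sketch of that DIY route on an Ubuntu-based WSL distro (package names and init handling vary):

    # Inside the WSL distro:
    sudo apt-get update && sudo apt-get install -y docker.io
    sudo usermod -aG docker "$USER"   # re-open the shell afterwards
    sudo service docker start         # older WSL distros have no systemd
    docker run --rm hello-world       # drive it from the WSL shell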


Docker did do something smart with Docker Desktop by including wsl-vpnkit...to work around brain-dead corporate VPNs that break docker networking. Your alternative solutions don't work when AnyConnect or GlobalProtect, etc, are running.


With AnyConnect you can definitely work around enough to make WSL2 + dockerd functional: https://gist.github.com/pyther/b7c03579a5ea55fe431561b502ec1...


This is only partially true. If _all_ traffic is tunneled over the VPN, then yes, you'll have this issue, but if the traffic is split such that only interesting traffic is sent over the VPN, then you won't.


AWS Client VPN breaks it just by having ever run, even if not currently active, as it sets `sysctl net.ipv4.ip_forward=0` 'for you'.

My suspicion is that since you pay for client connections, they don't want you running a single bastion client and having your real clients connect via that. But it's annoying, and if you really wanted to do that, you only have to edit the script, or set it back on a schedule/after starting up the client.


Yes, though the end user has no control over that knob...the corp end can turn off split tunneling and it's off by default.


Where do you host the base images? Someone's footing that bill or are you paying for it?


Why not decentralized? Debian (for example) can provide one or two servers with a gigabit line. That's going to be quite slow to download during every CI build. Therefore, your CI provider would have a caching proxy registry. You get fast service, upstream doesn't need a ton of bandwidth.
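
As a sketch of what such a cache can look like with the stock registry image (auth, TLS and persistent storage omitted):

    # Run a Docker Hub pull-through cache:
    docker run -d --name hub-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # Then point the daemon at it in /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }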


Docker images are often tied to CI runs. So a Github action is configured to "bake" a docker image that saves the image to Github Container Registry and a CI platform is triggered to pull the image and run tests immediately thereafter. Imagine this happening on every Git commit in a certain branch.

Often, there isn't time for the image to have propagated to a mirror server.


I meant the base images. If you're pushing your own images, then you can use any service you want (and pay for it). Since the traffic doesn't need to go through the internet at all, the bandwidth cost to Dockerhub would be irrelevant.


Base images are already available on different container image hosting platforms. For instance, Ubuntu is here [1] on Amazon ECR. So it's a matter of updating Dockerfiles to use them.

Then again, there's the question of finding image sources you can trust to be updated and secure. Docker Hub has a "Docker Official Image" tag for critical base images that are managed by each community.

[1] https://gallery.ecr.aws/ubuntu/ubuntu
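
In practice that change is a one-line diff per Dockerfile; the exact repository path and tag should be taken from the gallery page above, e.g. something like:

    # Before: implicit Docker Hub
    FROM ubuntu:22.04
    # After: the same base pulled from Amazon's public registry
    FROM public.ecr.aws/ubuntu/ubuntu:22.04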


That's my point - that it's entirely possible to not rely on the single central repository.

Re. finding the sources, a central namespace may still be useful, hence the use of caching proxies.


> I shudder to think how much of the dev ecosystem would yet again depend on Microsoft's good graces to shoulder the burden of storage and data transfer costs for building products

Does that hint that the model of sending around megabytes-to-multigigabytes of VMs is inherently too expensive to maintain as a backbone for an awesome tool?

For the same reason, I wonder who provides Maven Central and the npm repositories, and whether they will keep doing it for free, but at least those are billions of small jars, not hundreds of thousands of gigabyte-sized VMs.


> model of sending around megabytes-to-multigigabytes of VMs is inherently too expensive

Yes. I think automated build pipelines running 24x7 that can request even the oldest version of a sizable image, without any caching on their end, are part of the issue. There was no limit that I'm aware of on the number of tags/versions per image or per OSS account on Docker Hub, so just like package repositories, effectively every image had to be available forever, and each image was of significant size. I don't believe storage and network transfer costs have dropped at the same rate that adoption of build pipelines and automation has grown.

The same issue exists for apt, npm, Maven, PyPI, or any other repository, but yes, the storage requirements there should be significantly smaller.

Aside: Because Java has been around for so long in enterprises, many have learned over time to set up registries internally - a combination of wanting to host private packages securely on prem and protecting against downtime and supply-chain attacks. JFrog Artifactory is pretty commonly seen. However, IIRC the npm registry was not easy to self-host on prem in the early days, and many enterprises had their private packages hosted on npm.


The ISO mirrors of every Linux/BSD distro have been successful for decades. Decentralized repositories could solve many problems. Add BitTorrent as an acceptable usage pattern, like the free AI community is doing, to solve it even further; the Internet was not designed to be centralized IMHO.


The difference is that the average distro user cannot just push a new ISO image to the mirror network, say after changing some installer defaults (or more realistically, push a new QCOW2 image). There's also a substantial delay between mirror pushes and the update being ready for installation everywhere, something that developers probably would not accept in their pipelines.


>Microsoft, the famed supervillain of open communities.

FOSS: "Never thought I'd die fighting side by side with Microsoft."

Microsoft: "What about side by side with a friend?"

FOSS:


Since Azure became the most relevant platform for Microsoft's business, DevDiv, now under the Azure org chart, has turned into a polyglot unit, not only about .NET and C++ but about anything that can bring developers into Azure.

Who would have imagined that 20 years later, Microsoft would become yet again a Java vendor, with their own OpenJDK-based distribution and upstream contributions to the ARM JIT and escape analysis improvements?


They're hosting npm now? I didn't know.



> How is it that Docker Inc., creator of one of the most important and ubiquitous tools in the modern software industry, has become such a backwater of rent-seeking and foot-shooting?

My guess: Because not all good ideas are profitable. Especially in software.

I read most but not all of the article, so if I missed this already being stated, that’s egg on my face.


As a newcomer to the DevOps world I was kind of surprised at the general thesis of this article, that companies use Docker Hub and using something different is awkward. Neither of the two companies I've worked for uses it (Artifactory in both cases), and there is a general taboo around having Docker Desktop binaries on any company systems (though Docker Engine seems to be prevalent). I guess I had just assumed that the golden/default path was to use one of the (non-Docker) commercial registries. So from that perspective, the suggestion that there are some patterns that still use Docker Hub by default was actually enlightening.


I have exactly the same experience. Can't imagine why someone'd use Docker Hub by default for enterprise use cases nowadays.


What I've seen is in-house base images in AWS's registry, but still pulling stuff like Fedora base images from Docker Hub.


As a one-man shop I actually went back to Docker Hub after my DO-based registry started flaking. TBF I was self-hosting the registry because I'm cheap, but the images were in DO Spaces. It worked fine for a couple of years, then became unreliable. Knowing not much else, I decided to give Hub the $5/mo or whatever to save me the pain. Not looking forward to figuring out what magic auth keys I need to reconfigure to pull from somewhere else now.


> In particular, the union file system (UFS) image format is a choice that seems more academically aspirational than practical. Sure, it has tidy properties in theory, but my experience has been that developers spend a lot more time working around it than working with it.

What is the alternative that is better? The ability to have layers that build on top of each other and can be cached is a big feature... what alternatives provide that and are better?


IMO image definitions should be a list of mounts that may be overlays on root but may also be more “normal” mounts to directories within root. I should be able to make an image that is ubuntu:bionic plus a conda installation at /opt/conda plus a personal package at /usr/local/mything. Currently you have to decide how to stack those layers, which is unnatural and prevents sharing/deduplication of partial-filesystem images where there’s no reason to prevent it.

Taken to the extreme, look at something like Nix (or conda, come to think of it). Why can’t I just have one copy of a package of a given version shared by all containers, if they all want that package? Unix file systems should be great at that kind of composability; that’s the advantage of a unified tree instead of a tree-per-source. But in the Docker model, you’re stuck with a stack.

My ideal image definition is a hybrid between docker’s immutable hash-addressed image layers and an fstab file to describe how and where to mount them all.


The POSIX standard requires certain behaviours from the filesystem, that POSIX-compliant software can rely on.

Unfortunately, those behaviours are mutually exclusive with transparent layering.

It's certainly possible to build a file-system whose behaviours are compatible with that kind of transparent layering - Plan9 was built on exactly that model, for example - but then it wouldn't be a POSIX-compliant filesystem anymore.

The promise of Docker was that you'd be able to deploy your existing applications in a more reliable, repeatable way, but that breaks down when you have to tinker with your application's file-handling code, or jump through extra hoops to flatten the layers of your container's filesystem image.


One should make a distinction between:

* The general idea of mixing together filesystems+folders to achieve re-use/sharing/caching.

* The "Dockerfile" approach to this - with its linear sequence of build-steps that map to a linear set of overlays (where each overlay depends on its predecessor).

The "Dockerfile" approach is pretty brilliant in a few ways. It's very learnable. You don't need to understand much in order to get some value. It's compatible many different distribution systems (apt-get, yum, npm, et al).

But although it's _compatible_ with many, I wouldn't say it's _particularly good_ for any one. Think of each distribution system -- they all have a native cache mechanism and distribution infrastructure. For all of them, Dockerization makes the cache efficacy worse. For decent caching, you have to apply some ad hoc adaptations/compromises. (Your image-distribution infra also winds up as a duplicate of the underlying pkg-distribution infra.)

Here's an alternative that should do a better job of re-use/sharing/caching. It integrates the image-builder with the package-manager:

https://grahamc.com/blog/nix-and-layered-docker-images/

Of course, it trades away the genericness of a "Dockerfile", and it no doubt required a lot of work to write. But if you compare it to the default behavior or to ad hoc adaptations, this one should provide better cache efficacy.

(All this is from the POV of someone doing continuous integration. If you're a downstream user who fetches 1-4 published images every year, then you're just downloading a big blob -- and the caching/layering stuff is kind of irrelevant.)


I’m someone who had a front row seat to the emergence of Docker, and some might say competed with them (I’d disagree on that point). I don’t plan on commenting on their company, business model, or recent decisions. The only thing I want to comment on is the claim Docker was evolutionary, not revolutionary.

I disagree, I believe Docker /was/ revolutionary. And I feel like I see heavy technologists make this sort of dismissal based on technical points too soon. From a technical perspective, it was arguably evolutionary — a lot of people were poking at LXC and containerization a long time before Docker came around — but from a product perspective it was surely revolutionary.

I used to joke, in my own experience building a business in the DevOps space, that you’d spend 2 years building a globally distributed highly scalable complex piece of software, and no one would pay for it. Then you slap a GUI on it, and suddenly someone is willing to pay a million dollars for it. Now, that’s mostly tongue in cheek, but there is a kernel of truth to it.

The kernel of truth is that the technology itself isn’t valuable; it’s the /humanization/ of a technology, how it interfaces with the people who use it every day.

So what Docker did that was revolutionary was take a bunch of disparate pieces, glue them together, and put an incredible user experience on top of it so that that technology was now instantly available in minutes to just about anyone who cared.

At some point in the article, the author says it’s maybe something about a “workflow.” I’m… highly biased to say yes, absolutely. One of my core philosophies (that became the 1st point of the Tao of the company I helped start) is “workflows, not technologies.” When I talk about it, I mean it in a slightly different way, but it’s highly related: the workflow is super valuable for adoption, the technology is to a certain extent, but less so.

Technology enthusiasts (hey, I’m one of you!) usually hate to hear this. We all want to think you build the best thing or a revolutionary thing and then it just wins. That’s sometimes, but rarely, the case. You need that aspect, and you ALSO need timing to be right, the interface to be right, the explanation to be right, etc. Docker got this all right.

(Now, turning the above success into a business is a whole different can of worms, and like I said in the first paragraph, I don’t plan on commenting.)

For the author: I don’t mean any offense by this. I mostly agree with the other points of your post. On “FROM” being revolutionary I was nodding quite vigorously. Being able to “docker run ubuntu” was super magical, etc. I mostly wanted to point this out because I see MANY technologists dismiss the excitement around technologies purely on the basis of the technology over, and over, and over again, and the sad thing is it’s just one part of a much bigger package.


(author)

I'm not sure that we really disagree, but I wrote this sort of late and I also think I wasn't entirely clear. The point I was trying to make is that the "container runtime" part of Docker is a lot less important than the tooling they put around it, and they made Docker Hub a very core part of that broader ecosystem.


I think that in many ways creating a shared, stable namespace for images was actually a bigger contribution than any of the technology. The ability to type something like 'FROM python:3' at the top of your Dockerfile and have that automagically mean what you expect was definitely revolutionary in terms of productivity. Behind the scenes I don't know that it really matters much whether that references an image hosted in a repository by Docker the company, or a file in AWS S3, or a tarball from the Python Software Foundation. And that namespace is exactly what they're stabbing in the heart.


Namespaces are so important to ecosystems. See the issues with NPM packages, discussions about Cargo organisation names, etc. I was a huge fan of Deno early on because it used "the web" as its package namespace. Every time I see a new tool come out which bakes an assumed default into a "bare name" I die a little.


>The kernel of truth is that the technology itself isn’t valuable; it’s the /humanization/ of a technology, how it interfaces with the people who use it every day.

Apple in a nutshell.


Apple takes it even one step further: they understood (just like haute couture, cosmetics and luxury watch brands before them) that it's neither the product nor the technology nor even the interface itself that's what's really valuable (in the sense of: monetizable), but the user EXPERIENCE in the sense of how it makes the user FEEL. Which is why Apple excels in brand marketing.

If you want a cash cow, you don't want a technology-focused project or a mere company, but something between a luxury lifestyle brand and a cult.


In retrospect - re-reading my last comment again, too late for editing - I just noticed that there is a small detail error in it, owed to my technology/development-oriented bias that is too detached from the brand marketing and sales/monetization oriented mindset of Apple:

Instead of "user", the more fitting word would have been "customer", though even that would only have been a rough one-word approximation for the concept that could in the Apple brand marketing context better be described as "person (to be) suggestively conditioned to be a follower/fan who is content with being kept in a walled garden and milked as much as possible in order to be part of the circle and to continually FEEL the EXPERIENCE".

So yes, "EXPERIENCE" is key, but "user experience" in the sense that "user" has in development/tech or even specifically in UX is only a smaller part of that.


This used to be common to all home computer platforms, with the exception of the PC.

We used to buy the whole vertical experience, from hardware, OS integration and peripherals.

The clone market, which IBM failed to prevent, kind of broke this down.

Yet it is no surprise that OEMs nowadays try to bring this model back, not only because of Apple, but also because it is the only means to differentiate themselves while recovering the margins that have been lost all these years.


I'd replace humanization with socialization: locating a good-enough spot in the problem space of an industry that somehow resolves a lot of tensions.


The Docker saga teaches us the significance of default settings, the relationship between free and paid software services, and the need to consider the economic implications of relying on free services provided by a company, as these can alter over time.


How does the Docker story compare to npm, who are also freely hosting a bunch of stuff, heavily downloaded and relying on some paid users but mostly free? And npm has “competing” repositories too. Could the same happen with npm, where they need to charge?

I get that NPM packages are smaller than docker images typically.


This is a great point, but do note that it is quite easy to set up [1] a private npm registry as well. Most orgs actually do just that, as you really do not want a production build failing if npm goes down.

Either that or you vendor in your dependencies.

[1] https://smalldata.tech/blog/2023/03/17/setup-a-private-npm-r...
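
For a flavour of how little it takes, here is one common self-hosted option (Verdaccio) as a rough sketch; the blog post above covers a fuller setup:

    # Run a private registry that proxies/caches the public one:
    docker run -d --name npm-registry -p 4873:4873 verdaccio/verdaccio
    # Point npm (globally or via a project .npmrc) at it:
    npm config set registry http://localhost:4873/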


NPM went the other way: get bought out by a big company with deep pockets and a strategic interest in owning the ecosystem.


Maybe because they are owned by Microsoft


"npm Enterprise"

You can run an in-house npm repository with that; it's sold by npm Inc.

I don’t know how sustainable it is but that’s probably one of their cash cows.


Docker is at $130 million in annual revenue now!

At current average SaaS revenue multiples (6.7), Docker is on the cusp of Unicorn status.

It's weird to read comments about "poor, sad, dying Docker" given how ridiculously successful Docker's Desktop licensing scheme is.

https://devclass.com/2023/03/24/docker-subscription-revenue-...


https://en.wikipedia.org/wiki/Solaris_Containers (2004). I will just leave this here.


Yes, Solaris was doing "containers" in the mainstream before Linux but pointing that out as a response to articles like these misses the point of how and why Docker exploded; it was Docker's UI, the user experience and Docker Hub that really unlocked the full potential of the technology.

Solaris containers didn't have anything like Docker Hub nor was setting them up as easy as "docker run".

(Posting as ex-Solaris guy)


Not only Solaris, other UNIXes as well.

My introduction to containers was in HP-UX in 1999, via the HP Vault infrastructure.

Tru64, while short lived in the market, also had similar ideas.

And then there is the whole lineage of mainframes and micro computers from IBM.


Okay, so Swarm is dead, but is Kubernetes actually that good, or is it just ubiquitous and you're forced to use it today? What about Nomad? Or MRSK?


Nomad is awesome and works at scale. The engineers continue to battle harden it and it’s a joy to work with. You do have to manage things like service discovery (usually with consul) and traffic routing separately - but the integration with vault is sublime.

About the only real negative of Nomad is that it doesn’t have the mindshare that k8s does, so you don’t see the amount of developer engagement in extending it the way you do in the k8s SIGs. Also, being an expert in Nomad doesn’t give you the same number of career opportunities, and on the other side - there aren’t umpteen thousand nomad SREs the way there are with k8s - so getting someone up to speed can take a couple months (but this system is very well defined, well documented, and small enough that any half talented engineer can master it very quickly)

Nomad does have the very important advantage that HashiCorp stands behind the product - so if anything goes awry, you’ve got a support team and escalation that will jump on and root cause/resolve any issue, usually within a matter of hours, and even in the really squirrelly cases (that you are only likely to see when you are managing many, many thousands of nodes in a cluster) within days.


> You do have to manage things like service discovery (usually with consul)

Since the last couple of versions there has been native Service Discovery in Nomad which works pretty well.


My personal experience is that Consul and Vault are too complicated to fiddle with. I can spin up a Kubernetes cluster in minutes, but I gave up on trying a Consul+Vault+Nomad lab given all the setup steps for replication, RBAC and whatnot.


Fewer career opportunities need not be bad if they are better paid and the companies are more accommodating.


Swarm isn't dead: Mirantis still puts engineering resources into it and has plenty of businesses relying on it. A bunch of new features were just released: https://www.mirantis.com/blog/announcing-the-23-0-major-rele...


k8s is good, I don't get the hate against k8s.


k8s is good, but complex, and thus not fit for everyone and everything. It gets overused to death due to being the most popular choice, so you have a lot of people who dislike it because of bad experiences.


There's an absolutely massive ecosystem of tooling around it to make a developer's life easier and abstract away the confusing parts. No one needs to write K8s manifests directly if they don't want to.


The abstractions work very well, until they don't and you need to dive in 5 abstractions deep to debug what's happening, or an upgrade needs to be done and you have conflicting dependencies a few levels deep.


Even during development I’ve had to learn far more than I’d like about k8s internals as things broke left and right.

ECS, nomad or even autoscaling VMs are much easier to deal with when they fail.


But isn't this the case with everything? I feel there is a necessary level of complexity to everything. Things can't be made simpler beyond a level.


YAML spaghetti, plus a complexity that makes Java application servers look like toys.


They have expertise and visibility. If I were them, I would extend docker-compose with a cloud version that runs flawlessly, including stateful workloads with backup and restore, and charge for that. Heroku, but even more simplified.

You change your docker-compose file and push; we detect it via webhook and deploy. Logs, metrics, everything from the command line with Bubble Tea or something.

Most companies have brilliant engineers and shortsighted, incompetent out of touch product teams.


> There's been a lot of discussion lately about Docker, mostly about their boneheaded reversal following their boneheaded apology for their boneheaded decision to eliminate free teams.

So making a bad decision is bad, but admitting it was a bad decision and reversing it is also bad?


You lose a lot of goodwill from the community if you refuse to change the code so that `docker pull image` no longer defaults to hub.docker.com AND then start to monetize teams, ESPECIALLY non-commercial open-source teams.

If they would mandate a registry then many more people would host their own and take the load off of their system.

But no, they want to have it all. And yea, they can. That I don't care about.

But you can't make a change, and walk back from it, and expect people to be happy, given the story that came before all of this.
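
To spell out what "mandate a registry" means in practice: today a bare image name silently expands to Docker Hub, whereas a fully qualified name makes the registry explicit (registry.example.com below is a made-up host):

    # Implicit: "nginx" really means docker.io/library/nginx
    docker pull nginx
    # Explicit: the same pull against a self-hosted or cloud registry
    docker pull registry.example.com/library/nginx:1.25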


Very often, reversing a decision is not even comparable to never having made it in the first place.

Currently the situation is that a lot of people will think twice before generating any dependencies with free teams, or perhaps Docker altogether.


Absolutely. The ones I hear are moving away from Docker Hub are currently not moving away from it because they have to, as Docker Inc reversed their decision. They are moving away from Docker Hub because they want to avoid getting hurt in the future, as Docker Inc proved they don't really care about them unless it becomes a PR disaster, and next time there might not be one.


Admitting it was bad and reversing it is (usually) better than just letting the original bad decision stand, but you can't erase what you've done. You still publicly decided to do something that caused people to lose trust and faith in you. People will wonder if you're going to make other bad decisions, and then not walk them back when people tell you how bad those decisions are.

There are also good and bad ways to apologize and change your mind. I don't have an opinion as to whether or not Docker's apology and reversal were done well, but I think it's fair that some could believe they weren't.


Announcing it damaged their reputation. Reversing doesn't undo that (because there's always the chance they'll do it again), but now they don't even have the benefit of not having to host so many images.


What went bad is a little thing called trust. From the community.

Probably, especially from those who think in the long term, like those who build things for themselves. They don't like reading news from Docker every day and keeping in mind that their project images and even base images can just disappear overnight. It's too expensive for them to track Docker's decisions. It just takes resources.

Making bad decisions is not bad at all. Losing trust is.


Or they were just testing the waters and checking the reaction to the decision, as companies often do.


> Docker images are relatively large, and Docker Hub became so central to the use of Docker that it became common for DevOps toolchains to pull images to production nodes straight from Docker Hub

Not only that, but it was actively encouraged by all Docker fanbois to pull as soon as you can. When I saw Watchtower the first time I was just speechless.

Though IMO they had a chance at getting money long before that debacle: https://news.ycombinator.com/item?id=34377674


> Docker Inc.'s goal was presumably that users would start using paid Docker plans to raise the quotas but, well, that's only attractive for users that either don't know about caching proxies or judge the overhead of using one to be more costly than Docker Hub... and I have a hard time picturing an organization where that would be true.

But that achieved their goal too? They wanted to reduce losses from bandwidth costs, and that works by either making the users pay or making them use less bandwidth.


Question: Is it possible for Docker to die as a company, the VCs to lose their money, and the technology to survive and still be the mainstay? If the answer is no, what's the future, and what do you expect the timeline will be? Do you have a probability of that actually occurring?


I don’t know how Docker Hub falling by the wayside plays out. I suppose most cloud providers really should offer their own container image repo mirrors or something instead. But it’ll be painful

Wild to me that Docker Inc. doesn’t just charge to pull prebuilt images. Just send Dockerfiles!


Yeah, I would like to build something against FROM debian:wheezy.


Possible for Docker to die as a company; VC's lose their money; the technology survives and is still the mainstay?

Yes, this is the default assumption.


The issue here is mainly commercial.

They had guaranteed cost (hosting & serving a bunch of heavy data) and no obvious monetization play available.


> their boneheaded reversal following their boneheaded apology for their boneheaded decision

How can the decision and reversal both be boneheaded?


> Still, the point of this tangent about Docker Desktop is that Docker's decision to monetize via Desktop---and in a pretty irritating way that caused a great deal of heartburn to many software companies---was probably the first tangible sign that Docker Inc. is not the benevolent force that it had long seemed to be. Suddenly Docker, the open-source tool that made our work so much easier, had an ugly clash with capitalism.

> Docker Hub, though, may yet be Docker's undoing. I can only assume that Docker did not realize the situation they were getting into. Docker images are relatively large, and Docker Hub became so central to the use of Docker that it became common for DevOps toolchains to pull images to production nodes straight from Docker Hub. Bandwidth is relatively expensive even before cloud provider margins; the cost of operating Docker Hub must have become huge. Docker Inc.'s scaffolding for the Docker community suddenly became core infrastructure for endless cloud environments, and effectively a subsidy to Docker's many users.

I'm not sure why they couldn't have been a bit more aggressive about monetization from the start?

DockerHub could have been free for an X amount of storage, with image retention of Y days by default, with Z amount of traffic allowed per month. The Internet Archive tells me that they got this half right, with "unlimited public repos" being where things went wrong: http://web.archive.org/web/20200413232159/https:/hub.docker....

> The basics of Docker for every developer, including unlimited public repos and one private repo.

For all I care, Docker Desktop might have just offered a CLI solution with the tech to run it (Hyper-V or WSL2 back ends) for free, but charge extra for the GUI and additional features, like running Kubernetes workloads. BuildKit could have been touted as an enterprise offering with immense power for improving build times, at a monetary cost.

Perhaps it all was in the name of increasing adoption initially? In a sense, I guess they succeeded, due to how common containers are. It is easy to wonder about these things after the fact, but generally people get rather upset when you give them things for free and later try to take them away, or introduce annoyances. Even if you cave to the feedback and roll back any such initiatives, the damage is already done, at least to some degree.

I still remember a piece of software called Lens one day starting to mandate that users sign in with accounts, which wasn't previously necessary. The community reacted predictably: https://github.com/lensapp/lens/issues/5444 (they also introduced a subscription plan later: https://www.reddit.com/r/kubernetes/comments/wakkaj/lens_6_i...)

That said, I self host my own images in a Nexus instance and will probably keep using Docker as the tooling/environment, because for my personal stuff I don't have a reason to actually switch to anything else at the moment and Docker itself is good enough. Podman Desktop and Rancher Desktop both seem viable alternatives for GUI software, whereas for the actual runtimes and cloud image registries, there are other options, though remember that you get what you pay for.


You grow WAY faster with a free product. There is no downside for people trying it out.

If it was paid, even a small amount, that’s a hurdle for people. Plus people avoiding it would have created more/stronger competing products as they had more incentive.

Get a ton of users then try to monetize later is a very common SV play for VC backed companies.


>I'm not sure why they couldn't have been a bit more aggressive about monetization from the start?

I'm not sure there is that much money in running a glorified, specialized S3. Charging for disk space is terrible - the people who have money would probably just set up a private repo on S3 where it's cheaper, and the people who don't aren't going to pay you.

For the amount of money they raised I don't think that would have been a convincing story to tell.


Of course there's money. Look at Dropbox and the like.


I think we're past the point where key players like AWS have _run with_ the technology Docker provided and did not pay their fair share in the process.

Docker as a company may be a joke, but I don't think the software will be nearly as nice to use without them. I think it's ridiculous that so many asshats are jumping on the hate Docker (the company) bandwagon without understanding how much they have been taken advantage of by the big players who can absolutely support them, but choose not to.

Sometimes I am so disappointed at how much ego still exists in tech. We're supposed to be more educated than the folks who came before us, yet we're doing a worse job.


> Docker as a company may be a joke, but I don't think the software will be nearly as nice to use without them. I think it's ridiculous that so many asshats are jumping on the hate Docker (the company) bandwagon without understanding how much they have been taken advantage of by the big players who can absolutely support them, but choose not to.

As much as I do not condone said big player's actions here, the whole system just doesn't reward "doing the right thing" as a general rule. If the licensing allowed it, they were in their right to do it. Even if they did do their right thing and support them, their competitor may not have. The morality element just doesn't have much weight the way things are.


This. FOSS licenses have their problems.


Yep. Hate the game, not the player.


No one will pay unless you force them. We can wish all we want that the world is different but I've seen this over and over. You need to hold something back from day one or you'll never make money.


> You need to hold something back from day one

I don't know if it needs to be held back, but the product needs to have innovation or it's going to feel like features are being held back and put a sour taste in the community's mouth.

I get the outrage. If it's pointed at Docker's inability to productize, then bravo, but I think AWS and the other big guys deserve most of the blame for taking tons of money on the table without contributing much to the state of the ecosystem. It's just tragedy of the commons with more steps.


WASM is the future


The Docker management team needs visionaries. Someone who actually understands what Docker can truly be. Right now they are just trying to milk the cow before it can even produce milk.


The cow has to produce milk or they go bankrupt



