Docker is deleting Open Source organisations - what you need to know (alexellis.io)
1556 points by alexellisuk on March 15, 2023 | 738 comments



As an SRE Manager, this is causing me a hell of a headache this morning.

In 30 days a bunch of images we depend on may just disappear. We mostly depend on images from relatively large organizations (`alpine`, `node`, `golang`, etc), so one would want to believe that we'll be fine - they're all either in the open source program or will pay. But I can't hang my hat on that. If those images disappear, we lose the ability to release and that's not acceptable.

There's no way for us to see which organizations have paid and which haven't. Which are members of the open source program and which aren't. I can't even tell which images are likely at risk.

The best I can come up with, at the moment, is waiting for each organization to make some sort of announcement with one of "We've paid, don't worry", "We're migrating, here's where", or "We've applied to the open source program". And if organizations don't do that... I mean, 30 days isn't enough time to find alternatives and migrate.

So we're just left basically hoping that nothing blows up in 30 days.

And companies that do that to me give me a very strong incentive to never use their products and tools if I can avoid it.


The images you mention (alpine, node, golang) are all so-called “Docker Official Images”. Those are the ones without a slash (i.e. no namespace) in their name: https://hub.docker.com/search?q=&type=image&image_filter=off...

They are versioned and reviewed here: https://github.com/docker-library/official-images

I don't expect them to go away.

Disclosure: I maintain two of them (spiped, adminer).


"don't expect" or "for certain"? Can't really plan ahead without some kind of certainty.


There is no real distinction between those two phrases here, because the person using those phrases isn't ultimately in control.


Indeed. While I do maintain two of them, that maintenance is effectively equivalent to being an open source maintainer or open source contributor. I do not have any non-public knowledge about the Docker Official Images program. My interaction with the Docker Official Images program can be summed up as “my PRs to docker-library/official-images” (https://github.com/docker-library/official-images/pulls/TimW...) and the #docker-library IRC channel on Libera.Chat.


Even if they were fully in control, there still would not be a distinction, because whoever is controlling this decision could change their mind at a later date.


My analysis of this:

After Kubernetes became the de-facto container orchestration platform, Docker sold a bunch of their business to Mirantis. They shifted their marketing and positioning from enterprise to developers. From public sources, it sounds like their strategy is doing pretty well.

The question then is, does Docker look like they are committed to open-source and the open-source ecosystem?

1. You would think that a developer-focused strategy would involve open-source, and that alienating the open-source world would reduce their influence, hurt their branding, and narrow their funnel. (But maybe not. Are the people paying for Docker Desktop also big open-source users and advocates?).

2. It sounds like Docker has full-time internal teams that maintain the official Docker images and accept PRs from upstream.

3. Docker rate-limited metadata access for public repositories. Is that a signal for weakening support for open-source?

4. According to the article, the Docker Open Source program is out-of-touch ...

5. ... But they may still be paying attention to the big foundations like CNCF and Apache. So the images people depend upon for those may not be going away anytime soon

So I would look for other signals for diminishing commitment to open-source:

- If several of the larger projects pull out of hosting on Docker Hub

- If the internal Docker teams are getting let go

- If the rate at which PRs are accepted for the official images is reduced

- If the official images are getting increasingly out of sync with upstream

- Some other signals that match


From my understanding, Docker is moving to become a (proprietary) enterprise packager for closed-source or paid-for software development.

- Keep access to big, permissively licensed open source software.

- Charge for higher pulling limits and tools.

- Keep source open, but infra closed, hence converting whole infra to "source available".

- Keep "small open source fish" out of the pond, by charging for what's available on the hub/platform.

As a result, they are kinda becoming the "Snap Store" of containers. Premium feel, high fees and a higher bar for entry, etc.

At the end of the day, Docker is just a hungry whale chasing money. I can't blame them, but they are not motivated by the value they provide anymore. They are motivated by the money they can make.

Sad, but understandable (to a degree). This makes them very easy to disrupt in free software arena. I'm a paying Docker Pro customer, but I might look somewhere else in the long run.


Interesting, "snap store of containers" sounds plausible. Seems like a decent business model, but also, one that is not really committed to the open source ecosystem.

To go back to the question that started this out -- should we be worried about having image dependencies pulled out all of a sudden? It sounds like, if it is a large open source project, probably not so soon. That includes the `library` repos.

Any smaller project should be vendored if they are still sticking to Docker Hub.


I don't think that Docker has an interest to be committed to the open source ecosystem anymore.

They're playing the long game. They standardized the container format, completed the reference implementation, released it as open source, and implicitly told everyone they'd done their part: the infra is not under their monopoly, so they're free of the burden.

Due to experience, I'm wary of the infrastructure which I can't build/rebuild. So, anything above Linux distribution + the package repositories needs to be buildable from scratch. As a result, I don't use ready-made app containers unless I have to.

I pull a distribution container and configure it to my liking, and build my own containers. Moreover, if I use the container more than two times, I publish its Dockerfile publicly, so anyone can build it from scratch if they want to. This allows me to get my hands dirty and pivot and rebuild pretty quickly if companies pull things like that.

Containers are great, but not knowing how to work without them, or how to make one yourself, takes a great toll as companies pivot to proprietary, money-first models more and more.

Maybe being open and being closed is just a cycle, and we're moving into the other half of the period?


Did Docker standardise the container format?

I always was under the impression that they had great marketing but were mostly capitalising on tech built by others (mostly Google, to be honest) who failed to properly market it.


Sage insight, definitely looking forward to disrupting them in the near future :)


> You would think that a developer-focused strategy would involve open-source, and that doing things to decrease their influence on the open-source world would reduce their influence

The influence you are talking about is negative: increasing control means increasing arbitrary behaviour, which means increasing your risks as a platform user, which means masses of pessimists fleeing to saner competitors who don't seem willing to take away your toys.


the difference is that the person in control would be attesting to their state of mind, whereas the person not in control would be attesting to their understanding of the relevant circumstances.


One could argue that only the person who is in control could say “for certain”, and as such, that is the implicit differentiator between those two phrases.


I mean yeah okay fine the phrase is then "according to your understanding of the rules set forth by Docker, as of today's edit of the linked PDF (2023-03-15), and in accordance with the current (2023-03-15) configuration of the three images, `alpine`, `node`, and `golang`; are those three images covered by the open source program and will continue to be accessible or will those images cease to be accessible by non-paying members of the general public in thirty (30) days?"

It's just that I'd thought we'd moved past the need for that level of pedantry here, but apparently not.


That's probably a good thing, because that makes clear you missed the point about the Docker Official Images program. Docker's support for open-source organizations has nothing to do with the Official Images program; they are generated by Docker themselves, rather than being generated by an open source project and merely hosted by Docker.


I may sound pedantic, but in all honesty, Docker has been quite hostile over the past few years in terms of monetizing / saving costs, so nothing would surprise me at this point. I would definitely not feel comfortable saying "for certain". Phrased differently, if the person who is in control says "for certain", vs. some random HN user, I would attach a lot more value to the statement made by the person in control.


There's no need to post a combative reply. The parent is just pointing out the dictionary meaning of 'certain'.

You could instead set the bar at a 'high likelihood' instead of 'certain'.


Unless you're hosting the infrastructure yourself, you can't ever be certain. No one can know for sure what Docker will decide to do in the future. The entire company could shut down tomorrow.

But it seems to me that Docker official images are no more at risk of deletion today than they were a week ago.


> Unless you're hosting the infrastructure yourself, you can't ever be certain.

I can't be certain I won't be hit by a car, or a storm won't rip up the fibre to my house, or a raft of other things, either.


You relatively can.


> Can't really plan ahead without some kind of certainty.

If you are relying on images hosted by a third party, you have already committed to relying on something without certainty.


> Can't really plan ahead without some kind of certainty.

You can only plan ahead with uncertainty, because that's the only way that humans interact with time. Nothing is 100%. Even if you paid enterprise rates for the privilege to run a local instance, and ran that on a physical server on your site, and had backup hardware in case the production hardware failed...the stars might be misaligned and you might fail your build. You can only estimate probabilities, and you must therefore include that confidence level in your plans.

Sure, depending on free third-party sources is much more risky than any of that, but no one knows the future (at least for now, and ignoring some unreliable claims of some mystics to the contrary, though I estimate with very high confidence that those claims are false and that this state of affairs is unlikely to change in the next 5 years).


Useful information, bad look for Docker - "Oh, no slash as the namespace separator. Good and easy way to tell, that's how I would've done it!".


I mean, it's not a terrible convention. On the website they have a badge ("docker official image"), but devs aren't usually looking at the website, they're looking at their Dockerfile in vim or whatever. This is a straightforward way to communicate that semantically through namespacing.

Still, shame on docker for the rug-pull.


It's better than nothing, but explicit over implicit. If it were namespaced like PULL docker.org/official/alpine:latest that would be better, imo.


They are also available at docker.io/library/alpine and equivalent, and I'd advise anyone to start using this format as more distros might break the default registry[1].

[1]: https://man.archlinux.org/man/containers-registries.conf.5.e...


One should get in the habit of prefixing them with docker.io/library though, simply because Docker's claim on being the default namespace is unacceptable (and also not true on RHEL-adjacent distros)
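
For anyone unsure what that looks like in practice, a small sketch (the image and tag are just examples):

```bash
docker pull docker.io/library/alpine:3.17   # identical to `docker pull alpine:3.17` on Docker
podman pull docker.io/library/alpine:3.17   # no ambiguity about which registry is meant

# The same applies to base images in a Dockerfile:
#   FROM docker.io/library/alpine:3.17
```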


First of all, want to say, that sounds deeply frustrating.

Secondly, if this is a serious worry, I would recommend creating your own private Docker registry.

https://docs.docker.com/registry/deploying/

Then I would download all current versions of the images you use within your org and push them up to said registry.

It’s not a perfect solution, but you’ll be able to pull the images if they disappear and considering this will take only a few minutes to set up somewhere, could be a life saver.
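
A minimal sketch of that approach, for reference (host names, paths and tags are placeholders; a real deployment would also want TLS and auth):

```bash
# Run the standard registry image locally, persisting its data on a volume
docker run -d --restart=always --name registry \
  -p 5000:5000 -v /srv/registry:/var/lib/registry registry:2

# Copy an image you depend on into it
docker pull golang:1.20
docker tag golang:1.20 localhost:5000/mirror/golang:1.20
docker push localhost:5000/mirror/golang:1.20

# ...and build against the copy you control:
#   FROM localhost:5000/mirror/golang:1.20
```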

As well, I should note that most cloud providers also have a container registry service you can use instead of this. We use the google one to back up vital images to in case Docker Hub were to have issues.

Is this a massive pain in the butt? Yup! But it sure beats failed deploys! Good luck out there!


I would not recommend doing it through Docker, though, especially after this change. We use AWS's ECR, and you can set it to do pull-through caching of public images, so images you've already used will stick around even if Docker blows up, and you don't have to pull the images yourself, you just point everything in your environment to ECR and ~~ECR will pull from docker hub~~ (EDIT: it only supports quay.io, not docker hub) and start building its cache as you use the images.


ECR pull-through caching is only possible for other ECR repos or quay.io. You cannot use it for Docker Hub.


ah, sorry. We only use it for quay, I didn't realize that was out of necessity rather than targeting.


Knowing ECR has pull through caching is really helpful. I'm sure we would have come across that in the course of investigating our response, but this definitely saved us some time!

Edit: Damn, looks like ECR's pull through caching only works for ECR Public and Quay? It's a little unclear, but maybe not a drop in solution for Docker Hub replacement.

https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-...
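
For anyone who does go this route, it looks roughly like the following with the AWS CLI (the account ID, region and quay.io image are placeholders):

```bash
# Create the pull-through cache rule for quay.io
aws ecr create-pull-through-cache-rule \
  --ecr-repository-prefix quay \
  --upstream-registry-url quay.io \
  --region us-east-1

# Pulling through ECR populates the cache as a side effect
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/quay/prometheus/prometheus:v2.42.0
```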


As someone who maintains the registries we use globally at work, +1.

I know people groan at running infrastructure, but the registry software is really well documented and flexible.

If you don't need to 'push', but only pull - configuring them as pull-through caches is nice for availability and reliability -- while also saving you from the nickel-and-diming.

They will get things from a configurable upstream, proxy.remoteurl.

Contrary to what the documentation says, this can work with anything speaking the API. Not just Dockerhub.

edit: My one criticism, it's not good from an HTTPS hardening perspective. It's functional, but audits find non-issues.

You'll want nginx or something in front to ensure good HSTS header coverage for non-actionable requests, for example.
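
For anyone curious what the proxy.remoteurl setup looks like, roughly (ports and paths are arbitrary; registry-1.docker.io is just one possible upstream):

```bash
# Write a minimal pull-through cache config for the stock registry software
cat > config.yml <<'EOF'
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
EOF

docker run -d -p 5000:5000 \
  -v "$PWD/config.yml:/etc/docker/registry/config.yml" \
  -v /srv/registry-cache:/var/lib/registry \
  registry:2

# Clients can then use it via "registry-mirrors": ["http://<host>:5000"] in daemon.json
```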


All good points but while this saves you from the docker images disappearing it does nothing to solve the issue of those images no longer receiving important security updates and bug fixes going forward.


Indeed, buying time at most :)

The situation just presented an opportunity for improvement, I don't intend to suggest it as a cure - but a good step!

Edit: For anyone curious, our upstream is actually the same software somewhere else, utilized by CICD.

That being the origin allows for pushes, with the pull-through caches being read-only by nature


That's good to hear. So I'll just have to spend an hour or so tomorrow night ensuring our private pull-through registry is used on everything prod and the biggest explosion is averted. Images built by the company land in internal registries already, so that's fine as well.

Means, it's mostly a question of, (a) checking for image squatting on the hub after orgs get deleted, which I don't know how to deal with just yet (could I just null-route the docker hub on my registry until evaluated and we just don't get new images?), and (b) rifling through all of our container systems to see where people use what image to figure out which are verified, or paying, or obsoleted, and where they went, or what is going on. That'll be great fun.


The typical Docker registry software when configured as a 'pull through' doesn't allow for pushes, if memory serves. That may be an important consideration while handling the situation

We run them in 'maintenance mode' just to be absolutely sure nothing beyond what the upstream has (or had at one point) is permitted in!

Though, I don't think they'll allow pushes anyway with 'proxy.remoteurl' defined.

I'm not sure I followed your setup properly, but with the private registry defined as your 'proxy.remoteurl', you shouldn't have to worry about the Hub in particular - unless it's looking there, or people are pushing bad things into it


> I'm not sure I followed your setup properly, but with the private registry defined as your 'proxy.remoteurl', you shouldn't have to worry about the Hub in particular - unless it's looking there, or people are pushing bad things into it

That is exactly the thing I am worried about, as we have a pull-through mirror for the docker hub.

What happens if some goofus container from that chaotic team pulls in knownOSS/component, but knownOSS got deleted and - after 30 days of available recon by _all_ malicious teams on the planet - got squatted instantly afterwards with rather vile malware? Spend some pennies to make a dollar by getting into a lot of systems.

Obviously, you can throw a million shoulds at me, shouldn't do that, should rename + vendor and such (though how would you validate the image you mirror?), but that's a messy thing to deal with and I am wondering about a centralized way to block it without needing anyone but the registry/mirror admins.


Ah I see!

I misunderstood, didn't realize that it's pointing to the Hub. I assumed the more strict sense of 'private' :)

The Docker-provided registry software is limited in terms of "don't go here". You get all of upstream, essentially

Quay or Harbor are more configurable in that regard, but I'm less familiar.

We're privileged, being already very-offline and signature heavy... and that's someone else. I just run the systems/services!


The problem here is that the company I work at has started building these golden images, full of cruft and then no team gets allocated to maintain them.


> Secondly, if this is a serious worry. I would recommend creating your own private docker registry.

I've personally been using Sonatype Nexus for a few years with no issues - both for caching external images, as well as hosting my own custom ones. It has pretty good permissions management and cleanup policies.

Here's more info: https://blog.kronis.dev/tutorials/moving-from-gitlab-registr...

Edit: here's a link to the site of the product directly as well, in case anyone is interested in the self-hosted option: https://www.sonatype.com/products/nexus-repository

It's probably not for everyone, but only having to pay for the VPS (or host things on my homelab) feels both simpler and more cost effective in my case. I've also used it at work and there were very few issues across the years with it, mostly due to underestimating how much storage would be needed (e.g. going with 40 GB of storage for approx. 10 apps, each of which were in active development).


The regsync utility is very useful for this purpose. It’s part of this excellent regclient project:

https://github.com/regclient/regclient
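
A rough sketch of what a regsync mirror config can look like (registry names, credentials and tags are placeholders; check the regclient docs for the exact schema):

```bash
cat > regsync.yml <<'EOF'
version: 1
creds:
  - registry: registry.example.com
    user: mirror-bot
    pass: "<token>"
sync:
  - source: docker.io/library/alpine
    target: registry.example.com/mirror/alpine
    type: repository
    tags:
      allow:
        - "latest"
        - "3.17"
        - "3.18"
EOF

# One-off sync; regsync can also run as a long-lived server on a schedule
regsync once -c regsync.yml
```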


> If those images disappear, we lose the ability to release and that's not acceptable.

This shines light on why it is so risky (from both availability and security perspectives) to be dependent on any third party for the build pipeline of a product.

I have always insisted that all dependencies must be pulled from a local source even if the ultimate origin is upstream. I am continuously surprised how many groups simply rely on some third party service (or a dozen of them) to be always and perpetually available or their product build goes boom.


Likewise. I've always insisted on building from in-house copies of external dependencies for precisely this kind of scenario. It astonishes me the number of people who didn't get why. Having things like docker rate-limiting/shutdowns, regular supply chain attacks, etc has been helping though.

Slightly related: actually knowing for sure that you've got a handle on all of the external dependencies is sometimes harder than it should be. Building in an environment with no outbound network access turns up all sorts of terrible things - far more often than it should. The kind that worry me are supposedly self-contained packages that internally do a bunch of "curl | sudo bash" type processing in their pre/post-install scripts. Those are good to know about before it is too late.


> Building in an environment with no outbound network access turns up all sorts of terrible things

Yes, highly recommended to build on such a system, it'll shake out the roaches that lie hidden.

In a small startup environment, the very least to do is at least keep a local repository of all external dependencies and build off that, so that if a third party goes offline or deletes what you needed you're still good.

For larger enterprises with more resources, best is to build everything from source code kept in local repositories and do those builds, as you say, in machines with no network connectivity. That way you are guaranteed that every bit of code in your product can be (re)built from source even far in the future.


Be sure to archive your development tools as well, just in case that rug gets pulled. You don't want to be in the position that you need v3.1415927 of FooWare X++ because version 4 dropped support for BazQuux™, only to find that it's no longer downloadable at any price.


I do not know if Nix will be the answer, but I really hope it or a successor drags us to fully explicit and reproducible builds.


for reproducing a build you need, at the least, the source and the tools to build it, which might not be available either


Yes, but Nix is essentially about getting things built, so those build tools are part of the recipe to make something happen.

I'm still learning Nix myself, but one small example: a small, Haskell-based utility I've written depends on specific versions of one library, due to API changes. That version gets lumped in according to some GHC versions. The whole situation was uncomfortable, in that code I had left working stopped building some time later, when I came back and ran it with whatever seemed most current.

Defining a short nix flake solved all of that. That first compile was a slog, since it fetched and built the appropriate GHC and libraries, including whatever transitive dependencies those needed. Once done though, those are cached, and "nix build" just works.


We can't go NIH for everything. If we do that we're back to baremetal in our own datacenters and that's expensive and (comparatively) low velocity. We have to pick and choose our dependencies and take the trade off of risk for velocity.

This is the tradeoff we made with the move to cloud. We run our workloads on AWS, GCP or Azure, use DataDog or New Relic for monitoring, use Github or GitLab for repos and pipelines, and so forth. Each speeds us up but is a risk. We hope they are relatively low risks and we work to ameliorate those risks as we can.

An organization like Docker should have been low risk. Clearly, it's not. So now it's a strong candidate for replacement with a local solution rather than a vendor to rely on.


It's less NIH and more "cache your dependencies." Details will vary greatly depending on what your tech stack looks like; if you're lucky you can just inline a cache. I know Artifactory is a relatively general commercial solution although I can't speak personally about it.

If you can't easily use an existing caching solution, then the only NIH you need to do is copying files that your build system downloads. I know many build systems are "just a bunch of scripts" so those would probably be pretty amenable to this, I don't know if more opaque systems exist that wouldn't give you any access like that. If so, I suppose you could try to just copy the disk the build system writes everything to, but then you're getting into pretty hacky stuff and that's not ideal. Copying the files doesn't give you the nice UX of a cache, but it does mean that in the worst case scenario you at least have all the dependencies you've used in recent builds, so you'll be able to keep building your things.


> I don't know if more opaque systems exist that wouldn't give you any access like that

As long as there is a "server reimplementation", i.e. private registries available, one can always hack together a solution out of a self-signed CA, DNS and routing to replace "the server" with a local registry.


"Free service which requires $$ to maintain" and "low risk" are not compatible.

We moved to cloud as well, and we use AWS ECR for caching. We have a script for "docker login to ECR" and a list of images to auto-mirror periodically. There is a bit of friction when adding new, never-seen-before image, but in general this does not slow developers much. And we never hit any rate-limits, too!

We pay for those ECR accesses, so I am pretty confident they are not going to go away. Unlike free docker images.


> We can't go NIH for everything. If we do that we're back to baremetal in our own datacenters[...]

It's a bit of a leap from keeping copies of dependencies to building your own datacenter. Even the smallest startup can easily do the former.

> This is the tradeoff we made with the move to cloud.

To clarify, when I say keep local copies I meant copies which are under local control (i.e. control of your organization). They may well still physically be in AWS somewhere. The key is that they can't be modified/deleted by some third party who doesn't report to your organization.

Yes, this assumes AWS is too big to fail, but for the typical startup whose entire existence is already dependent on their AWS account being available, this would not increase risk beyond what it already is. Whereas each additional hard dependency on third-party repos does increase risk.


> that's expensive and (comparatively) low velocity

The problem with this approach begins when many people your build depends upon start to share it.


You can prototype without NIH and later go NIH when you have stuff to lose.


I run a local Ubuntu mirror for the work systems I manage, for this reason.


You can vendor images. Never have your product depend on something that is on the internet. Spin up Harbor locally and put it in the middle to cache, at the very least.


Imagine if everyone actually did this. Then we would have a myriad of base images hiding even more malware than we do currently.

Not to mention vertically integrating the entire Docker layer set defeats the whole point of using Docker in the first place.


That's.... I don't know how you even arrived at the idea that that is what would happen? Are you imagining some kludged-together perl script to hackily save the tarballs, written by someone who is then immediately let go?

What they're suggesting is basically setting up a cache for it locally in-between them and the "main repo" and ensuring the cache doesn't delete after x days and/or keep backups of the images they depend on.

If the package disappears, or the main repo falls over (cough github, cough), your devs, CI & prod aren't sat twiddling thumbs unable to work...

and if the package is nuked off the planet? You've got some time then to find an alternate / see where they move to.


No, you're wrong. Everyone who wants to stay in business and make money actually does this. It has been my experience in all big companies; it's a business continuity problem /not to do it/. You can and should run security scanning on the vendored images.


What are you talking about? Malware and spyware is just as likely (if not very much *more* likely - depending on the definition of malware or spyware*) to be in corporate sponsored software than it is in foss software, and that idea extends to software distribution.

I would expect the security and quality of images in a decentralized system to be far superior to any centralized system spun up by some for profit entity.

* malware and spyware could be defined here as software that allows remote keylogging, camera activation, installation of any executables, etc - i.e. root access - which is precisely what most corporate entities make software to do (e.g. "security solutions" that you have to install on your work computers). This is also most web services which are 90% tracking with an occasional desired application or feature these days.


I've never worked somewhere that didn't have an internal Artifactory with copies of everything.

Not doing that is unusual, and actually less secure. Do you think it's sane or secure for all of your builds to depend on downloading packages from the public internet?


They're internal mirrors of public images, if there's something in your infrastructure installing malware on them you've got bigger problems


Many of the responses here are talking about how to vendor/cache images instead of depending on an online registry, but remember that you also need access to a supply chain for these images. Base images will continue to be patched/updated, and you need those to keep your own images up to date. Unless the suggestion is to build all images, from the bottom up, from scratch.


It's a stop-gap measure. There are dozens of companies chomping at the bit to replace Docker as THE docker registry: I'd bet someone at Github is very busy at this very moment.


The article talked about using the Github Container Registry, which was launched in 2020.


Those very busy people at Github may well be in marketing


> Base images will continue to be patched/updated, and you need those to keep your own images up to date. Unless the suggestion is to build all images, from the bottom up, from scratch.

If docker pushes people to that, hopefully more reproducible solutions like Nix and its UX-friendly "porcelains" such as https://devenv.sh/ gain market share.


Typically when you "cache" something, you're gonna expire it at some point... no? If the image is patched, it eventually gets refreshed in the mirror. If the image disappears, at least we still have it until we figure out where the heck it went.


>Unless the suggestion is to build all images, from the bottom up, from scratch.

this doesn't really seem like an unreasonable suggestion.

just change your build process to pull from the source repository (almost always linked from the docker hub page) instead and eliminate one level of dependency from the chain. in an ideal world, docker hub would have been a more stable buffer in between the original source of the image and you, but as long as they are proving to be more of a liability than a source of stability, just cut them out.


> If those images disappear, we lose the ability to release and that's not acceptable.

left-pad moment once again.

> I mean, 30 days isn't enough time to find alternatives and migrate.

Maybe take control of mission critical dependencies and self-host?


> Maybe take control of mission critical dependencies and self-host?

Last few years prove that this option is a no-go - they just don't do such things! Independence? Self-sufficiency? Security? Local, fast access? Obviousness? No payment required? Avoid at all costs!


> Last few years prove that this option is a no-go - they just don't do such things!

Who are "they"?


Busy DevOps crews?


taking responsibility for our supply chain and things we depend on that use mostly for free? absolutely preposterous, the business demands more features.


This whole thing is so weird. Why do so many organizations depend on the internet to function?

It wasn't too long ago that it was standard practice to vendor your dependencies; that is, dump your dependencies into a vendor/ directory and keep that directory updated and backed up.

But now, you all think it's 100% acceptable to just throw your hands up if github is down, or a maven repository is down, or docker hub makes a policy change?

Every year that goes by it becomes clear that we are actually regressing as a profession.


There are some places that still work the old way, such as where I work - and we're finding we're increasingly out-of-touch with younger developers who grew up in a connected world. We had a recent college grad engineer (developer) who didn't work out as a hire. Some examples of the disconnect:

Try as I might, I couldn't get him to understand the difference between "git" the tool and "Github" the website. He kept making me nervous because he'd slip up and use the two terms interchangeably. (We have sensitive data that shouldn't be uploaded to the cloud.)

He didn't seem to completely understand files/folders and the desktop metaphor. He didn't seem to understand the difference between personal devices and work devices.

The last straw for our boss to let him go was he turned in a project that used a free web service on the cloud to upload data and get back the rows sorted. (Refer back to what I said above about: sensitive data.)

It didn't appear he was being obstinate, it was a tech-cultural difference. "Radical semantic disconnect" as I've seen the term used in science fiction.


Oh dear, sounds like a tricky one, but I'm not sure that the "cultural" difference is what really mattered: the question is, did he have a willingness to understand where his worldview was falling short; to see that his limited experience of the world and college education was only a tiny subset of human experience and technical practices; to actively engage with those differences and continue learning? Unfortunately over the years I've also had some bad experiences with recent grads and the biggest problems usually boiled down to arrogance rather than ignorance... e.g. I can sort-of understand somewhere along the line that someone could have mistakenly learned that this new thing called "unicode" was invented for unix machines (after all, the names sort of sound similar) and that therefore we have absolutely no business trying to use it on a Windows system. But to then absolutely insist until you are blue in the face that you must be right about this because you learned it in college and everyone else in the team is just wrong, no matter what evidence is produced... well that is a difficult situation that I have personally encountered.


That is not a younger/older thing. That is just someone who does not understand the tech they are working with. I had one dude about 10 years ago roll into our org with the source code from another company. He saw nothing wrong with it. Our lawyers disagreed. I never let it past the code review stage.


> But now, you all think it's 100% acceptable to just

Who is this "all" you're talking to? Seems like most of the responses are suggesting vendoring too.


> Who is this "all" you're talking to?

This is a thread under the most updooted comment. Generally it is safe to assume that the top comment more or less reflects general sentiment


"Reflects general sentiment" seems like quite a stretch. It doesn't take many upvotes to float to the top.


Or you could, you know, host a Docker registry and reupload those images to something you control. Worst case scenario, in 30 days, nothing is gone from Docker and you can just spin it down.

Your job as an SRE is not to look at things and go "oh well, nothing we can do lol".


I didn't read it as that; they were stating the realities: assumptions were made, those assumptions are now invalid, they are working on alternatives, 30 days is a short deadline for something like this, and docker as an organization is behaving poorly.

all of that seems pretty true, and frankly no one should support a company that does something like this. I get they need to figure out how to make money, but time has shown the worst way to do that is to screw over customers or potential customers.

I, like the poster, will never trust Docker, and will never use their tooling or formats. Podman all the way.


Earlier events already had us slowly switching out Docker for Podman, and the tooling is more similar than I had expected. Half of the work is ensuring the images are explicitly prefixed with docker.io/

And this week it turns out that makes the now problematic spots a lot more greppable
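
Something along these lines works as a rough first pass (a heuristic only, to be adjusted to whatever file layout you actually have):

```bash
# Find image references that don't spell out a registry host
grep -RInE '^\s*(FROM|image:)\s+' \
  --include='Dockerfile*' --include='*.yml' --include='*.yaml' . \
  | grep -vE 'docker\.io/|quay\.io/|ghcr\.io/|public\.ecr\.aws/|\.amazonaws\.com/'
```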


Yes, that involves ripping out Docker Hub everywhere. It's a significant chunk of work, not something easily fit into 30 days on a team that is already strapped for resources with more work than we can do.


Setting up harbor as a docker proxy-cache is actually quite simple



I'm not familiar with how Docker works, so forgive the ignorance. I thought the point of docker images was portability? Is it not just taking the references and pointing to a new instance under your control?


Most production workloads do not use docker directly, but rather use it as sorts of "installation format" that other services schedule (spin up, connect, spin down, upgrade). One of typical defaults is to always try and pull new image even if requested version is available in node-local cache. On one hand it prevents issues where services would fail to start on certain nodes in the event of repository downtime. On the other hand it blocks service startup altogether. With such a set up availability of registry is mission critical for continuous operation.

Some people think it is a perfectly reasonable idea to set up defaults to always pull, point to latest version and not have local cache/mirror. Judging from the number of upvotes on OP, depending on third party remote without any SLA to be always available for production workloads seems to be the default.


I'm not too familiar with docker myself, but gitlab's selfhosted omnibus includes a container registry that Just Works™ for our small team.


That’s unplanned work. There’s other work needing to be done as well.


And a sudden fire is also unplanned work, but that's still your work. If this is such a threat, then maybe shift priorities around.


Are we not allowed to complain about unnecessary unplanned work being foisted on us with 30 days notice?

That seems like an entirely relevant complaint for this forum but from your first reply, you’re acting like somehow it’s the greatest offense in the world that someone pointed this out.


It’s not like using cloud services without suitable contractual agreements isn’t a known risk?


Sure. It’s a risk. But that doesn’t somehow make this work expected and planned, nor invalidate the original comment.

It could have happened at any time. But it’s also been running for a decade now so there’s an expectation that things will continue rather than have the rug pulled with 30 days notice.


Come on, 30 days notice is a walk in the park. Additionally, what OP is complaining about amounts to changing a few URLs and eventually spinning up a new server. It's quite literally a one- or two-day job, unless you're at a company the size of Amazon (in which case, luckily for you, you're not the only SRE, so it's still just a few days).

> The best I can come up with, at the moment, is waiting for each organization to make some sort of announcement with one of "We've paid, don't worry", "We're migrating, here's where", or "We've applied to the open source program". And if organizations don't do that... I mean, 30 days isn't enough time to find alternatives and migrate.

This is the original comment. The best they can come up with is... do nothing and wait to see if the smoke turns into a fire ? I've seen better uses of time. 30 days is enough time to find an alternative, migrate _and_ get regular coffee breaks too.


> Come on, 30 days notice is a walk in the park

Sure, maybe in a small business or startup, and even then I'd contend it's not quite as easy as all that.

When you're dealing with anything larger, say involving multiple teams, organisations, and priorities, 30 days is an insanely short window in which to figure out what your actual route forward is (and if you're provisioning something new, making sure you're allowed to and have any relevant sign-offs etc.)

This particular situation with Docker doesn't affect us, but if it did this would have some serious knock on implications. The teams in my org are already busy with things that need to GA by certain dates or there will be financial implications. It's not "tire fire" but in most cases it's solid "don't waste time" territory. There's always flex in the schedule, but the closer to a GA date you get the more rigid the schedule has to be.


If you're unable to take on a task that has a 30 day deadline in your org, regardless of size, you're experiencing a good amount of bloat.


You're absolutely right! But bloat is also incredibly common, especially when "task" in this case might describe "multi-team project." Can it get done in 30 days? Of course! Might you already have dozens of high-priority projects to deliver in the next 30 days with stakeholders screaming at you every day for updates? Absolutely!


You should read The Phoenix Project to understand that "have dozens of high-priority projects to deliver in the next 30 days" is a consequence of poor management, not a given even for large organizations.


I've read TPP. What I'm saying is, most companies are Parts Unlimited before their awakening and a lot of us are Brent. I think the reason that book resonated so well is that the dysfunction described there is a lived reality for most IT workers.


Fair, but it's also fair to call that out, I think!

And I do try to do less Brent things, I think there was a degree of amnesty given to Brent that we can improve on. We can all be more Bill, even if we’re still SMEs.

Something something managing up, or getting the kinds of Bill jobs to make the changes.


> Sure, maybe in a small business or startup, and even then I'd content not quite as easy as all that.

Not enough time even in this case; for some (apathetic) people, 30 days isn't actually 30 days.

Every now and then I'll come across the kind of small org where the one person (euphemistically and sarcastically) referred to as SRE only checks email once a month because they've "evolved beyond primitive tech" or wtf ever, then gets mad about it like it was a conspiracy rather than self-sabotage.

90 days minimum for overt assaults on stuff that some ppl may require to keep their doors open. This kind of shit is enough to possibly mess with people's livelihoods in edge cases. Personally I never used Docker. I'm kinda paranoid so "free" stuff not backed by some kind of legal guarantee like open source licensing always seems sorta shady.


At this size you should have a local registry that acts as a transparent cache. If you don't, then get one right now. What happens if Docker's servers are down for whatever reason? Does your whole process break?


Sorry, didn't mean to imply that it is actually affecting us or even a concern. It isn't. I was just calling out that 30 days isn't just "simple" as the parent poster was asserting.


30 days is nowhere near enough time for people with real jobs that have other things to do rather than drop everything to do this. Once again, completely needlessly.

You’re making a mountain out of an entirely valid complaint.

Quoting your own profile, stay mad.


“Tell me you’ve never worked an enterprise tech job without telling me you’ve never worked an enterprise tech job.”

My next 30 days are already accounted for, and will already include disruptions that actually come from the area of work.


Heh. "But shouldn't an enterprise have all of these things figured out and mirrored and also pay money to Docker Inc"?

Should? Certainly. But guess what kind of emergencies it takes to get these things finally prioritized and what kinda mad scramble ensues from there to kinda hold it together.


Looks like some of the planned work's getting pushed back a little bit. That's the company paying the price for saving money earlier in exchange for taking on more risk. Decent chance the move was still net-beneficial for them.

Either way, it's the company reaping the benefits and paying the costs, not me.

It would have been nice and more confidence-generating for Docker to make this a 90-day notice rather than 30, but I'm not going to get upset that some of the work my company wants to do will get done slightly later for reasons having to do with their own penny-pinching and some 3rd party's somewhat-rude notice period for termination of service to entities that aren't even us and who weren't paying them. Job's a job. You want me to fix this and delay some other thing, or let it break and do some other thing instead? Fix it? Cool, no problem.


So, you're 100% booked with no room for anything? Congratulations on your management for understaffing your team and expecting you to do 120% if something happens, they're saving up quite a bit of money.

You might want to start respecting your free time though, because they clearly don't give a shit about you.


Did you stop reading that sentence halfway through? There's room for disruptions in that schedule. But that room isn't infinite and this is going to add a lot more disruption on top of the existing expected amount.


They didn’t ask for a perfectly reasonable explanation.


What if you already have important planned and unplanned urgent work occupying all your SREs for the month? On a team or org that's already running thin? Surely you've been there.


I have. And it was also my job to say to management "hey, there's a very preoccupying fire right there, and it will delay this less important thing. If you're unhappy about it, send me an email explicitly telling me drop the very preoccupying fire."

"Everybody has a plan until you get punched in the mouth" also applies to tech.


I think this thread was started by the manager that has to hear that push back, hence the headaches.


Getting punched in the mouth is pretty far from a walk in the park.


In this case it is entirely planned work: Anyone depending on docker.io chose to make their processes dependent on online endpoints with whose operators you have no business relationship. An unpaid third-party service going offline should be far from unexpected and if you rely on it you better be ready to cope without notice.

This is like complaining that you have to put out a fire because, rather than fixing the sparking cables, you have been relying on your neighbor to put them out before they become noticeable, and he only gave you a short notice that he'd be going on vacation.


> Anyone depending on docker.io chose to make their processes dependent on online endpoints with whose operators you have no business relationship.

This does not somehow make the work “planned”. That has a specific definition and this ain’t it.

Some people may have called it out as a risk when it was implemented. But that still doesn’t mean it’s planned.

Someone may have included an explicit report on how to deal with it at that time. That still doesn’t make it planned.

Also, just because it’s known to be a risk and may have a chance to happen in the future does not make it expected either. Nor planned.


If someone were to set my server room on fire, I'd be equally annoyed about them.


Yeah, but with a sudden fire there's a good chance there's at least a rudimentary disaster recovery plan lying around to kick things off. How many ppl have a DRP for "docker bends open source over and goes smash?"


You’re missing the mark. It’s about risk and expectation management, and one of the risks just blew up in an unexpected way.


Yes, that’s what unplanned work means.


I'm sure docker will happily hold off on this work until you can fit it into your OKR planning next quarter. /s


This exactly. We have pipelines designed specifically for this reason. We pull, patch, and perform minor edits to images we use. We then version lock all-the-things for consistency.

Not saying this is good news, but in the Enterprise, you have to plan for shit like this.


Imagine you shipped software that included references to docker hub images. That software will no longer work if any of the referenced images are deleted from docker hub. This will be the case with any helm charts that reference images that are deleted from docker hub.

Some of those charts will not have variables that let you override the docker images and tags, so some of those will not be usable without creating a new release.

This is one of the primary reasons to vendor your third party docker images into a docker registry that you control.
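
Where a chart does expose the usual values (the common image.repository / image.tag convention, which, as noted above, not every chart follows), re-pointing it at your own copy looks something like this (names and tags are placeholders):

```bash
helm upgrade --install myapp ./charts/myapp \
  --set image.repository=registry.example.com/mirror/myapp \
  --set image.tag=1.2.3
```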


"Don't release software that can pull code from random services on the internet then execute it without making that configurable" has been standard since the internet was available, just about.

Vendor your helm charts if they are production critical. Vendor the docker images if they are production critical. Vendor the libraries if they are critical.

As an added bonus, you even help making a saner internet where you don't pull left-pad three billion times a month.


Yes vendor them all too.


You should be escrowing any Docker images you depend on, I'd have thought.


Good thing is that setting up an image registry in AWS is so simple! (Ha-ha, only serious.)


It's strikingly trivial to self-host docker images in AWS ECR and to run your own CICD platform with safe deployments using EC2, the AWS SDK and the Docker SDK. A super basic process that monitors one GitHub repo is ~150 LOC.

EDIT: I just confirmed that GPT-4 can write this program. Have fun!


Thank you in advance if you can share this prompt.


Literally just ask it to do it. I've been asking ChatGPT just now to write me a bunch of bash scripts I've procrastinated doing. Holy crap that thing is pretty awesome!


ChatGPT has learned no less than 100000 snippets about `rm -r *`, and how to trick people into accidentally using it.


`rm -r` is a great starter. Real connoisseurs go for `wipe -fcrsQ1 /dev`.



It is indeed good.


It's great. We migrated our images to it months ago (not because of this, bandwidth issues mainly, we also vendor our base images on it) and it has given us exactly 0 problems.


This is what I don't understand about Docker's policy switch. Aren't all the companies that would potentially pay them just going to switch to one of the main CSPs Container Registry service?


I assume that is what they want. Without the juicy margins on running the containers, hosting images is never going to be profitable.


What do you mean? Running containers with docker is free as in beer, as well as docker itself being free as in speech. Do you mean docker desktop? Any reasonably proficient developer can install docker as cli only and doesn't even need the lousy electron app...


I mean AWS/GCP/Azure provide extremely good and cheap Docker-compatible image registries; and they also happen to have high-margin products allowing you to run containers in their cloud.

There is no way Docker can compete if they only offer an image registry.


It’s baffling. Moving registries is extremely easy.


There is also ECR Public Gallery, which mirrors many public images from DockerHub. https://gallery.ecr.aws


or more specifically, https://gallery.ecr.aws/docker

for any "official" image you're pulling from docker hub, just prefix it with "public.ecr.aws/docker/library/" to pull from ecr instead


> escrowing

Are you sure this is what you mean? Escrow is a type of contractual arrangement, one type of which is agreeing with a commercial partner that you get a copy of their source code if they go broke.

I feel like you mean vendoring.


Maybe I’m old fashioned, but we used to call this mirroring.


Mirroring is the best way. Do it before a service is shuttered, which we used to call "closed".


[flagged]


"Vendoring" is 100% a word. It may not be in the OED or MW, but those things are descriptive, not prescriptive. Words become words when they are used as words, and "vendoring" is used as such. See:

https://en.wiktionary.org/wiki/vendoring

https://www.google.com/search?q=vendoring


"Vendoring" is a term of art that is used to describe incorporating third party dependencies into your (source code) repository. While not a perfect fit it seems close enough - closer than escrowing where typically a third party that has no immediate use for the artifict is the one holding it.


Things become a word when there is a critical mass of people that use the word. In this case "vendoring" initially referred to placing a copy of the source code of a third-party library into a /vendor/ subdirectory, thus "vendoring" it. It has since been extended to similar use cases and has become part of the software developer jargon.


Maybe this is just me being a physicist, but I would have trouble applying the notion of escrow to anything that does not obey a law of conservation...

“Put that idea in escrow”—I assume I have to write it down first? “Put our incrementing page view count in escrow”—uh...? “Put my time in escrow”—how on earth am I going to get it back?

Similarly “escrowing your software dependencies”, hard to interpret if I didn't know the context. Whereas “vendoring” is similarly opaque but immediately recognizable as jargon and has made it into tools (`go mod vendor` and `deno vendor` for example).


Our organization currently caches each and every external dependency we use: Go, Python, npm and .NET packages, Docker images, Linux deb packages, so everything is contained inside our perimeter. We did that after our self-hosted GitLab runners were one day throttled and then rate-limited by some package repository and all CI pipelines halted.


> Which are members of the open source program and which aren't.

You can tell which are members of the open source program: if you go to their Docker Hub page, you'll see a "SPONSORED OSS" banner.

Here is an example:

https://hub.docker.com/r/rclone/rclone


Any organization that has the means to pay, should pay for another service that is not openly hostile to users...


Sounds like you could save yourself some time and budget by offering to pay for those images you are using?


That's a fair point, and when someone with a working brain mentions the fallout throughout the Internet that would result, I expect Docker Inc. will reverse course and embark on a PR campaign pretending it was all a mere tawdry joke.


Use JFrog Artifactory. If you're OK with self-hosting, there's a free JFrog Container Registry edition.


You probably wanna move to AWS Public ECR Gallery. They have a notion of official images.

AWS is in a better position to offer long term coverage.


How hard is it to spin your own registry and clone those images there? I’m not heavily invested in my company’s infrastructure but as far as I can tell we have our own docker and npm registries


Just to be clear, the official images are definitely not at risk, and I say that as a Docker captain.

Official images are hugely important to Docker, now and going forward.


Time for you to locally clone the dockerfiles you're reliant on, build up your own in house repository, and then do what has been done since time immemorial.

Mirror the important shit. No excuses, just do. Yes, it's work. I guarantee though, you'll be less exposed to externally created drama.

Making sure your org stays up to date though, that's on you.


What's so hard about making your team build and host the images you rely on?

Install Gitlab, clone these projects onto it, it will usually detect and build the container images. You may have to manually fire off builds for older tags/branches but it will work


> I mean, 30 days isn't enough time to find alternatives and migrate.

Write a script to iterate the images and push them to your own registry. This will buy you time in the event anything does dissapear.
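
A sketch of such a script, assuming a plain text list of image references and a registry you control (registry.example.com is a placeholder):

```bash
#!/usr/bin/env bash
# Mirror every image listed in images.txt into a registry under our control
set -euo pipefail

MIRROR=registry.example.com/mirror    # your own registry

while read -r image; do
  [ -z "$image" ] && continue
  docker pull "$image"
  target="$MIRROR/${image#docker.io/}"   # docker.io/library/alpine:3.17 -> .../mirror/library/alpine:3.17
  docker tag "$image" "$target"
  docker push "$target"
done < images.txt
```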


That's why it's better to have the NetBSD + pkgsrc combo for servers.


You misspelled Nixpkgs ;-)

I'm kidding, of course, but IIRC pkgsrc (and its kin, such as APT) has a number of limitations, for example a very limited ability to have multiple versions of the same package installed, making it a less than optimal replacement.

(I believe a lot of people depend on ability to spin up a new version while the old is running, then do the cutover and shut down the old one after it's not is use.)


Capabilities aside, if you're reproducible and source-based, you're gonna survive binary artifact repository outages a lot better than if you're not.

If there were a comparable culling of the Nixpkgs binary cache, pipelines relying on Nix for their packages would be affected in a much less invasive way: they'd see Nix silently fall back to upstream sources, and reproducibly build from source, wherever the cached binary artifacts became unavailable.


Also it has unprivileged builds and installations, packages rooted anywhere in the filesystem, and completely self-contained dependencies.


> APT ... has a number of limitations

...and crucial features, like having security fixes backported.


What you're alluding to is an Ubuntu decision about what goes into their repos, and has nothing to do with apt itself.


If anything it's usually a Debian decision. But it's more than that.


And Guix, too. Especially Guix.


pkgsrc allows installation of multiple versions of the same package ;)


Concurrently, with both versions usable? I was not aware of this. Could you please give me a brief primer on how it works? Thanks!


Good time/opportunity to get your team/company to invest in a registry+proxy to host all images you depend on.


The very first thing you should have is a mirroring Docker Hub proxy. I'm surprised an SRE manager doesn't have one already. Why not?


And this is why, in many organizations, I insist on mirroring every package and image that we depend on.


> If those images disappear, we lose the ability to release and that's not acceptable

It’s your responsibility to ensure your own business continuity. You should review how your build pipeline depends on resources outside of your org perimeter, and deploy a private registry under your own control.

btw, you could also contribute some mirroring bandwidth to the community. You must’ve heard that the cloud is just someone else’s computer.


This is why our team vendors the images we depend on into AWS Elastic Container Registry.


Why not use your own registry with a pull-through cache?
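For what it's worth, the open source Distribution registry (registry:2) can run as a pull-through cache of Docker Hub with a tiny config file, roughly like this (a sketch; the storage path and port are just examples):

    # config.yml: run the registry as a pull-through cache of Docker Hub
    version: 0.1
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
    proxy:
      remoteurl: https://registry-1.docker.io

Point your daemons at it as a registry mirror and it will fetch from Docker Hub on a miss and serve from local storage afterwards.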


if you haven't mirrored the docker images your application needs to a private registry, then you are doing it wrong.


How viable is it to fork Docker Hub?


That's the bad thing about other people's computers: it could happen to anybody. It is harder to use your own machine, but better in the long term.


If your business is depending on these open source projects to exist, shouldn't you be paying them so they can then pay for Docker?


Not every open source project wants to deal with donations / payments that could force incorporation, tax filings, bank accounts, credit/debit cards, and other paperwork. I certainly wouldn't want to deal with that for a side project.


If you are part of an organization, you already need to deal with most of those?


You might need to deal with this on the receiving end.


No, businesses love freeloading.


Docker the tool has been a massive benefit to software development, every now and then I have a moan about the hassle of getting something bootstrapped to run on Docker, but it's still worlds better than the old ways of managing dependencies and making sure everyone on a project is aligned on what versions of things are installed.

Unfortunately Docker the company appears to be dying; this is the latest in a long line of decisions that are clearly being made because they can't work out how to build a business around what is, at its core, a nice UI for Linux containers. My hope is that before the inevitable shuttering of Docker Inc another organisation (ideally a coop of some variety, but that's probably wishful thinking) pops up to take over the bits that matter, and then hopefully we can all stop trying to keep up with the latest way in which our workflows have been broken to try and make a few dollars.


Docker should have been a neat tool made by one enthusiast, just like curl is.

Instead it has a multi-million dollar company behind it, and VC's who demand profits from a thing that shouldn't have ever had a business plan.


> Docker should have been a neat tool made by one enthusiast, just like curl is.

I have nothing but mad respect for Daniel Stenberg. 25 years of development of great software, for which he has been threatened[1] and has had ridiculous issues obtaining a US travel visa[2].

[1] https://daniel.haxx.se/blog/2021/02/19/i-will-slaughter-you/

[1] https://news.ycombinator.com/item?id=26192025

[2] https://daniel.haxx.se/blog/2020/11/09/a-us-visa-in-937-days...


There are lots of high-functioning but harmless crazy people out there. I used to work a government job, and I found one of the most common tells was exactly what this "slaughter" person did. They love to list dozens of agencies to you for no reason. They have no authority, so they hope they can borrow it from your fear of a random place. I cannot tell you how many emails/calls I have had left for me that fit this pattern, dozens at least.

>I have talked to now: FBI FBI Regional, VA, VA OIG, FCC, SEC, NSA, DOH, GSA, DOI, CIA, CFPB, HUD, MS, Convercent

Bonus tell: they also love to say they are a doctor or have a PhD in something, often claiming PhDs in multiple subjects.


I remember someone abusing a ticketing system I had to work with for reporting technical issues with a vast computer network: they raised a ticket with an attachment from some absolute nutcase in multicoloured, heavily underlined .RTF format, much as you described, with "hate mail" in the subject line. The ticket was closed as "not hate mail". Still makes me chuckle every time I think about it.


> [1] https://daniel.haxx.se/blog/2021/02/19/i-will-slaughter-you/

Wow, that's clearly someone with serious mental issues :( I hope he can find some help for his condition.


If you have your name all over the place, I guess it's bound to happen eventually. Curl is used by millions of people, which makes Daniel Stenberg kind of a celebrity. With so many users, there have to be some crazies like the "I will slaughter you" guy.

It must be a common occurrence among famous software people; I wonder how they deal with it. Do they actively hide their real identity, for example by using a proxy for licensing? Do they just ignore such madness? Is it a burden or, on the contrary, do they enjoy their fame?


Maybe it's a good thing that the guy affected hasn't been awarded the defense contract as a result.


People suffering from psychosis can create "facts" supporting their ideas and believe in them. Usually it's stuff like "someone is following me", "someone wants to hurt me". Psychosis is the entry point to schizophrenia, which is, more or less, an illness in which the brain makes stuff up and the ill person cannot differentiate facts from hallucinations.

Possibly there was no defense contract at all.


It's not just people suffering from psychosis who do that.

"29% believe aliens exist and 21% believe a UFO crashed at Roswell in 1947. [...] 5% of respondents believe that Paul McCartney died and was secretly replaced in the Beatles in 1966, and just 4% believe shape-shifting reptilian people control our world by taking on human form and gaining power. 7% of voters think the moon landing was fake." -- https://www.publicpolicypolling.com/wp-content/uploads/2017/...

"Belief in both ghosts and U.F.O's has increased slightly since October 2007, by two and five percentage points, respectively. Men are more likely than women to believe in U.F.Os (43% men, 35% women), while women are more likely to believe in ghosts (41% women, 32% men) and spells or witchcraft (26% women, 15% men)." -- https://www.ipsos.com/en-us/news-polls/belief-in-ghosts-2021

"A new Associated Press-GfK poll shows that 77 percent of adults believe [angels] are real. [...] belief in angels is fairly widespread even among the less religious. A majority of non-Christians think angels exist, as do more than 4 in 10 of those who never attend religious services." -- https://www.cbsnews.com/news/poll-nearly-8-in-10-americans-b...


The other day someone mentioned that these surveys consistently have about a 5% troll rate.

The 77% belief in angels is bizarre though. Like I believe in the possibility of aliens, the universe is quite large. Although I think all spacecraft sightings are almost certainly just mundane stuff from spy planes to weather balloons, etc. I even believe in the possibility of ghosts being real, more likely some strange phenomenon we can't explain that we might misidentify as ghosts. But angels?

One man's angel is another man's ghost or alien though I guess.


If you buy into God, angels are on par with aliens, possibly even more probable.


Indeed. According to [1], it would appear 58% of the US officially believes in angels by creed (Protestant, Catholic, Mormon, Orthodox, Jewish, or Muslim). Only 11% are atheist or agnostic, and there's a 30% group that's religious but "unaffiliated" or "other". I totally buy that two thirds of "religious but unaffiliated or other" would believe in angels.

The difference here is that to the religious mind, angels are credible in a way that UFOs, ghosts, and magic are not. (The irreligious mind probably finds them all equally credible, hence the disconnect.)

Put another way, it would not surprise me that someone who was "religious but not affiliated" might have a high regard for the Bible. Angels figure prominently in the Bible, and hence fall in that bucket.

[1] https://en.wikipedia.org/wiki/Irreligion_in_the_United_State...


I would be very interested to see a citation on the troll rate.


Lizardman’s Constant by Scott Alexander:

https://slatestarcodex.com/2013/04/12/noisy-poll-results-and...


That's not a citation. That's a guy making things up.


I'm surprised aliens is the low one here. The exact question is "Do you believe aliens exist, or not?", not something more specific like little green men in flying saucers abducting cows.

The universe is large. In the tiny slice we can observe well enough to draw conclusions, Wikipedia currently lists 62 "potentially habitable exoplanets". I'd be much more surprised by intelligent life being unique to Earth than by there being many planets harboring intelligent life, or to answer the question as asked: I believe aliens exist.

https://en.wikipedia.org/wiki/List_of_potentially_habitable_...


"Belief" or stated belief to an anon survey?


If we go that way, a lot more believe god exists!


That sounds very much like how ChatGPT acts.


Is it? GPT just hallucinates the next words in a given text.


Of course it is. What in the parent's post is different from that? The parent post's first sentence is, "People suffering from psychosis can create 'facts' supporting their ideas and believe in them."


I don't think GPT believes something.



Yea schizophrenia is no joke. Even the follow up apology makes it clear he hasn’t recovered.


The Terry A. Davis reference was bemusing.


In the PDF there's a mention of Terry Davis, so I'm tempted to think this is actually a bit of a troll.


That PDF links to https://web.archive.org/web/20210223111850/https://www.nerve.... Would be quite the troll to go to the effort of buying a domain just to mess with an open source author.


You're probably right - I assumed the name Terry Davis being embedded in an email following a schizophrenic rant about software was a ruse.


I think it was genuine admiration, at least that is how I took it.


Replying to a child comment I can't reply to directly:

There was a period when the US treated public-key encryption like an arms export, and people involved in spreading the technology outside the US ended up on us.gov's shit list.


https://en.wikipedia.org/wiki/Phil_Zimmermann

After a report from RSA Security, who were in a licensing dispute with regard to the use of the RSA algorithm in PGP, the United States Customs Service started a criminal investigation of Zimmermann, for allegedly violating the Arms Export Control Act.[5] The United States Government had long regarded cryptographic software as a munition, and thus subject to arms trafficking export controls. At that time, PGP was considered to be impermissible ("high-strength") for export from the United States. The maximum strength allowed for legal export has since been raised and now allows PGP to be exported. The investigation lasted three years, but was finally dropped without filing charges after MIT Press published the source code of PGP

They tried to ruin the man.


Because he was competing with a private military contractor, and the US government is a wholly owned subsidiary of the MIC: or often acts like it is. Customs should have told RSA "no", "this is a private contract dispute", "hire a lawyer and file suit". Of course it was much more than that. Zimmerman put real privacy protecting encryption in the hands of the public, and the Many Eyes (that included state allies and adversaries) couldn't have that. But they needn't have worried: decades on the public is still ignorant about encryption, except as a marketing term, and most have no idea what a key pair is or what to do with it. Fraud around unauthorized access to government and commercial accounts is rampant (you _have_ set up and secured your online identity on your government's social security and revenue collection sites, haven't you?). That could have been prevented by early adoption and distribution of key pairs, alongside a serious public education campaign. Problem is, that would be at cross purposes with the goal of keeping the public uneducatable. Better for them to while away their time watching cable TV or delving into the latest conspiracy theory (pro or con).


I consider him equally important to people like Tim Berners-Lee for building the foundation of the web.


I read the travel issues post you linked, but am not seeing the causal link you’re drawing between development of software and visa issues. Was there more to the story?


I may have remembered incorrectly which post it was. Here[1], in the paragraph titled "Why they deny me?" (unlinkable), Daniel hints at the possibility that this may have been due to the development of (lib)curl, which is used for malware creation by 3rd parties. There was no proof though.

[1] https://daniel.haxx.se/blog/2018/07/28/administrative-purgat...


The most superficial (and likely) reason to me seems to be that he uses haxx.se. I really wonder what kind of investigation they do. If they just start with Google, this one might come up immediately.


Ah, that makes sense. I have no dog in the fight and am far from the emotion of having a visa delayed in this circumstance. I would say that it was much more likely to be some level of incompetence than malice, having dealt with large government bureaucracies myself.


> ridiculous US travel visa obtaining issues

Ridiculous? This is a pretty common issue for anyone who travels to the US. A visa may be denied for whatever reason, and tough luck on appeal. I am an EU citizen and had a similar experience just for visiting Iran on a tourist trip. Do not even ask about guys from India, Pakistan or less fortunate countries.

And it got even worse with the pandemic. The US required vaccination for a very long time, long after it was relevant. Maybe they still do; frankly, I do not care to look at this point!

I think the biggest WTF here is why an international organization like Mozilla is organizing a company-wide meetup in the US, and not in a country with a liberal visa entry policy such as Mexico!


I applied for a US travel visa as a citizen of Poland in 2012 and was denied travel due to the "wrong type of visa". I was planning to visit my employer and spend 1-2 weeks traveling across the country. Apparently both business and travel visas were inappropriate for these purposes. To add, I was questioned in a US consulate/embassy (can't remember which) in Warsaw by a person who repeatedly refused to speak English, insisted on Polish, and I, as a native Polish speaker, had issues understanding them. Poor experience.

This was not the case for Swedish citizens, which is mentioned at the beginning of Daniel's linked post. Sweden is part of the Visa Waiver Program (ESTA)[1], and Daniel traveled to the US multiple times before being denied travel (with a still-valid ESTA) and only then applied for a visa.

[1] https://esta.cbp.dhs.gov/


I believe that B1/B2 should work just fine for these purposes.

Probably you answered an officer (or airline worker) that you were gonna "work" there, not just visit your employer for an event?


Absolutely not. I had, and still have, my own small business in Poland and I was clear (in writing) that I am planning to visit my main client.


You mentioned both employer and client, are they the same?


Yeah. I consider a one-man small business serving mainly one big client to be comparable. On paper it's B2B; in reality it's working for the client, and if the client is the small business's main source of income, it's pretty much employment.

The differences, in Poland at least, are that the small business owner in this scenario is not protected by employment laws (3 months' notice before layoff, damages liability capped at roughly 3 months' salary, etc.) and uses the company's (EU) VAT registration number instead of the personal social security number equivalent (the PESEL number). It eases contract agreements abroad and invoicing, and allows serving more clients easily. Company existence can also be validated quickly on the EU VIES[1] website.

In the visa case, I have of course used the "paper" phrasing as in reality I was, and am, only employed by my own small business.

[1] https://ec.europa.eu/taxation_customs/vies/#/vat-validation


Ridiculous does not mean uncommon. Situations can be both common and ridiculous (absurd).

I'm American, but I have enough friends and family from other countries (my wife is an Iranian passport holder) to know what you're talking about and how difficult it can be.


While it’s true that US visa applications can be difficult, the same is true for any first-world country.

I was born in a third-world country, and ended up getting tourist visas to the EU, US, and Canada. US was by far the easiest - for me anyway.

If you want a large global meeting in a safe country (I would never in my life go to Mexico) there will be visa issues.


It seems like another symptom of ZIRP/cheap money.

Lots of ideas that could have been a neat feature or tool somehow ended up raising $500M of funding with no viable plan on ever monetizing.

The fact that the product is successful but after a decade they barely make $50M/year of revenue against $500M of lifetime funding is crazy. As a user, you can work at a company with a billion in revenue and barely owe them a few thousand a year. Or you might just use Podman for free, and prefer it due to some of the design differences.

At the very least, a lot of these firms, with VC pressure, overstayed their welcome as private enterprises and should have sold themselves to a larger firm.


Some time ago I learned that Postman Labs, which produces a nice but not-rocket-science HTTP client, raised $433M at a multi-billion valuation and has 500 employees. Isn't that astonishing?


Postman's strength is not in the HTTP client part. It is in the SaaS part, and I think their valuation (even though overblown) mostly reflects their corporate penetration and the willingness of many companies to pay a small amount for their services.


The SaaS part being the offering for creating developer.acme.com type pages?


No.

Centralizing and sharing your API descriptions, test suites and plans, the various ad-hoc queries people usually keep in their notes or on Slack (and lose), handling involved auth stuff which is a hassle with curl, etc.

I think they gravitate towards the same area as swagger.io or stoplight.io, but from the direction of using the existing APIs.


API schemas and test suites are usually stored as code in some sort of SCM. I googled "postman maven" and "postman gradle" and found nothing official so I guess they have nothing except stand-alone workspaces.

An API registry is a useful tool given the modern love for nanoservices, where a team of five somehow manages ten of them, but I don't see anything similar done by Postman. Two of the service registries I know of were implemented in-house for obvious reasons.


Do you also mean things like Uber, with double-digit billions lost and no road to profitability? I agree.


And Lyft... and Doordash... and GrubHub...

Pretty much the entire "gig economy" is full of hot air and survives on regular influxes of VC money despite massive losses every year. The business model doesn't frickin work.

The hope from investors was that they would be investing into what would ultimately become a monopoly that could extract rents to repay them (not very competitive market of them, but that's tipping the hat a little isn't it...) but the funny bit is there's like 5-7 competitors in the US alone doing the same thing.

Here's a take: maybe this is just a natural monopoly situation, and if we like the convenience of gig delivery but don't like the high prices per order or that gig workers don't get sufficient pay, health insurance or other benefits, how about we just nationalize it?

You know, the same way we did for everything that wasn't food or groceries before? USPS Courier service sounds like an idea to me.


Nationalize it? No way. Besides I like rich investors ponying up money so my ride/food is more convenient and cheaper! It won’t last, but then what does?


I think that Docker can have a viable business plan but they had terrible execution. At my previous position, I wanted to use DockerHub more heavily but the entire experience was like a bootstrap project someone did as a university assignment. Many advanced features for organizations were lacking (SSO/SAML) that we would have happily paid for.


That, plus not being willing to accept Purchase Orders, doomed them with my employer.

It's as if they had no idea how things work at large enterprises that are older than most Docker employees.


Indeed. Docker should've been plumbing. They could've had a really nice business with developer tools on top of the core bits, but they decided to try to jump straight to enterprise and did a number of things to alienate partners and their broader community.

Instead of adding value to Docker they're just trying to find the right knobs to twist to force people into paying. And I think people should pay for value when they're using Docker substantially for business. But it seems like a very short-minded play for cash disregarding their long-term relationship with users and customers.

All that said: They have to find revenue to continue development of all the things people do like. I'd encourage people to ask if the things they've gotten for free do in fact have value, and if that's the case, maybe disregard the ham-fistedness and pony up if possible.


Yeah! I should be able to get 50x value from software and not pay for it /s

The open source community carried Docker on its back and is now bending over. Let this be a lesson to you. If you're building open source, maybe stick to open source solutions in your tech stack, and if it's not there, build it. This is what Apache does for the Java ecosystem.

I don't have sympathy, the writing was on the wall and this isn't the first time it's happened to the community.


> If you're building open source, maybe stick to open source solutions in your tech stack and if it's not there build it. This is what Apache does for the Java ecosystem.

You mean this Apache: https://github.com/apache ?


In all fairness, curl is purely a software tool. Docker is arguably more like a service. As such, it creates costs for and direct dependency on the entity behind it.


Docker is a software tool. Docker Hub is a service. If Docker didn't stand up Docker Hub the equivalent services from GitHub, Google et al would have competed on a more even playing field.


It's almost like they created intentional ambiguity here when they renamed the company (dotCloud) to match the name of the open source tool, then renamed the open source project behind the tool to something else (Moby) but kept it for the command line tool, while also attaching the name Docker to their product offerings, including Engine and Desktop, which handle completely different parts of managing containers. That's not even including registries, Dockerfiles, Compose, Swarm, etc., and the ambiguity around where those sit in the Venn diagram.

That's some Google-level naming strategy there.


Lots of orgs figure out how to piggy back the “service” part of whatever they’re doing on free or sponsored infrastructure, though. Homebrew, for example, has been doing a lot of the same stuff on Travis and GH Actions since forever.


I think they potentially could have made a decent business out of it but they made a lot of bad business decisions.

I find myself shaking my head at a lot of their technical decisions too.

Podman seems to me to be a case study for how to do this right.


Podman is interesting. I like the architecture problems it solves with respect to Docker but the way they went about it was typical big business Red Hat. Dan Walsh, Podman's BDFL it seems, basically stood in front of RHEL / OpenShift customers for years bashing Docker even when a majority of the things he was claiming were less than half baked. RHEL made sly moves like not supporting the Docker runtime, even at a time when it put their customers in an awkward spot before containerd won the k8s runtime war. Podman is backed by much larger corporate machinery. If anyone thinks that Podman "winning" is a good thing then you've played right in to Walsh's antics. RHEL wants nothing more than to have no friction when selling all the "open source" tooling you may need in your enterprise.

Podman wasn't built out of necessity but out of fiscal competitive maneuvering. And it's working. I see so many articles on the "risks" of Docker vs Podman. The root wars are all over the place. Yet... The topic is blown way out of proportion by RHEL for a reason: FUD all in the name of sales. Is there merit to the claim? For sure. Docker's architecture was originally built up as client/server for a different purpose. That didn't play out and the architecture ended up being a side effect of that. But we don't see container escape nearly as much as Red Hat would like us to believe. I keep paying Docker because I don't want to live in Red Hat's world, with their tooling that they can just lock out of other platforms once they feel like it. No thanks.


Podman winning is good. Red Hat consistently does things right, for example their quay.io is open source, unlike Docker Hub and GitHub Container Registry. The risks of not using rootless containers weren’t blown way out of proportion, because rootless containers really are much more secure. Not requiring a daemon, supporting cgroup v2 early, supporting rootless containers well and having them as the default, these are all good engineering decisions with big benefits for the users. In this and many other things, Red Hat eventually wins because they are more open-source friendly and because they hire better developers who make better engineering decisions.


> In this and many other things, Red Hat eventually wins because they are more open-source friendly and because they hire better developers who make better engineering decisions.

We must be talking about a different Red Hat here. Podman, with breaking changes in every version, that is supposedly feature- and CLI-complete with Docker but isn't actually, is winning because it's more open source friendly or better technically? Or systemd, written in a memory-unsafe language (yes, that is a problem for something so critical, and it was already exploited at least a couple of times), using a weird special format for its configuration, where the lead dev insults people and refuses to backport patches (no, updating systemd isn't a good idea), won "because it was more open source friendly"? Or OpenShift, which tries to supplant Kubernetes stuff with Red Hat-specific stuff that doesn't work in some cases (e.g. TCP IngressRoutes lack many features), is winning "because it was more open source friendly"?

No, Red Hat are just good at marketing, are an established name, and know how to push their products/projects well, even if they're not good or even ready (Podman is barely ready but has been pushed for years by this point).


>Or systemd, written in a memory unsafe language (yes, that is a problem for something so critical and was already exploited at least a couple of times)

What memory safe language 1) existed in 2010 and 2) is thoroughly portable to every architecture people commonly run Linux on and 3) is suitable for software as low-level as the init?

Rust is an option now but it wasn't back then. And Rust is being evaluated now, even though it's not quite ready yet on #2.


Go, although with its GC it's debatable to what extent it's suitable for very low-level software.

And honestly, the language choice was only the tip of the iceberg; it took years of people adapting before systemd became usable. And it still doesn't handle circular dependencies better than arbitrarily, which is ridiculous: literally one of its main jobs is to handle dependencies.


There's Ada.


Ada has no ecosystem, and a lot of the ecosystem that does exist is proprietary, and it brings us back to point #2.


> Ada has no ecosystem, and a lot of the ecosystem that does exist is proprietary,

Not no ecosystem, but yes it's way smaller... probably even smaller than Rust, yes.

> and it brings us back to point #2.

I seriously doubt it. Ada is supported directly in gcc; why would it have any worse platform coverage than anything else?


it would be fun if we could simulate the world where systemd was written in Ada and then read all the comments/criticism


I first found Podman when looking for alternatives when Docker broke on my laptop in the midst of all the Docker Desktop licensing changes. Frankly, I use it because it has been more stable lately, not because of any long run marketing campaign from Red Hat. I suspect a lot of its userbase will be in a similar place as the experience with Docker continues to degrade.


OTOH, Docker didn't want to support a lot of features that enterprise customers wanted, like self-hosted private registries, because they wanted people using Dockerhub.

And weren't the runtime problems because Docker was very, very late to adopting cgroups v2?


Yes cgroupsv2 was a big problem for docker on EL8 for a long time.


Yes exactly. GP is misinformed on the history. Red Hat didn't sabotage anything. Docker took forever to update to cgroups v2, and that broke it for distros like Fedora that are up to date. The user had to downgrade their kernel in order to use Docker, but if they did, everything else worked fine.


While you have a valid point with cgroups I never stated anything about "sabotage". So let's not play the misinformed card and then go on making things up.

As for Red Hat and their games of not supporting Docker, even after cgroups were addressed Red Hat never officially supported Docker as a runtime. How do I know this? Because at the time I was working with paying clients of RHEL/OpenShift and was on calls regarding said customers being forced to use inferior (their words) RHEL tooling. So while your history may not have seen the games Red Hat was playing, they surely were happening.


You may have a healthy dislike for the corporate behemoth that is RH / IBM, but, to my mind, Docker, Inc is worse: they keep more things closed, and they literally pressure for money.

I mean, I wish guys like FSF would have produced a viable Docker alternative, but this hasn't happened, at least yet.


>I don't want to live in Red Hat's world, with their tooling that they can just lock out of other platforms once they feel like it

Explain please. This sounds like you're accusing RH of sabotaging Docker, or planning to. That's a very serious accusation requiring proof.


I'm not sure why it's so hard for anyone to find this on their own. OpenShift forced users to use CRI-O, and RHEL removed Docker from the Yum repository.

Plenty of references to this: https://crunchtools.com/docker-support/

Even though, at the time, CRI-O was a much worse option. Yes, Red Hat plays competitive lockout games all day long. This is just a singular example.


Some of it also sounds a bit like leftover angst from Red Hat winning the systemd war too.

Turns out hanging out in someone else’s cathedral can have some pretty big benefits.


Red Hat has not won any systemd war. Of all the distributions out there using systemd, Red Hat is the one that uses the fewest systemd features. They are even going so far as to disable features.

See:

- https://bugzilla.redhat.com/show_bug.cgi?id=1962257

- https://gitlab.com/redhat/centos-stream/rpms/systemd/-/blob/...

Sometimes they even backport systemd features from more recent versions, disable them but leave man pages in the original state. Even the /usr split isn't progressing at all.

Meanwhile Fedora has implemented all these changes, which according to https://www.redhat.com/en/topics/linux/what-is-centos-stream, should be the upstream for CentOS.

I would say Red Hat dropped the ball on systemd and has no intention of supporting any of the new features in any of their systems.


I too find Red Hat's poor documentation hygiene a pain in the arse. But as for the disabled systemd features, I think they all fall into the category of experimental/unproven features that overlap with other existing RHEL components. Every enabled feature has a cost in the form of support burden.


Those are not "systemd features", they are components within the systemd suite. Using systemd-init has never required that you use every component within the systemd suite (e.g. ntp, network management, etc.)


>Podman is backed by much larger corporate machinery. If anyone thinks that Podman "winning" is a good thing then you've played right in to Walsh's antics.

I'm not making a moral judgement. I'm just saying that docker had serious technical problems and docker the business sucked at monetizing it.

Docker played into Red Hat's tactics. I'd never heard of Dan Walsh, and frankly, I'd wanted rootless containers for years before I ever heard of Podman.

>Podman wasn't built out of necessity but out of fiscal competitive maneuvering.

Because Red Hat is a business, not a charity.

I doubt they would have built a better docker if docker wasn't refusing to improve.


I am usually an early adopter but I keep coming back to Docker since Podman is still very rough around the edges, especially in terms of "non-direct" usage (aka other tools)


As someone who's been bitten by this, I'm not sure if it's an issue with podman itself as much as the tools which expect docker. It could be argued that podman is not a docker drop-in replacement, but I expect more and more tools to pick it up.


> It could be argued that podman is not a docker drop-in replacement

This is an unfortunate part IMHO. podman is not a docker drop-in replacement, but it is advertised as such.


Besides the advertising, it's very close to being a drop-in replacement, but their pace isn't closing that gap quickly enough (or maybe they don't want to, or it isn't possible, idk, I'm just a user), so you get a false sense of confidence doing the simple stuff before you run into a parity problem.


Worth remembering is that Docker supports Windows containers. That’s a hard requirement for many enterprises.


Is this a matter of developers constantly relearning the lesson of the folly of only supporting the current top thing, or is it a lot harder to support more than one?


I don't know how "hard" it is, but in my particular case I wanted to use this from IntelliJ. It actually works, but the issue is that the Docker-emulation socket isn't where the IDE expects it, and I hadn't found a way to tell it where to look.

Once I symlinked the socket, everything worked.


This worked for me:

Connect to Docker Daemon with -> TCP Socket -> Engine API URL -> unix:///run/user/$UID/podman/podman.sock
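(For that URL to exist, the rootless Podman API socket usually has to be enabled first; on systemd-based distros that's typically:

    systemctl --user enable --now podman.socket

which is what creates /run/user/$UID/podman/podman.sock.)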


The devil is in the details. For example, docker has a DNS service at 127.0.0.11 that it injects into /etc/resolv.conf, while podman injects the domain names of your network into /etc/hosts. Nginx requires a real DNS service for its `resolver` configuration.
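For example, the usual nginx pattern that relies on it looks roughly like this (a sketch, inside a server block; "api" is a hypothetical service name):

    # re-resolve the container name at request time instead of at config load
    resolver 127.0.0.11 valid=10s;   # Docker's embedded DNS
    set $backend http://api:8080;    # hypothetical upstream service
    location / {
        proxy_pass $backend;
    }

With podman's /etc/hosts approach there is nothing for `resolver` to query, so this kind of config breaks.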


Docker was created by dotCloud for a different purpose than it ended up as. I think they are owed credit for what has become an incredibly elegant solution to many problems, and how great the user experience has always been.

Compare it to other corporate-managed tools like Terraform and Ansible. Both of them have horrible UX and really bad design decisions. Both make me hate doing my job, yet you can't not use them because they're so popular your company will standardize on them anyway. Docker, on the other hand, is a relative joy to use. It remains simple, intuitive, effective, and full of features, yet never seems to suffer from bloat. It just works well, on all platforms. There were a few years of pain on different platforms, but now it's rock solid.

And to be fair to them, their Moby project is pretty solidly open-source, and if Docker Inc dies, the project will continue.


> Instead it has a multi-million dollar company behind it, and VC's who demand profits from a thing that shouldn't have ever had a business plan.

But you don't have to host curl? Who's going to put up the money to host all the images and bandwidth that tens of thousands of companies use but never pay for?


It could have been designed with a self-host option or a torrent/IPFS backend for near-zero hosting costs and still 'just work' for the user.


Pretty sure you haven't used IPFS before.

For users to download resources from IPFS you either need to install the client (which is quite resource-intensive) or use the gateways (which are just servers and cost money to run).

Also, the speed and reliability are nowhere near good enough for serious work.


Even dumber, it should have just been pointers to encrypted files hosted on any arbitrary web server.


THIS!

Alternatives:

- Virtual registry that builds and caches image chains on-demand, locally

- Maybe a free protocol like Bit Torrent to store and transfer the images


Yeah, but curl is used to access and download all sorts of data, which are all hosted by multi-million dollar companies. Just like git downloads and uploads data to git repositories. curl and git are valuable, but so is GitHub, and websites in general. The problem is that they haven't found a way to monetize docker hub.


The VCs offered free bandwidth and storage to gain market share.

Bandwidth and storage are ultimately not free; they have to be paid for.


FWIW, Docker was not intended originally to be a tool for commercialization; it grew out of dotCloud, which open sourced the tool as a last-ditch-effort of sorts, if memory serves.


Yes, it was obvious even when it launched, because they packaged and configured existing solutions. It was like having a company behind 'ls' (irony intended).


Coming through: https://github.com/ihucos/plash - 90% done and useful


What do you mean, "demand profit"?

Last time I checked, rent is not free, food is not free, a bus ticket is not free. No reason why software should be free.

Open source was invented by big companies as a "marginalize your complement" strategy, not the ideal it is marketed as. As evidence, I do not see any cloud vendor open sourcing their code?


> Open source was invented by big companies as a "marginalize your complement" strategy, not the ideal it is marketed as.

> In 1983, Richard Stallman launched the GNU Project to write a complete operating system free from constraints on use of its source code. Particular incidents that motivated this include a case where an annoying printer couldn't be fixed because the source code was withheld from users.

from https://en.m.wikipedia.org/wiki/History_of_free_and_open-sou...

> Last time I checked, rent is not free, food is not free, a bus ticket is not free. No reason why software should be free.

You are welcome to sell your software. You are welcome to be replaced if you can't compete. You don't have to sell your software and we don't have to buy it. You can and will be competed with.

Trying to build a multimillion-dollar venture off a UI - even a good UI - is probably unwise. It does not seem to be going well for Docker, which has gone from no competitors to multiple, and all of those competitors are open source.


From your very link, 1983's GNU Project was not the first piece of Open Source software.

From your link: The first example of free and open-source software is believed to be the A-2 system, developed at the UNIVAC division of Remington Rand in 1953


> Software was not considered copyrightable before the 1974 US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright"

FOSS before 1974 looks.. funny. It existed! But it did not look like the modern FOSS movement.

Even post 1974 and pre-GNU, FOSS-ish text editors and such existed. This was still the era when licenses were often non-standard and frequently did not exist. Handing your friend a copy of a program was the norm, regardless the actual legal situation (which itself was probably vague and unspecified).


I'd like to see Docker succeed. They invented / formalized the space and deserve credit for that. They are probably doing the right thing with some of their development tooling (though maybe that should just be spun off to Microsoft) and ensuring images do not contain badware is something companies will pay for.

However, their core offering must be the leader if they want to survive. Devs must want to use "docker run" instead of "podman run" for example. Docker needs to be the obvious #1 for starting a container on a single machine.


> I'd like to see Docker succeed. They invented / formalized the space and deserve credit for that.

If by succeed, you mean they deserve to have revenue, I disagree.

They spun some cool work out of dotCloud when it failed. They seemed to delay thinking about how they'd monetize the work, and sort of fell into charging for developer tooling after their orchestration play lost to kubernetes.

At this point, I think of Docker the company as a wannabe Oracle. They are desperate for money, and are hoping they can fool you into adopting their tech so they can ransom it from you once you rely on it. If that sounds appealing to you, I'd say go for it.

For me, that situation seems worse than what I do without containers at my disposal. In other words, the solution is worse overall than the problem.



I mean, OCI and containerd exist. You can have "Docker" containers without the Docker just fine. Just need to replace the user tooling, which I assume podman does? (never used it)


Yes, podman has the tooling - identical cli in fact for the most part. alias docker=podman works fine for most things.


Forgot the /s


> their core offering must be the leader if they want to survive. Devs must want to use "docker run" instead of "podman run"

If their core offering is container hosting, they should be able to make a company out of that even without the client. After all, JFrog and Cloudsmith are more or less just that, as is GitHub.


Docker's core offering is docker (obviously), but that doesn't have to be the piece that is monetized.


    ideally a coop of some variety
This is the role I feel like podman, the tool developed by Red Hat, is filling.


Podman is great and is a first-class citizen on Fedora. It also integrates nicely with SystemD. My only gripe with it is that not many developers provide podman configuration on their install pages like they do with docker compose.


Tangent: Why is the misspelling "SystemD" so common, when it has always been "systemd"? I would understand "Systemd" or "SYSTEMD" or something, but why specifically this weird spelling?


People not familiar with tacking on a lowercased ‘d’ to the name for daemons?


Probably to specifically call it out as "systemd" versus autocorrected misspelling of "systems".


Instinctively applying Pascal case, maybe?


I've always thought of it as in analogy to System V.


Nah, it's French.

> System D is a manner of responding to challenges that require one to have the ability to think quickly, to adapt, and to improvise when getting a job done.

> The term is a direct translation of French Système D. The letter D refers to any one of the French nouns débrouille, débrouillardise or démerde (French slang). The verbs se débrouiller and se démerder mean to make do, to manage, especially in an adverse situation. Basically, it refers to one's ability and need to be resourceful.

Source: https://en.wikipedia.org/wiki/System_D


Interestingly, https://www.freedesktop.org/wiki/Software/systemd/#spelling says...

> But then again, if [calling it systemd] appears too simple to you, call it (but never spell it!) System Five Hundred since D is the roman numeral for 500 (this also clarifies the relation to System V, right?).


I'm using docker-compose with a podman VM for development on a mac. Works ok so far. It wasn't quite slick enough when Docker pulled the licence switch last year, but the experience in the last couple of months has been pretty painless.


Fortunately you can use docker-compose with Podman these days.

(There have been a few false starts so I'm specifically referring to the vanilla unmodified docker-compose that makes Docker API calls to a UNIX socket which Podman can listen to).
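Concretely, once the Podman socket is running, pointing vanilla docker-compose at it is just an environment variable (rootless socket path shown; adjust if you run Podman as root):

    export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock
    docker-compose up -d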


This is more about Docker hub than Docker.

Image hosting is expensive at scale, and someone's got to pay for the compute/storage/network...


Docker Hub's the part I care about the most.

If I can't use it as a daemon-focused package manager that works more-or-less the same everywhere with minimal friction without having to learn or recall the particulars of whatever distro (hell, on my home server it even saves me from having to fuck with systemd) and with isolation so I can run a bunch of versions of anything, I'll probably just stop using it.

Everything else about it is secondary to its role as the de facto universal package manager for open source server software, from my perspective.

... of course, this is exactly the kind of thing they don't want, because it costs money without making any—but I do wonder if this'll bite them in the ass, long-term, from loss of mindshare. Maybe building in some kind of transparent bandwidth-sharing scheme (bittorrent/DHT or whatever) would have been a better move. I'd enable it on my server at home, at least, provided I could easily set some limits to keep it from going too nuts.


>> Image hosting is expensive at scale, and someone's got to pay for the compute/storage/network..

Bit Torrent would beg to differ.


That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download


> That's a neat idea but probably unworkable in practice. Container images need to be reliably available quickly; there is no appetite for the uncertainties surrounding the average torrent download

Bittorrent seems to work quite well for linux isos, which are about the same size as containers, for obvious reasons.

IMO, the big difference is that, with bittorrent, it's possible to very inexpensively add lots of semi-reliable bandwidth.


Nobody is going to accept worrying about whether the torrent has enough people seeding in the middle of a CI run. And your usual torrent download is an explicit action with an explicit client, how are people going to seed these images and why would they? And what about the long tail?


Nobody needs to be seeding if only one download is active. You could self host an image at home on a Raspberry Pi and provide an image in a minute.

Nobody's CI should be depending on an external download of that size.


We are talking about replacing the docker hub and the like, what people "should" be doing and what happens in the real world are substantially different. If this hypothetical replacement can't serve basic existing use cases it is dead at the starting line.


> enough people seeding

the .torrent file format, and clients, include explicit support for HTTP mirrors serving the same files that's distributed via P2P.


Archive.org does this with theirs. If there are no seeds (super common with their torrents—IDK, maybe a few popular files of theirs do have lots of seeds and that saves them a lot of bandwidth, but sometimes I wonder why they bother) then it'll basically do the same thing as downloading from their website. I've seen it called a "web seed". Only place I've seen use it, but evidently the functionality is there.


I'm pretty much convinced the people at Docker have explicitly made their "registry" not be just downloadable static files purely to enable the rent-seeking behavior we are seeing here...


Cache images locally. Docker has enough provisions for image mirrors and caches.

Downloading tens or hundreds of megabytes of exactly the same image, on every CI run, at someone else's expense, is predictably unsustainable.
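For the local-cache route, pointing the Docker daemon at a pull-through mirror is a single setting in /etc/docker/daemon.json (the mirror URL is a placeholder; as far as I know this only applies to Docker Hub pulls, not other registries):

    {
      "registry-mirrors": ["https://mirror.internal.example.com"]
    }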


People who need "reliably available quickly" can pay or set up their own mirror. Everyone else can use the torrent system.


Not a bad idea. Have the users seed the cached images.


I agree. The core devs should create a new company and focus just on the tools, with a simple, scaling licence model for them.

As far as DockerHub goes, the OSS hosting costs do need to be solved, but surely they can be.


I'm not sure it's easy. We're seeing other open source projects like Kubernetes struggle with hosting costs, and that's just one project.

Ideally it'd be great to see the industry fund it, but with budget cuts in tech, I'm not sure that'll happen...


I haven't seen that, but I haven't been following along. I'd assumed they would be very Google-funded still. Is it a general CNCF problem?


So I'm not across the details of this, but I understand from the k8s Slack that there is a fixed GCP budget for image hosting and Kubernetes is getting through it too quickly, which is why they're moving the registry domain from a GCP-specific one to a generic one, to allow other funding to be found and used.


While that's true, for the amount of network traffic they're likely moving around, I wonder where they're placing their servers.

eg something like AWS with massive data transfer costs, vs something else like carefully placed dedicated/colocation servers at places which don't charge for bandwidth


If it's AWS, they've surely got a huge discount. No way they're paying 8+x normal big-fish CDN rates for transfer. At their scale, it would have easily been worth the effort to move to something cheaper than AWS long ago, or else to negotiate a far lower rate.


It is on S3.

    keeb@hancock > [/home/keeb] dig +short hub.docker.com
    elb-default.us-east-1.aws.dckr.io.
    prodextdefblue-1cc5ls33lft-b42d79a68e9f190c.elb.us-east-1.amazonaws.com.


> No way they're paying 8+x normal big-fish CDN rates for transfer.

While you're probably right, I've seen dumber things happen so I wouldn't completely rule out the possibility. :wink:


Image hosting is not that expensive at scale. I can put an image on ECR and pay for bandwidth and storage at what are really not very good rates, and it still comes out way cheaper than what Docker Hub wants me to pay.


How is a tool developed by, and strongly pushed by (to the point of strongarming customers to transition to their tool, features lacking be damned) a corporation, especially one owned by IBM, filling the role of a coop-developed tool?


It's not as easy nor as simple as docker + docker compose.


It’s literally OCI compatible, integrates with systemd and LSM, and runs rootless by default. Podman is 100000% better designed on the inside with the same interface on the outside.


Rootless networking is still a mess with no IP source propagation and much slower performance. So for most users docker with userNS-remapping is actually a better choice.

Also systemd integration isn't a plus for me, I don't want to deal with SystemD just to have a container start on startup.


I think --network=pasta: helps with source IP preservation.

Regardless that has never bothered me since I'm only using podman or docker for local development...


Hmmm, pasta seems to solve all rootless networking issues...

https://github.com/containers/podman/pull/16141


It’s the lack of fully compatible compose that matters most.


Podman appears to support the compose v2 spec and the socket API, but still doesn't fully support BuildKit.

https://www.redhat.com/sysadmin/podman-compose-docker-compos...


You're right, it's both easier and simpler since no daemons are involved. podman-compose has the same command-line interface and has worked ok for me so far (maybe 3 or 4 years at this point).


Podman-compose isn't fully compatible with the new compose spec.

Also, I really don't care whether docker has a daemon or not; for me it offers features like auto-starting containers without bothering with SystemD, and auto-updates using Watchtower and the docker socket.

And since podman doesn't have an official distro package repo like docker, you are stuck using whatever old version ships in your distro, without recent improvements, which matters for a very actively developed project.


> Also I really don't care if docker has a daemon or not, for me it offers feature like auto starting containers without bothering with SystemD

Bingo, the "pain" of the daemon (it's never cause a single problem for me? Especially on Linux, on macOS I've occasionally had to go start it because it wasn't running, but BFD) saves me from having to touch systemd. Or, indeed, from caring WTF distro I'm running and which init system it uses at all.


To be fair, every mainstream distro now uses Systemd


> And since podman doesn't have an official repo like docker,

Hmm... https://github.com/containers/podman

I found that on: https://podman.io/ so, I'm pretty sure it's official.


I meant a repo for a distro package manager, so you can get the latest version regardless of whatever version your distro ships.


Most of the major distros ship podman in their repositories. Just use your package manager to install podman.


And these versions are often out of date, which matters given that podman is in active development and you want to be using the latest version.


I don't understand what the issue is. Don't use an LTS distro if you want up to date software. Fedora and Arch are up to date for Podman. Alpine seems to be one minor version behind.


I want stability for the system and a newer podman version. I do this all the time with docker, install an LTS distro and then add the official docker repos.


podman + podman-compose is as easy.


Not comparable to the full compose spec.


> Unfortunately Docker the company appears to be dying

Docker the company is crushing 2022-2023… record revenue and earnings


Of course, I can do a lot of record revenue and earnings selling five dollar bills for $4. I'm curious what a path to profit would look like...is this kind of squeeze the only way to get there?


Docker Hub is a massive expense when you consider the data storage and egress. To do that for open source projects you have to either (a) have a lot of income to cover such an expense, (b) have a pile of VC funding to cover the expense, or (c) pile on debt paying for it while you grow. (b) and (c) can only live on for so long.


That's a self inflicted problem by docker hub squatting the root namespace, though.


This was the initial pebble that led to Podman existing via Red Hat. No Red Hat customer wanted to pull from or push to Docker Hub by default due to a typo. No PRs would be accepted to change it, and after dealing with customer frustration over and over...


I'm not familiar with the 'root namespace squatting' or the typo issue. Do you mean the image namespace as described here: https://www.informit.com/articles/article.aspx?p=2464012&seq... or is there something else? What sort of typo would cause problems?


Yeah, this is a good summary of the problem. If I write a Dockerfile with

    FROM ubuntu:20.04 
    WORKDIR /app
    ADD mySecretAppBinary .
it will pull the base image from Docker Hub, and there is no way to stop it from doing so. If I run:

    image_tag=test-app
    docker build -t $image_tag .
    docker push $image_tag
it will push a container with my secret application to the public Docker Hub, assuming I am logged in (which of course I am, because Docker rate-limits you if you don't). I don't ever want to do that, ever, under any circumstances, and it's just not possible to opt out of while using docker.


This was the proposed PR that is summarized in that article: https://github.com/moby/moby/pull/10411

if you did `docker tag app supersecret/app:latest && docker push supersecret/app:latest` instead of `docker tag app registry.corp.com/supersecret/app:latest`, guess where your code just went?

Same on the pull side, if you wanted your corp's ubuntu base rather than just `docker pull ubuntu`.


Getting out of a self inflicted problem isn't so easy. They have spent a long time trying. For example, putting distribution in the CNCF, working with the OCI on specs (like the distribution spec), making it possible to use other registries while not breaking the workflows for existing folks, and even some cases of working with other registries (e.g., their MCR integration with Docker Hub that offloads some egress and storage).

The root namespace problem was created by an early stage startup many years ago. I feel for the rough spot they are in.


It's a self-inflicted problem they've doubled down on, though, and that self-inflicted problem is also the reason for their success. If Docker Hub could be removed with a config setting, the value-add of Docker the company would be significantly diminished. It's hard to feel sorry for a company that actively pursued lock-in and made no real attempt at avoiding it (you know what would help? A setting to not use Docker Hub, or to use a specific registry by default), that built an enormous company on a value-add that is a monkey's paw, and that has known that all along.

edit: https://github.com/moby/moby/pull/10411 is the change that would _actually_ solve the problem of docker squatting the root namespace, and they've decided against it because it would make Dockerfiles less portable (or really, it would neuter docker.io's role as the default place images live)
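
For contrast, the containers/image stack that podman and buildah use exposes exactly that knob; a sketch (registry host is a placeholder):

    # drop-in config so unqualified names like "ubuntu" resolve against your own registry, not docker.io
    echo 'unqualified-search-registries = ["registry.corp.example.com"]' | sudo tee /etc/containers/registries.conf.d/corp.conf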


I fail to follow. If DockerHub is the part that actively burns money, why stick with it? If, say, Docker Desktop is the part that actively brings in profits, why would it be affected if users used a different image registry? Most companies, except the smallest ones, use their own registry / mirror anyway.

Even better, the registry may continue to exist, but would (eventually) stop storing the images and start storing .torrent files for the actual downloads. Serving an image from the GitHub release page would be enough for most smaller projects (yes, BT supports HTTP mirrors directly).


I suspect that dockerhub isn't the part that bleeds money; I suspect that, like at many tech companies, the part that bleeds money is sales costs and developer costs (see gitlab). Unfortunately I can't prove that though.

Docker downsized from 400(!) to 60 people a few years back, and a quick search on Google says they're now back up at 600 employees again. They have ARR of $50M [0] , which is probably a little short of paying 60 people SV salaries, but it's nowhere near enough to pay for 600 people.

As regards the registry problem, I suspect "most companies" don't follow best practices and in fact do end up using docker hub for things like public images, but more importantly _there is no way to enforce this in the tool, and docker has refused to implement it when given a PR_.

[0] https://techcrunch.com/2022/02/01/docker-makes-comeback-reac...


> I feel for the rough spot they are in.

I don't, because there is this pattern of VCs funding business models that involve dumping millions of dollars of resources on the world as Open Source and then owning a part of the ecosystem.

Docker originally wanted to "own" everything; if CoreOS hadn't pushed for the OCI spec, de-balkanizing containers, Docker would have a near monopoly on the container ecosystem.

At this point Docker is just the command, and it is a tire fire of HCI gaffes.


I wonder if they would have a much smaller bill if they were running on physical hardware instead of renting infrastructure from AWS.

This is really not much different from https://news.ycombinator.com/item?id=35133510 case.


You can’t have positive earnings like Docker if you sell 5 for 4


I'm assuming creative calculations. Like Uber's idea of "earnings" changed when they went public.


Based on what? How is that more likely than just them being able to finally generate revenue, considering they started focusing on that a few years ago? I don't get your comment at all


Based on observation of other companies. Fluffing your earnings isn't rare for private companies looking for investors.


that would make you negative one dollar of earnings per sale


I agree given the usual definition of "net earnings", but private companies often represent earnings in creative ways that exclude obvious costs (hey Uber!).


OP's point is that revenue is easy, profit is not.


But $4 of revenue! Do that 1000x and think of the growth!


unless you use dotcom accounting, in which case you can say that you lose money on every sale, but make up for it in volume.


Can't comment specifically on this or that "dying company", but it is a bit disappointing that after what, four decades of open source, and the obvious utility of that paradigm, it still seems a major challenge to build sustainable open source ecosystems. This means we can't really move on and imagine grander things that might build on top of each other.

It's not clear if that is due to:

i) competition from proprietary business models

ii) more specifically the excessive concentration of said proprietary business models ("big tech")

iii) confusion from conflicting objectives and monetisation incentives (the various types of licenses etc)

iv) ill-adapted funding models (venture capital)

v) intrinsic to the concept and there is no solution

vi) just not having matured yet enough

What I am driving at is that building more complex structures requires some solid foundations and those typically require building blocks following some proven blueprint. Somehow much around open source is still precarious and made up. Ideally you'd want to walk into the chamber of commerce (or maybe the chamber of open source entities), pick a name, a legal entity type, a sector and get going. You focus on your solutions, not on how to survive in a world that doesn't quite know what to make of you.

Now, corporate structures and capital markets etc. took hundreds of years to settle (and are still flawed in many ways), but we do live in accelerated times, so maybe it's just a matter of getting our act together?


It's still doing better than it could be. Big tech companies have played way nicer than they had to, focusing more on vague long-term presence than on immediate profits, and imo continue to do so to a lesser extent. There always comes a point when the innovation is done and they lock things down again, but even then they have to fight their own employees.


With lots of open source licenses, there is no copyleft. Without copyleft, for profit companies can simply take the hard work, add a little on top, make it proprietary, and sell it. Customer mentality is to use the most comfortable thing, without paying attention to whom they depend on, often choosing the proprietary offering because of feature X.

There are healthy ecosystems, even some partially replacing docker, some with more daily updates than I can process, but they have copyleft licenses in place and are free software, to ensure contributions flow back. Companies can still make profit, but not from adding a minimalistic thing and making it proprietary. They need to find other ways.


> Without copyleft, for profit companies can simply take the hard work, add a little on top, make it proprietary, and sell it.

That's it. Pushover licenses are not helping at all.


It's because the incentives to make money quickly end up being stronger than the incentives to build sustainable open source ecosystems.


To me it's the opposite: Docker promotes bad software development practices that will hurt you in the end. In fact, most of the time when you hear that you need Docker to run some software, it's because that software is so badly written that installing it on a system is too complex.

Another bad use of Docker that I've seen is people who cannot figure out how to write systemd units, which is damn simple (just spend a day reading the documentation and learning the tools you need). Of course that makes administering the system much more complex, because you don't get the benefits that systemd gives you (and then you start using hyper-overengineered tools like Kubernetes just to run a webserver and a database...).
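
For anyone who hasn't tried, a unit file really is about this much (path and binary name are just an example):

    # /etc/systemd/system/mywebapp.service
    [Unit]
    Description=Example web app
    After=network.target
    [Service]
    ExecStart=/usr/local/bin/mywebapp
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target

Then `systemctl daemon-reload && systemctl enable --now mywebapp.service` and you're done.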

I'm maybe old school, but I use Docker as a last resort and prefer to have all the software installed properly on a server, using Ansible as the configuration management tool. To me a system that uses Docker containers is much more difficult to manage in the long run, while a system that doesn't is simpler: fewer things that will break, so if I need to make a fix in 10 years I ssh into the system, edit the program with vim, rebuild and restart the service. No complex deploy pipeline that breaks, no dependence on external sources that may be taken down (as is the case here) and similar stuff.


> To me it's the opposite: Docker promotes bad software development practices that will hurt you in the end. In fact, most of the time when you hear that you need Docker to run some software, it's because that software is so badly written that installing it on a system is too complex.

I think one reason you may be seeing downvotes here is that you have specific projects in mind, and without you naming them, others who haven't used such projects don't see how real the phenomenon is.

I was recently helping a friend work through some Nix configuration and he told me about a couple of different projects he used where deploying the software any way other than via Docker was treated as either officially or de facto unsupported. In some cases, dependencies are not even exhaustively named in the documentation. When users ask questions in community channels (often on Discord) about what the software's requirements are, they are (at least sometimes) directed to just use the pre-baked Docker images instead of receiving real answers to their questions.

This is second-hand info for me. I don't know how bad it really is, or how common, either. But that kind of thing absolutely screams to me, too, 'very few of us actually know how this thing works'.

Still, sharing that sentiment without giving a specific account of software that you've seen fall into this trap is likely to be dismissed and downvoted. Maybe it would be helpful to give some concrete examples of what brought all that to mind for you.


I think you are letting your specific feelings and head-canon about purity be mistaken for solid technical arguments.

If you’re sshing to boxes, editing things by hand and slinging ad-hoc commands around, then your frame of reference is so far away from understanding its value proposition that it’s probably pointless to discuss it.


Nix solves the ad-hoc problem much more cleanly than Dockerfiles do


There’s purity of design/implementation and there’s usage. Often these are inversely related.

I like nix - it’s probably the right direction, but just compare the UX to Docker. Comparatively Docker absolutely nailed it.


In many ways I agree with you.

Just like Visual Basic 6, it's super-easy to get started with a Dockerfile, but nigh-on impossible to create a quality result with it.


There are similarities in the implementation as well, specifically around content addressable storage and layers.

Nix needs reproducible builds as well, which is a limiting factor. What’s nice about Docker is that it’s flexible enough to actually get stuff done.


> Nix needs reproducible builds as well, which is a limiting factor.

It doesn't have to be, Nix lets you choose your level of purity and it can build docker containers.


There are a variety of "overlays" springing up for Nix that might become what docker did for lxc containers:

- devenv

- devbox

- others i'm forgetting


> worlds better than the old ways of managing dependencies and making sure everyone on a project is aligned on what versions of things are installed.

And Nix is worlds better than even this. Imagine!


I’m yet to be entirely sold on that, mostly because I think Nix the language isn’t anywhere near as accessible as Dockerfiles, but I’ll be the first one cheering if Nix does manage to take over.


Completely agree on the complexity criticism, but this interactive tutorial (that literally embeds a full nix interpreter in the browser) went a looooooong way towards making Nix files not just look like arcane incantations to me, and doesn't take very long to do:

https://nixcloud.io/tour/

if at some point you realize "oh... this is just JSON with a different syntax, some shorthands, and anonymous or library functions," you're on the right path


You might be interested in Devbox (http://jetpack.io/devbox)! We built Devbox because we were frustrated with our Docker based dev environments, and our goal is to provide the power of Nix with a more accessible interface (similar to yarn or other package managers).

We're open source and rapidly adding features, you can check us out on Github at https://github.com/jetpack-io/devbox



Does Nix have an equivalent of docker-compose yet?

nix-shell is amazing for installing binaries, but actually wiring up and running the services doesn't seem like a solved problem.

Unless Nix expects a separate tool to do this once binaries are installed, of course.


docker-compose seems necessary only because you have your "official postgres dockerfile" and your self-built "web app dockerfile" (and maybe other things like an ElasticSearch dockerfile)

Docker files seem necessary only because... well put it this way, think of a Docker image as "the cached result of a build that just so happened to succeed even though it was entirely likely not to, because Docker builds are NOT deterministic."

Now enter Nix, where builds are more or less guaranteed to work deterministically. You don't need to cache them into an "image" (well, the artifacts do get cached locally and online at places like https://www.cachix.org/, and the only reason they can do that is because they too are deterministically guaranteed to succeed, more or less), which means you can just include any and all services you need. (Unless they need to run as separate machines/VMs... in which case I suppose you just manage 2 nix files, but yes, "composing" in that case is not really fleshed out as a thing in Nix, to my knowledge)


except you cant deploy Nix files, and even if you could, better be sure that every employee is using Nix and have the same configuration. The whole point of docker is to make reproducible builds everywhere, not just your computer.


> except you cant deploy Nix files

NixOps and nix-deploy: EXIST! https://arista.my.site.com/AristaCommunity/s/article/Deploy-...

> better be sure that every employee is using Nix and have the same configuration. The whole point of docker is to make reproducible builds everywhere, not just your computer.

lol, "tell me you never used Nix without telling me you never used Nix" because it literally guarantees that, each project is a pure environment with no outside influences. THAT IS LITERALLY ITS ENTIRE PURPOSE OF EXISTENCE lolol

I absolutely guarantee you that you will have more reproducible builds with Nix than with Docker. I know, because I've worked with both of them for months on end, and I've noticed that it pains me to work with Docker more than it pains me to work with Nix (hey, it's not perfect either, but perfect is the enemy of good in this case)


First you are tightly coupling your CI to your developers machine, that in itself is already a pretty bad idea. Second, if one employee wants to install htop on their machine, then every employee will have to install it, this can quickly become a problem when you have 500+ developers. Third, I think you missed the first part on the second quote, you are FORCING every developer to not only use linux but also to use one distribution that is pretty niche.


> First you are tightly coupling your CI to your developers machine

It's not like you can't configure builds differently based on an ENV variable, which is exactly what most build tools already do.

> Second, if one employee wants to install htop on their machine, then every employee will have to install it

There's actually a way to install things on the fly on first use. That way, if you never use it, it will never install. If you use it more than once, it will use the locally-cached version. Next?
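
Something like this, if memory serves (stable CLI first, flakes-style second):

    nix-shell -p htop --run htop   # fetched on first use, served from the local store afterwards
    nix run nixpkgs#htop           # flakes-era equivalent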

> this can quickly become a problem when you have 500+ developers.

Nope. Already answered. Plus, you can allow some things into the "pure" environment if you wish. Harmless things like btop or htop, for example.

> Third, I think you missed the first part on the second quote, you are FORCING every developer

What part of "everyone has to use slow-ass non-deterministically-building Docker" is NOT "FORCING" every developer to use something? LOLOL. Plus, on Macs (which USED to be my preferred dev machine) it's slow as fuck, which is why I had to switch to a linux laptop anyway, which is why I said "fuck this" and installed NixOS and went to town instead.

> to not only use linux but also to use one distribution that is pretty niche.

First of all, wow, you are naïve. No good thing DID NOT start out "niche". Literally every technology I've gotten into except for ASP.CRAP was "niche" when I got into it- from Ruby, to Postgres, to Elixir, to Jquery (at the time)... Do not judge things based on their popularity because that's the Appeal to Popularity fallacy. Judge things based on their promise, young padawan. And Nix... promises much.


Docker already requires everyone to use Linux, which is why Docker Desktop is just a frontend for a dog slow VM on your Mac or Windows box.


> First you are tightly coupling your CI to your developers machine, that in itself is already a pretty bad idea.

How?!


On systems that have the same containerization feature that Docker requires, i.e., Linux systems with recent kernels, you can use nix-bundle¹ or the flakes-based experimental command inspired by it, `nix bundle`² to generate an all-in-one binary that you can run without installing Nix on the target.

1: https://github.com/matthewbauer/nix-bundle

2: https://github.com/NixOS/bundlers
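
A rough sketch of the flakes-based form (it's an experimental command, so the exact invocation may differ by Nix version):

    nix bundle nixpkgs#hello   # produces a self-contained ./hello
    ./hello                    # runs on a plain Linux box without the Nix store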


> except you cant deploy Nix files

Ooh let me try...

"except you can't deploy Dockerfiles"

With docker you deploy the artifact, same with Nix. Nix can also create docker images.
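
Roughly like this, assuming a flake that exposes an image built with nixpkgs' dockerTools (the output name here is made up):

    nix build .#dockerImage   # e.g. a dockerTools.buildLayeredImage derivation
    docker load < result      # result is an image tarball `docker load` understands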

> , and even if you could, better be sure that every employee is using Nix and have the same configuration.

How is this different than employees needing to have the same docker version?

In a large org, either will likely be provisioned.

> The whole point of docker is to make reproducible builds everywhere, not just your computer.

No.

Docker doesn't make reproducible builds, it makes repeatable builds.

The whole point of Nix is to make reproducible builds everywhere.


https://github.com/hercules-ci/arion, which allows docker-compose-style setups


oooh, I did not know of this, nice!


Yeah, there are basically two native options for managing and configuring services with Nix.

The easiest one is NixOS, and that'll be enough for most people, provided that they're okay using it as the OS for their servers.

The other is Disnix, which is a bit more cumbersome but also more flexible, and works fine for deploying Nix-based software to other systems.


In terms of UX and allowing services I've been liking devenv.

I've linked it a few times in this thread already for that reason.

Edit: see https://devenv.sh/containers/


How to run Nix on Gentoo and Debian?



> they can't work out how to build a business around what is at it's core a nice UI for Linux containers.

It's quite a shame (for lack of a better word) that the better, simpler and more intuitive a free product is, the harder it is to make money from it by selling support.

I think the best way to go from here would be building companion products and supporting the whole ecosystem. By companion products, I mean other standalone apps/services, not just a GUI for the existing one.


> Unfortunately Docker the company appears to be dying, this is the latest in a long line of decisions that are clearly being made because they can't work out how to build a business around what is at it's core a nice UI for Linux containers.

It should have been just a small company, doing this, and making some money for their trouble instead of whatever it is they're trying to be.


That might be possible for something that "only" writes and distributes software (that can also work at levels of success much less stellar than sqlite), but when you try to pull off hosting services at internet scale like docker hub did, with some free tiers, it's just not feasible to include it all in the bills of the subset of users that is actually willing and able to pay. The playbook is always the same: "maybe we could do that high-impact free tier if we had more funding?" and that "whatever it is they're trying to be" is an implicit consequence of that; the ambition comes with the funding. "Just a small company" means bootstrapped, and bulk hosting like docker hub would simply not happen.


First time I saw Docker, I thought "that's great, but how do they make money?" They're selling a cloud containers service while also giving the software away to their direct competitors for free. Maybe I was too ignorant to understand their business model? But now I'm thinking I was right.


Does that mean it would be a good idea to start moving to the podman ecosystem? RedHat/IBM seem to have this figured out better.

I'm doing that personally but I'm very hesitant about mentioning it to $job.


I'd like to see something resembling the Linux model. In the case of Docker, a foundation built around a suite of open source tools that's contributed to by pledges from all the big companies that use the tool. Maybe that means podman has a reliable source of funds for maintenance and improvement.

What I don't like is having these critical tools directly in the hands of a single for-profit corporation, at least where it can be avoided.


If we want that I feel like there should be a community buy out. Just so that they have something to return to investors. Just so that they have incentive to play nice and not Hudson-up the process. You shouldn't be able to build a critical piece of infrastructure and have nothing to show for it. Community buy-out should be a viable exit plan.


Do you remember SCO?


Honestly, this is reminding me of Oracle after buying Sun.


A co-op formed by big 3 cloud providers flush with cash and put in maintenance mode.


Docker was always ‘oh, we have too many moving parts in the deployment pipeline, let’s add Yet Another Thing. That will fix it.’ It never fixed anything.


Why didn't Docker ever offer managed container hosting? That seems like the obvious logical next step when you create a tool for easy deploys. Instead it's 2023 and we finally get that with Fly.io.

I must be missing something obvious, because otherwise I feel like I'm going insane.



Per https://news.ycombinator.com/item?id=10425097

> Tutum is not a real PaaS, it runs containers in the user infrastructure and can be easily adapted to run on-premise

Which doesn't sound like managed container hosting to me. Their site is gone though so I'm not really sure. There was 0 discussion around their shutdown, if they did have a managed solution I'm curious why it failed. Too early?


All references to it are redirects now, but here's a HN discussion on the topic[1]. Here's the content on the wayback machine[2]

1: https://news.ycombinator.com/item?id=16665130 2: https://web.archive.org/web/20180328234733/https://docs.dock...


Thanks for the link -- neither of those explain what Docker Cloud (nee Tutum) is but https://web.archive.org/web/20171003123425/https://docs.dock... has a bit more info.

> Docker Cloud provides ... tools to help you set up and manage host infrastructure

additionally

> When you use Docker Cloud to deploy nodes on a hosted provider, the service stores your cloud provider credentials and then deploys nodes for you using the services’ API to perform actions on your behalf.

That doesn't sound managed to me if you're still handling the underlying infra, the IaaS, IAM connections, watching host-to-container ratios, etc., but I guess host maintenance was at least limited to pressing a button to upgrade hosts (with downtime).


> My hope is that before the inevitable shuttering of Docker Inc another organisations (ideally a coop of some variety, but that's probably wishful thinking)

Indeed. We should all be equal in that venture: Ain't nobody here but us chickens.


So.... podman?

No, I don't work for redhat. I'm glad a ... ?less? corporate entity / ?more? open source entity has pretty much gotten a replacement up.


It packaged containers that we already knew from other UNIX and mainframe/micros systems.


Sir, computers are nothing but what we already knew from Alan Turing.


> Start publishing images to GitHub

And when GitHub starts similar shenanigans, move out to where? I am old enough to know that we can't trust BigTech and their unpredictable behaviors.

Eventually we need to start a Codeberg-like alternative using Prototype Fund grants to be self-reliant.

1: https://codeberg.org/ 2: https://prototypefund.de/


It actually sounds reasonable to me? They have an open source program, the article says its open source definition is "too strict" because it says you must have "no pathway to commercialization".

I mean why should you expect someone to host gigabytes of docker images for you, for free?


Well, it's how they established themselves in the market. Without being friendly to open source projects they wouldn't have had that marketing and wouldn't exist as a company.

So now they destroy their foundations and learn whether they 10x or fold. Pretty standard VC playbook so I assume that's the driving force here.


> the article says its open source definition is "too strict" because it says you must have "no pathway to commercialization"

What a load of crap. Free Software's "0th freedom" is the ability to use the program for whatever purpose you wish. The definition of Open Source is even looser than that. They are asking their "Open Source" users to make their software non-free, by restricting its use cases.

Anyway, the writing has been on the wall for a long while. If you haven't moved off Docker Hub yet, now is the time.


No, they are refusing to provide free artefact hosting to startups.


Gitlab's Open Source program has similar restrictions, and it's just kind of weird. Like, there are multiple companies actually making money off of Xen; but because Xen is owned by a non-profit foundation (with a six-digit yearly budget), and the foundation isn't trying to profit, it still qualifies. (As does, for instance, the GNOME project.)

OTOH, somewhere else in this context it was mentioned that curl is almost entirely maintained by one guy who makes money from consulting; and because of that, he wouldn't qualify.

So if you're either small enough to be a side hobby project, or large enough to have your own non-profit, you can get it for free; anywhere in between and you have to pay.

Personally I'd be happy for Xen to pay for Gitlab Ultimate, except that the price model doesn't really match an open-source project: we can't tell exactly how many people are going to show up and contribute, so how can we pay per-user?


While I have no _expectations_ of free hosting, one example of a project that will be affected is mine – https://hub.docker.com/repository/docker/outlinewiki/outline

I have been building this for 5+ years, and offer a community edition for free while the hosted version is paid. Once the community edition starts costing money there will be even less reason to continue supporting it; it already causes a lot of extra work and problems that I'm otherwise uncompensated for.


> Once the community edition starts costing money there will be even less reason to continue supporting it

This is exactly the reasoning Docker is using, so it seems reasonable?


If you're going to call it "open source" that should mean what "open source" usually means, i.e. that e.g. RedHat is eligible.


Somewhat related: what is Docker's stance on licenses that fail the first Open Source test, those that forbid commercial use (NC)?


It actually seems pretty reasonable to let BigTech host stuff, so long as you know the rug pull is going to come. Let the VCs light money on fire hosting the stuff we use for free, then once they stop throwing money at it figure out a plan B. Of course you should have a sketch of your plan B ready from the start so you are prepared.

If you view all of this "free" VC subsidized stuff as temporary/ephemeral you can still have a healthy relationship with it.


This is how I've been living for many years and it has saved me many thousands of dollars, which is a significant amount of money here. The various "cloud" free tiers cost them at least $600 for the past year alone. Same for free CI offerings, etc. Thank you VCs and BigCo for not cutting out regions that are probably net negative for you overall (I guess it may be serious money for me, but doesn't even register on the radar at their scale).


The economics of hosting an image registry are tough. Just mirroring the npm registry can cost $100s per month in storage for tiny little tarballs.

Hosting GB images in an append-only registry, some of which get published weekly or even daily, will burn an incredible amount of money in storage costs. And that’s before talking about ingress and egress.

There will also be a tonne of engineering costs for managing it, especially if you want to explore compression to push down storage costs. A lot of image layers share a lot of files, if you can store the decompressed tarballs in a chunk store with clever chunking you can probably reduce storage costs by an order of magnitude.

But, at the end of the day, expect costs for this to shoot into the 6-7 digit USD range per month in storage and bandwidth as a lower bound for your community hosted image registry.


you just have to host the recipe and the hash/meta-data

c'mon. This is not amateur hour. Hosting the whole thing only made sense for docker because their plan was always to do this microsoft style play.

If you assume everyone is either open source or a fully closed enterprise, the problem is very, very easy to solve, and cheap. Just relinquish full control of being able to close all the doors for a fee, like they are doing now.


Rerunning a glorified series of bash scripts on every install doesn’t work at any sort of scale. This is why we’ve moved away from configuration over ssh to either:

1) deterministic builds with centralized caches

2) snapshots

Docker has some features of (1) but is really (2).

You absolutely do not want to download the recipe to build a Docker image instead of the image itself.

Further, the base images have no recipe. They are a tarball of a slimmed down pristine base os install.


The funny thing is that I don't think anyone has ever had to pay a Linux distribution for hosting packages of their open source software... :thinking face:


It should use DHT/BitTorrent. Organizations could share magnet links to the official images. OS projects have been doing it for years with ISOs.


BitTorrent will solve the distribution problem but not magically provide more storage. Someone still has to foot the bill for storing gigabytes (or terabytes) worth of docker images.


Have a free client that seeds any images that you download, and a paid for one that doesn't. Now you have all those who don't want to pay providing your storage and bandwidth.


This is an excellent use of p2p incentives. Share to pay.

Tricky bit is, for some users, you’ll either abandon them with no way to share or you will still be paying their ingress/egress fees when their client falls back to your TURN server if NAT hole punching fails.

You’ll also have to solve image expiration gracefully. Hosting a “publish as much as you want” append-only ledger isn’t going to scale infinitely. There needs to be garbage collection, rate limiting, fair-use policies, moderation, etc. Otherwise you’re still going to outstrip your storage pool.


This doesn't make sense as an argument at all. If there isn't anyone using the image, no one will have it on their computer... Sure - but that isn't as much of an issue if you have a build file that constructs the image up from more basic parts. Secondly, popular files get way, way faster the more they're used/downloaded. Torrent is a phenomenal *wonderful* system to distribute Machine Learning weights, docker images, and databases. It's a developer's dream for a basic utility of distributing data. Potentially ipfs could be useful too, but idk much about it specifically.

One of the most revolutionary and fundamental tools to be made is a basic way / template / paradigm which constructs databases in a replicable way, such that the hash of the code is mapped to the hash of the data. Then the user could either just download the data or reproduce it locally, depending on their system's capabilities, and automatically become a host for that data in the network.


It should probably work like ipfs, with pinning services. You can pin (provide a server that stores and shares the contents) yourself, or pay for a commercial pinning service (or get one from an OSS-friendly org, etc).


I think something like IPFS would be perfect for that; you have some layers pulled into your storage anyway.

Big projects could self-host easily, as their popularity would quickly give them enough seeds to not need to provide much traffic themselves.

Also, I think docker's way of storing layers as tars is fundamentally broken; maybe in combination with something like ostree as a storage backend to decrease duplicates we could really cut a lot of storage.

Imagine how much unique content your average docker image has: 1 binary and maybe a few text files? The rest is probably OS and deps anyway.


We need to use a distributed system instead of a centralised one. Probably built on a source control system that can handle that.


May want to keep your eye on dragonfly, a P2P image distribution protocol: https://d7y.io/docs/


This seems like a perfect use case for IPFS.


We had a prototype Docker/BuildKit registry using IPFS at Netflix built by Edgar.

https://github.com/hinshun/ipcs


What happened to the old mirror lists? The ones where apt/rpm package repositories tend to be hosted?



I mean, why can't (or don't) we use those for things like docker images, npm registries and the like?

All this centralised dependence talk is frustrating (it's expensive, no free lunch, etc.) when it's largely been a solved problem for decades.


a collection of distributed systems. start your own. connect with other enthusiasts in your area to get them connected.


Yea, people are really spoiled due to more than a decade of VC and general investing cashburn offering tons of services for free. But at the end of the day there are costs and companies will want to recoup their money.

The problem with just replacing GitHub isn't the source code hosting part. There's tons of alternatives both commercial and open source. The problem is the cost of CI infrastructure and CDN/content/release hosting.

Even moderating said CI infrastructure is a nightmare. freedesktop.org, which uses a self-hosted gitlab instance, recently had to shut down CI for everything but official projects because the crypto-mining bots attacked hard and fast over the last few days.


I don't think we will receive enough donations to cover infrastructure costs, let alone maintainers' salaries.

Even core-js's sole maintainer failed to raise enough donations to feed his own family, despite the library being used by at least half of the top 1000 Alexa websites. [0]

People (and also big-techs) just won't pay for anything they can get for free.

[0]: https://github.com/zloirock/core-js/blob/master/docs/2023-02...


I guess the SQLite team managed to do it by using an even more permissive license than the GPL, which attracted big companies into funding them?


> And when GitHub starts similar shenanigans

The difference between GitHub and Docker is that GitHub is profitable.


Genuine question: Is GitHub profitable? I can't seem to figure out whether it was, either before or after the acquisition by Microsoft.


whether or not it's generating cash at the moment is probably a secondary concern for Microsoft, behind having the code to train on.


Profitable today. Moving from docker to github is just kicking the can down the road.


So is Docker Inc. The last I heard it is profitable and is doing quite well


Codeberg is stricter about blocking projects at the moment. Wikiless is blocked by Codeberg for using the Wikipedia puzzle logo but is still up and unchanged on GitHub.


Maybe we all start hosting this stuff via torrent or something?


Hosting images can be done via the hypertext transfer protocol alone. Keep a local copy of your dependencies and back that up. Done.
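
E.g., something along these lines (image/tag just an example):

    docker pull alpine:3.17
    docker save -o alpine-3.17.tar alpine:3.17   # archive it wherever you keep backups
    docker load -i alpine-3.17.tar               # restore it later, no registry involved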


My first thought on this was good riddance. The dev model of "we've lost track of our dependencies so ship Ubuntu and a load of state" never sat well.

However it looks like the main effect is going to be moving more of open source onto GitHub, aka under Microsoft's control, and the level of faith people have in Microsoft not destroying their competitor for profit is surreal.


> The dev model of "we've lost track of our dependencies so ship Ubuntu and a load of state" never sat well.

Docker, the company, is failing. Docker, as in containerization technology, is alive and very well.

(edited for clarity)


>> Docker, the company, is failing. Docker, the container technology, is alive and very well.

Is it though? Podman is more well liked (no daemon / non-root) and Kubernetes doesn't have direct support for it any more. I don't think it matters much that k8s uses CRI-O but docker needs to be #1 for running a container on a single machine. Yet, they seem to be letting that slip away because it is not directly monetizable. Software businesses need to be creative - invest a lot into free things, which support monetization of others. If you want low risk returns, buy T-bills.


> Is it though? Podman is more well liked (no daemon / non-root) and Kubernetes doesn't have direct support for it any more.

I've used "Docker" as in "containerization", since they are often used synonymously and the grandparents intent was definitely to criticize the latter. Docker itself will quite likely stay around as a name, but I have no faith in the company.


It seems the company is slowly trying to make users pay for it. Not too long ago it was free for companies, then they made companies pay to use it. Now they're making people pay to store images. In the next few years I would be surprised if they didn't introduce a new way to monetize it, leading up to removing the use of any docker executable at all without payment.

Many will come to comment "that's absurd, and you could just use an old executable you already downloaded prior to them halting it's circulation", etc. But I do think the writing is on the wall here with Docker continually getting greedy. If they don't monetize the use of Docker containers in general by making users pay to run them, they have other options like spyware and ads - e.g. install telemetry in the base of the system somehow to sell the personal data they receive from all images*, etc.

* I know this may not work directly as I've stated it, just giving the flavor of idea


> It seems the company is slowly trying to make users pay for it

I appreciate that pun ;)

> Many will come to comment "that's absurd, and you could just use an old executable you already downloaded prior to them halting it's circulation", etc.

I don't see this as absurd at all. The problem Docker has is that it failed to use its market share. Making these moves now that easy alternatives exist (Podman, open registries, k8s etc) just doesn't have the pull and alienates their remaining customers.


I don't have any issue with Docker charging for things. If they can get away with charging $0.50 every time someone types docker ps, great. Unfortunately that isn't the reality as competition exists. Eventually bad monetization strategies will lead to business decline.


The company has never been good though. The technology was great, but they never found a way to monetise it properly, and their whole approach to outreach, community, and developer experience was terrible. Their support, non-existent.

In fact, I'd go as far as to say that, given the ubiquity of their product, I can't think of a worse way a company could have performed. It's been about 10 years now since it really took off, and in that time the technology has been great, but dealing with the company has always been difficult.


> The technology was great

Nah, you just didn't look carefully enough. Docker the technology has been a recurring amateur hour of screwups. For example, they hashed the downloaded data but forgot to compare the hash to the expected value.

The only thing "great" about Docker was the rough idea of easy-to-transport filesystem images that could run just about anywhere, and the fact that they managed to make that kind of thinking mainstream.


I disagree with you. I've used it in production for almost a decade now. There have been issues sure, but name me a single production-level technology that hasn't had problems?

- Microsoft Windows: Various versions of Windows have had critical security vulnerabilities over the years, leading to widespread malware outbreaks like WannaCry and NotPetya.

- OpenSSL: In 2014, the Heartbleed bug was discovered in OpenSSL, which left millions of websites vulnerable to attacks that could steal sensitive data.

- Apache Struts: In 2017, the Equifax data breach was caused by a vulnerability in Apache Struts, a popular open-source framework for building web applications.

- Boeing 737 Max: In 2018 and 2019, two deadly crashes were caused by a software flaw in the flight control system of the Boeing 737 Max airplane.

- Google Cloud: In 2020, a widespread outage of Google Cloud services caused disruptions for many businesses and organisations that rely on the platform for their operations.

Should I continue?


Many things have had individual bad incidents. Docker has had many, and they've been of an incredibly naive kind. To compare Docker to the kind of careful engineering done in e.g. airplanes is just silly. The commits to Docker ("Moby") have historically been underwhelming. Docker is to containers what MongoDB was to NoSQL.


Docker either came up with or popularized the idea that completely dominates how almost all services work today. That is something imo. It is obvious in hindsight maybe, but it wasn't in 2012.


The docker runtime is not supported by k8s anymore, but docker-built containers still work and will very likely continue to work for a long time.


Tell that to svb


>The dev model of "we've lost track of our dependencies so ship Ubuntu and a load of state" never sat well.

This was my first thought when I learned of Docker.

I have a hard time calling myself an 'Engineer' when there are so many unknowns, that I'm merely playing around until something works. I insist on being called a Programmer. It pays better than 'real' engineering. Why not embrace it? (Credit toward safety critical C and assembly though, that's engineering)

EDIT: Programmer of 15 years here


As someone that does what you consider "engineering", my current project is containerized because I've lost too much of my life to debugging broken dev environments. The person who ships safety-critical releases at most companies isn't a developer that's deeply familiar with the code; it's usually a distracted test engineer with other things to do who may not be very tech-savvy. Anything I can do to help them get a reproducible environment is great.


I remember back around 2009, our org had this horrible open source program, which shall not be named, foisted upon us by an excitable bloke who had heard wondrous things from the university where it had been developed. Well, we had a bitch of a time getting it running. Instructions and build scripts were minimal. We thrashed around.

I noted to someone that this felt less like a product and more like a website and set of scripts ripped from a working system. A few of us were shipped up to the originating university for a week to hobnob with the people in charge of it. Toward the end, during the ritual inebriation phase, I managed to find out that they had never actually attempted to install it on a clean system. This had truly been ripped from a working system. And I thought to myself, "How horrible."

Now, I am admittedly pants at Linux. No good at all. But there is something about Docker and similar technologies that says, "Yes, we threw our hands in the air and stopped trying to make a decent installation system."


> "Yes, we threw our hands in the air and stopped trying to make a decent installation system."

I mean what do you expect, it goes along so nicely with "Yes, we threw our hands in the air and stopped trying to architect coherent software systems."


I doubt the real™ engineers at NASA and SpaceX know everything their proprietary closed-source Matlab-to-FPGA tooling is actually doing under the hood.


Can't say for SpaceX, but most NASA glue is built in house. Close to zero proprietary parts.

Maybe the VLSI is closed, but that is "industry standard" I guess. The rest is a bunch of mathy-language du jour held together with python or something.

...opaque docker containers going to prod don't have an excuse other than inefficient orgs fueled by VC or ad money. Or maybe they do, but you won't excuse them using NASA as an example :)


Developer/Programmer/Engineer titles are mostly meaningless because they mean different things at different companies. You can go wayback and call yourself a coder.


what's old is new again. the teenagers nowadays call themselves "coders" because "programmer" is for old people.


What state are you thinking of? The containers are ephemeral and the dependencies are well specified in the image. You can complain about shipping Ubuntu, but the rest of this doesn’t make sense.


Makes perfect sense to me, sadly. The dependencies are specified excessively; that's why everyone is shipping Ubuntu. This is caused by, and further facilitates, the development style of "do not track what we use, just ship everything". Also, the dependencies are specified in container images, which themselves are derivative artifacts and not the original source code, and these dependencies often change in different container builds with no explicit relevant change.

There are three practical problems as a result:

- huge image sizes, with unused dependencies delivered as part of the artifact;

- limited ability to share dependencies, due to the inheritance-based model of layers instead of the composition-based model of package managers;

- non-reproducibility of docker images (not containers), due to loosely specified build instructions.

Predicting future comments: nix mostly fixes these issues, but it has a bunch of issues of its own. Most importantly, nix is incredibly invasive in the development process, and adopting it requires heavy time investment. Containers also provide better isolation.


It doesn't have to be a choice of containers or nix though. You can put your nix built applications into a container just fine. You can also pull an image from somewhere else and shove nix stuff into it as well.

There is definitely a bit of a learning curve but the time investment is frequently over exaggerated. I see it as similar to the borrow checker in rust. Yes, you have to spend some time and also learn about the rules. But it helps you build software that is more robust and correct. Plus once you're into it you save significant time not having to deal with dependencies especially when bringing on new people


This is true, I can put nix apps in a container. It improves the reproducibility of builds, but it still wastes disk space, because the container is still based on layers and not packages.

> There is definitely a bit of a learning curve but the time investment is frequently over exaggerated

I'm not talking about the learning curve and its time investment, I'm talking about design problems. Nix's invasiveness is completely unnecessary in modern Linux: it makes its installation a very special case and requires lots of patches just to get stuff to work in nix. The fact that nix patches built binaries so that they point to the correct shared library locations is a crutch which shouldn't be there in the first place.

It also tries to reimplement pretty much every package manager and build tool, even the ones that already work well and provide reproducibility guarantees, including cargo, poetry, and npm/yarn. This is a time investment, but it doesn't help me build software that is more robust and correct; that part is already handled for me. Instead, it just worsens the DX, as it forces me to use tools non-native to the ecosystem without first-class support for commonly used features.


> Most importantly, nix is incredibly invasive in the development process, and adopting it requires heavy time investment.

Typically yes, but Nix actually allows you to be less pure to save time and pick your most economic point on the reproducibility continuum.

I'm fairly sure there was an article about this... ah here it is:

https://www.haskellforall.com/2022/08/incrementally-package-...


This isn't what I was talking about; I'm all for being as pure as possible, dialing the reproducibility and isolation to the max. Unfortunately, Nix itself as an application is not isolated. It requires a unique installation process to be available for users, because it wants to manage its store at the root level (/nix/store/), but I hear the situation is different on macOS. Applications packaged with Nix also require special treatment to run in Nix environment, with paths rewritten and binaries patched to support Nix filesystem structure instead of the traditional Linux one.


> It requires a unique installation process to be available for users, because it wants to manage its store at the root level (/nix/store/)

Yes, for cache hits to happen it has to be this way as far as I remember.

There is a project called nix-portable though that I've seen some HPC users report success with:

https://github.com/DavHau/nix-portable

> Applications packaged with Nix also require special treatment to run in Nix environment, with paths rewritten and binaries patched to support Nix filesystem structure instead of the traditional Linux one.

If you fully package it. If you use something like a buildFHSUserEnv[0] that's not true.

There are also nix-autobahn and nix-alien for automatically running foreign binaries on a more ad-hoc basis or to generate a starting point for packaging.

0: https://nixos.org/manual/nixpkgs/stable/#sec-fhs-environment...


The truth is Docker (the company) could never capitalize on the success of their software. They clearly need the money, and I have the impression things have not been "great" in the last couple of years (regardless of the reasons).

The truth is also that most people/organizations never paid a dime for the software or the service, and I'm talking about billion-dollar organizations that paid ridiculous amounts of money for both "DevOps Managers" and consultants while the actual images they pull come either from "some dude" or some open source org.

I get that there will be many "innocent victims" of the circumstances but most people who are crying now are the same ones who previously only took, never gave and are panicking because as Warren Buffett says: "Only when the tide goes out do you discover who's been swimming naked."

And there are a lot of engineering managers and organizations who like to brag with expressions like "Software supply chains" and we'll find out who has been swimming with their willy out.


I think it's also a product of the larger economic environment. The old model of grow now and profit later seems to be hitting a wall, leaving companies scrambling to find profit streams in their existing customer base, not realizing that doing so will hinder their growth projections, leading to more scrambling for profit.

It's a vicious cycle, but when you don't grow in a sustainable way it seems unavoidable.


The only real moat they seem to have here is that "FROM" in a Dockerfile, "image:" in a docker-compose.yml file, and the docker command line all default "somestring" as an image to "hub.docker.com:somestring".

They pushed that with the aggressive rate limiting first though, which caused a lot of people to now understand that paragraph above and use proxies, specify a different "hub", etc.

So this move, to me, has less leverage than they might have intended, since the previous move already educated people on how to work around docker hub.

At some point, they force everyone's hand and lose their moat.


x/y expands to docker.io/x/y and z expands to docker.io/library/z
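
Concretely (the org/tool names below are made up):

    docker pull ubuntu                     # really docker.io/library/ubuntu:latest
    docker pull someorg/sometool           # really docker.io/someorg/sometool:latest
    docker pull ghcr.io/someorg/sometool   # fully qualified, never touches Docker Hub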


Right, it's a little different than my summary, but the main point was that they educated everyone that there's a way around that with specific image names, or a proxy, etc. If they push hard enough, the internet will route around them, distros will ship a patched docker, preset environment variables, or a docker->podman alias, etc. They will lose control over the "root namespace".


It was always unbelievable to me how much they hosted for free. I recklessly pushed over 100 GB of containers over the last few years, all free. Never made sense to me; even google doesn't do this anymore.


There are techniques to compress and dedup redundancies... I doubt it is a real 100 GB on their disks...


It's still 100gb over the wire and those bandwidth costs add up, especially if it's a popular image used by tons of projects.


yes, but the downward traffic costs (via docker pull) are likely their main expense, not the upward transfer.


Even so...storage is not free.

Looking at the rates of enterprise storage costs compared to what Google or Apple charges consumers - I was surprised by how subsidized people's photo libraries are.


They are not, but actual usage is typically a single-digit % of promised space. So power users are served at cost (or even at loss), but the overwhelming majority of users are actually overpaying for what they use.


I just compared the price of 2TB on Backblaze B2 and Google Drive; they were roughly the same. Google doesn't charge for bandwidth, but it's also against the ToS to do anything that would result in lots of downloads anyway.

Google also charges a flat rate for 2TB, with the next lowest plan being 200GB. So the majority of users are paying for 2TB but not using anything close to that much. I suspect consumer storage is also much easier to offload to hard drives and tape backups, while files on S3/B2 would mostly require SSDs, with some probably being stored in RAM.


oh that's completely different. They want you to host your photos with them because then you can never leave their platform.


And it's different in more ways than one. Hosting the images is just a fraction of the features that you get with Apple or some other provider. Searching, albums, and sharing are all baked-in services that are still cheaper than, say, going through S3 and having a bucket with similar storage.


Apple does a pretty bad job of it then, because I have a local copy of my entire photo library too on an external hard drive. It’s quite nice really, cloud storage plus a local copy. I guess it’s somewhat of a moat because switching to some other cloud provider or my own system will be more expensive?


obviously hn readers will know how to copy the photos to their own computer. most people won't and that's the point


No I’m saying that Apple Photos literally copies to a local library automatically if you have a mac and open Photos.app. I didn’t even have to set it up. My partner did the same thing, and she’s decidedly not tech savvy. It’s one of the only reasons I ended up paying for Apple One for the 2TB cloud storage, is because of how easy they made it.


We are even using Docker Hub to store and distribute VM images... The so-called "container disk image" format is sticking a qcow2 file in a Docker image and storing it on a Docker registry.

https://github.com/kubevirt/kubevirt/blob/main/containerimag...
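For the curious, the trick is roughly this (a hedged sketch of the KubeVirt containerdisk convention; the exact base image and ownership flags are in the linked repo):

    FROM scratch
    # The qcow2 is the only payload; KubeVirt expects it under /disk/ at runtime
    # (the project's examples also set ownership on the file, omitted here).
    ADD my-vm.qcow2 /disk/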


There is a way to go from that status quo to a new and sustainable one; it requires actual engineering of the state change.

That has not seemed to have happened here, or not happened well.

Nobody expected it to be free forever; I think we expected the transition to be a lot more orderly. There have been years to prepare.


After Docker announced rate limiting for the Hub, this was an anticipated move. It was just a matter of time.

The only recommendation to everyone: move away or duplicate.

One of the strategies I have yet to test is synchronizing protected branches and tags between GitLab and GitHub and relying on both of their container registries. That way you provide (at least) multiple ways to serve public images for free and with relatively little hassle.

And then for open source projects' maintainers: provide a one-command way to reproducibly build images from scratch, so users can serve them from wherever they want. In production I don't want to depend on public registries at all, and if anything I must be able to build images on my own and expect them to be the same as their publicly built counterparts. Mirroring images is the primary way, reproducing is the fallback option and also helps to verify the integrity.
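As a concrete sketch of the mirroring half, crane (from go-containerregistry) can copy an image between registries and confirm both sides resolve to the same digest; the image names below are just examples:

    # Mirror an image, layers and manifest intact, from Docker Hub to GHCR.
    crane copy docker.io/library/alpine:3.17 ghcr.io/example-org/alpine:3.17

    # Verify integrity: both references should print the same digest.
    crane digest docker.io/library/alpine:3.17
    crane digest ghcr.io/example-org/alpine:3.17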


> Mirroring images is the primary way, reproducing is the fallback option and also helps to verify the integrity.

I suspect the latter will become more common over time. I can count on no fingers the number of open source projects I've encountered that ship production-grade container images. Once you need to think about security you need to build your own containers anyway, and once you've done that you've also removed the concern of a public registry having issues at an inopportune moment.


Some self promotion but I have built a project that aims to solve some of these issues in Kubernetes. https://github.com/xenitAB/spegel

I have avoided a couple of incidents caused by images being removed or momentarily not reachable with it. It would at least mitigate any immediate issues caused by images being removed from Docker Hub.


To me this smells of VC model issues.

Initially it's great if you can get all the FOSS to play in your technology walled garden. Subsidize it with VC cash.

Downside is it generates a ton of traffic that is hard to monetize. Sooner or later it reaches a point where it can't be subsidized and then you get pay up or get out decisions like this.

One question I haven't seen asked yet: why 420 USD? Is that what it costs to serve the average FOSS project, or is that number a bad Elon-style joke? If they came out with "We've calculated X as the actual cost. We're making no margin on this, but we can't keep giving it away for free", that would go down a lot better, I think.


Without entering into the specifics of this situation, I don't understand the hate for Docker the company. They are providing a huge service for the community and looking for ways to make money from it to make it sustainable. I would give them a bit more empathy and benefit of the doubt as they iterate on their approach. Somewhere, somehow, someone has to pay for that storage and bandwidth, whether directly or indirectly. (I am old enough to remember what happened with SourceForge, so I'd rather they find a model that works for everyone.)


It's a long-standing grievance for me that isn't limited to just Docker: companies that used "we're free" to obtain massive growth, only to turn around and completely switch monetization models once they've become the dominant player in the market. It's a massive distortion of the market, driving companies that tried to be fiscally sound from the start into irrelevancy while extremely inefficient ventures become the market leaders on account of superior funding.

Or to put it another way, Docker should have been focused on sustainability from the start and not dangled a price they knew couldn’t last in front of people to increase adoption.


I agree they deserve to get paid, but there are better ways than essentially holding customers’ data and URLs hostage. The problem is they are trying to extract money from other open-source developers who are at least as cash-strapped as them.

Plus, I doubt they will get many people to actually start paying. People will simply move to other storage (like Github) and switch the URLs. Docker is fully open-source and works without docker.io, they don’t really have a position here except owning the name.

IMO they just need to edit / clarify that open-source developers and organizations won’t need to pay, only those who presumably should have the funds. And take a more passive stance: bug people with annoying messages like Wikipedia does, and threaten shutting down docker.io altogether if they don’t somehow get funding (some people will complain about this too but more will understand and will be sympathetic). Wikimedia, Unix/Linux, Mozilla, etc. as well as Homebrew/cURL/Rust all seem to be doing fine as nonprofits without creating huge controversies like this.


If you inconvenience all users (by devastating the "ecosystem" of publicly available images) in order to extort money from a few users (some organizations will pay up, at least temporarily) you should expect hate.

The only benefit of the doubt Docker deserves is on a psychological plane: evil or stupid?


Docker should never have become a business. There’s virtually nothing there to make a business around, it’s a suite of useful utilities that should have remained a simple open source project. I switched to podman a while ago and haven’t looked back.


Docker Hub does host images running into several GBs for even small hobby projects, and they also bear network transfer costs. Even with podman, you're going to have to host your images somewhere, right?

Right now, the internet infrastructure heavily relies on the good graces of Microsoft (Github, npm), and storage space and network transfer charges are taken for granted.


The design of Docker distribution is poor because the company backing it wants to retain control.

Torrent-based distribution for open source projects and other public initiatives existed long before Docker.

Apt mirroring has also been around for a long, long time, and checksum-based integrity verification of mirrors has well-established workflows.

We don't need the good graces of any company to distribute assets.


Can we just get the big three cloud players to make a new public repo? They’ve got oodles of bandwidth and storage, plus the advantage that a lot of access would be local to their private networks.

Set up a non-profit, dedicate resources from each of them spendable as $X of credits, and this problem is solved in a way that works for the real world. Not some federated mess that will never get off the ground.


Consensus on a new repo for public community images would help, but it isn't the biggest problem (as the author notes, GHCR does that already, and GitHub seem pretty committed to free hosting for public data, and have the Microsoft money to keep doing so indefinitely if they like).

The issue I worry about is the millions of blog posts, CI builds, docker-compose files, tutorials & individual user scripts that all reference community images on Docker Hub, a huge percentage of which are about to disappear, apparently all at once 29 days from now.

From a business perspective particularly, this looks like suicide to me - if you teach everybody "oh this guide uses Docker commands, it must be outdated & broken like all the others" then you're paving a path for everybody to dump the technology entirely. It's the exact opposite of a sensible devrel strategy. And a huge number of their paying customers will be affected too! Most companies invested enough in Docker tech to be paying Docker Inc right now surely use >0 community images in their infrastructure, and they're going to see this breakage. Docker Inc even directly charge for pulling lots of images from Docker Hub right now, and this seems likely to actively stop people doing that (moving them all to GHCR etc) and thereby _reduce_ the offering they're charging for! It's bizarre.

Seems like a bad result for the industry in general, but an even worse result for Docker Inc.


Yeah that's going to be the real issue, all the niche unmaintained images that no one is going to pick up the pieces for.

They're taking a big chunk of open source and tossing it in the garbage.


AWS already has one: https://gallery.ecr.aws/


Unlimited free downloads inside AWS. First 5TB of outbound transfer free. Then $0.09/GB for additional transfer.

https://aws.amazon.com/ecr/pricing/


My team used ECR for some stuff and it’s not great. We want to move on from it.


My team also uses ECR, but I've got no complaints. What issues do you have with it?


It's constantly getting rate limited in CI, and then we have to re-run tests due to flaky failures. Maybe it's an easy fix or something we did wrong, but we have a parallel system on Docker Hub that just works™.


quay.io is a pretty popular general-purpose repo, it replaced docker.io for many projects when they started rate-limiting.


There is no free tier https://quay.io/plans/


> Can I use Quay for free?
>
> Yes! We offer unlimited storage and serving of public repositories. We strongly believe in the open source community and will do what we can to help!

It's completely free for public repositories.


Wow! Didn't notice at all since its at then of the page.


The end*


I suppose BitTorrent for Images should be a thing (again?)

Discussions of decentralization and redundancy always come up in software/system design and development, but we seem to always gravitate to bottlenecks and full dependency on single entities for the tools we "need".


Good point. A dual system would be ideal: HTTP to ensure coverage, BitTorrent to get network effects on the popular images.

Presumably a few images - nginx and whatnot - account for a high percentage of pulls.


Many (most?) BitTorrent clients support web seeding[1] by a regular HTTP server.

A public registry could do web seeding, with bandwidth restrictions if needed, to ensure availability.

[1]: https://en.wikipedia.org/wiki/BitTorrent#Web_seeding
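A rough sketch of what that could look like for a single image (assuming mktorrent; the tracker and mirror URLs are placeholders):

    # Export the image to a tarball, then create a torrent whose web seed is a
    # plain HTTP mirror, so the host backs the swarm when peers are scarce.
    docker save -o alpine-3.17.tar alpine:3.17
    mktorrent -a udp://tracker.example.org:1337/announce \
              -w https://mirror.example.org/images/alpine-3.17.tar \
              -o alpine-3.17.torrent alpine-3.17.tar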


You could even do some BT freeriding on top of multiple HTTP sources.


Could IPFS possibly be a good distributed (and free?) storage backing for whatever replaces Docker Hub for open source, as opposed to using something like GitHub? We'd still need a registry for mapping image names to CIDs, along with users/teams/etc., but that simple database should be much cheaper to run than handling the actual storage of images and the bandwidth for downloading them.


Probably. You still need to store and serve the data somewhere of course but for even moderately successful open source organizations they will likely find volunteer mirrors. The nice thing about IPFS is that new people can start mirroring content without any risk or involvement, new mirrors are auto-discovered, like bittorrent.

It seems like the Docker registry format isn't completely static, so I don't think you can just use a regular HTTP gateway to access it, but there is https://github.com/ipdr/ipdr, which seems to be a Docker registry built on IPFS.

> We'd still need a registry for mapping the image name to CID, along with users/teams/etc.

IPNS is fairly good for this. You can use a signing key to get a stable ID for your images or if you want a short memorable URL you can publish a DNS record and get /ipns/docker.you.example/.

Of course now you have pushed responsibility of access control to your DNS or by who has access to the signing key.
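A minimal sketch of that flow with the plain kubo CLI (the tarball approach and names are just for illustration; ipdr above does the same thing for registry-format images):

    # Add and pin the content locally; this prints a CID.
    ipfs add --cid-version=1 alpine-3.17.tar

    # Create a signing key and publish a stable IPNS name pointing at the CID.
    ipfs key gen images
    ipfs name publish --key=images /ipfs/<CID>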


IPFS is the same kind of "free" that Docker provides: someone, somewhere is paying for the storage and network. The public IPFS network would likely not support the bandwidth, the volume, or most CSOs.


I posted this in the other thread already but will also add it here. https://news.ycombinator.com/item?id=35167136

---

In an ideal world every project would have its own registry. Those centralized registries/package managers that are baked into tools are one of the reasons why hijacking namespaces (and typos of them) is even possible and so harmful.

Externalizing hosting costs to other parties is very attractive but if you are truly open source you can tell everybody to build the packages themselves from source and provide a script (or in this case a large Dockerfile) for that. No hosting of binary images necessary for small projects.

Especially since a lot of open source projects are used not by other OSS but by large organizations, I don't see the need to burden others with the costs for these businesses. Spinning this into "Docker hates Open Source" is absolutely missing the point.

Linux distributions figured out decades ago that universities are willing to help out with decentralized distribution of their binaries. Why shouldn't this work for other essential OSS as well?


Does anybody know whether there could be something like an open/libre container registry?

Maybe the cloud native foundation or the linux foundation could provide something like this to prevent vendor lock-ins?

I was coincidentally trying out Harbor again over the last few days, and it seems nice as a managed or self-hosted alternative. [1] After some discussions we'll probably go with that, because we want to prevent another potential lock-in with Sonatype's Nexus.

Does anybody have similar migration plans?

The thing that worries me the most is storage expectations, caching and purging unneeded cache entries.

I have no idea how large/huge a registry can get or what to expect. I imagine alpine images to be much smaller than say, the ubuntu images where the apt caches weren't removed afterwards.

[1] https://goharbor.io


It's all open source software. Stupidly simple and easy to host. It's a low-value commodity that anyone can trivially self-host. All you need is a Docker-capable machine (basically any Linux machine) and some disk space to hold the images, plus a bit of operational work: monitoring, backups, etc. So there's an argument to be made for using something that's already there, convenient, and not too costly, which until recently was Docker Hub. But apparently they are happy to self-destruct and leave that to others.
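For reference, the baseline really is just the CNCF Distribution registry in a container plus some disk (a minimal sketch; the hostname and volume path are placeholders, and a real deployment also needs TLS, auth, and the operational work mentioned above):

    # Run the reference registry as a single container, storing layers on local disk.
    docker run -d --name registry -p 5000:5000 \
      -v /srv/registry:/var/lib/registry \
      registry:2

    # Retag an image against the new registry and push it there.
    docker tag alpine:3.17 registry.example.com:5000/alpine:3.17
    docker push registry.example.com:5000/alpine:3.17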

They should take a good look at Github. If only for the simple reason that it's a natural successor to what they are offering (a free hub to host software for the world). Github actually has a container registry (see above for why). And of course the vast majority of software projects already uses them for storing their source files. And they have github actions for building the docker images from those source files. Unlike dockerhub, it's a complete and fully integrated solution. And they are being very clever about their pricing. Which is mostly free and subsidized by paid features relevant to those who get the most value out of them.

I like free stuff of course. But I should point out that I was actually a paying Github user before they changed their pricing to be essentially free (for small companies and teams). I love that of course but I was paying for that before and I think they were worth the money. And yes, it was my call and I made that call at the time.

Also worth pointing out that GitHub Actions builds on top of the whole Docker ecosystem. It's a valuable service built on top of Docker. Hosting the Docker images is the least valuable thing, and it's the only thing Docker Hub was ever good for. Not anymore, apparently. Unlike Docker Hub, GitHub figured out how to create value here.


Yeah, honestly the answer is still Github. I entered this industry in 1998. I remember why people don't like Microsoft. I avoid Windows like the plague and have for many years. But Github has never been anything but good for developers and open source, and that hasn't seemed to change. Microsoft can afford to fund stuff like this for developer goodwill alone, and spends much more than that already for that reason. It's not the greatest model imaginable for the sustainability of open source, but there is absolutely no indication as of yet that it's a really bad one. I mean, who the hell wants to go back to SourceForge, even before it was bought and turned into a cesspool?


> Stupidly simple and easy to host.

At scale, serving tens of thousands of users? Unfortunately not.


If you have that issue, it's a solvable problem. Most companies simply don't and can only dream about that. Why pay until you actually get there?


Many people are quite upset. But on the other hand, how many years could this work? Petabytes of data and traffic.

When we started to offer an alternative to Docker Hub in 2015-2016 with container-registry.com, everyone was laughing at us. Why are you doing that, you are the only one, Docker Hub is free or almost free.

Owning your data and having full control over its distribution is crucial for every project, even an open source one.


I'm not too familiar with Docker infrastructure at large, but could Docker Hub in principle act just as a namespace, such that open source projects could have their images hosted elsewhere and Docker Hub just redirects to them, saving the bandwidth for those pulls?

I suppose there are hand-wavy business reasons not to do that, but somehow I feel that would:

  1. Still keep themselves in the loop and relevant, owning the namespace/hub/main registry
  2. Offset the costs of those that they don't want to deal with (push them to ghcr or whatever)
  3. Preserve some notion of goodwill by not breaking the whole dockerverse


It is my understanding that Microsoft has previously tried to purchase Docker. Despite me having problems with companies buying up each other, I wouldn't be surprised if Microsoft revisits, or already is revisiting, buying Docker.

Being a heavy Visual Studio Code user, I have centered my personal development around Docker containers using VS Code's Devcontainer feature, which is a very, very nice way of developing. All I need installed is VS Code and Docker, and I can pull down and start developing any project. (I'm not there yet for all my personal projects, but that's where I'm headed.)


As I commented in yesterday's thread, this is not the first time Docker is pulling the plug on people with very short advance notice: See https://news.ycombinator.com/item?id=16665130 from 2018

Someone back then even wondered what would happen if such a change happened to Docker Hub https://news.ycombinator.com/item?id=16665340 and here we are today.


As long as we don’t share ownership in these platforms, nothing will ever truly belong to us. For Docker, the software, a Libre alternative is Podman. Instead of GitHub, use Codeberg, an open organization and service.

Now we need a Docker registry cooperative owned by everyone.


"Now we need a Docker registry cooperative owned by everyone."

You can build a pyramid with a non-profit, but that does not mean it is owned by everyone.

Pyramids are problematic, as governance is a nightmare.


Hmm.

> GitHub's Container Registry offers free storage for public images.

But for how long?


> But for how long?

the subtitle of the web era


The hippies were right


Unlike Docker Inc, GitHub (via Microsoft) do have very deep pockets & their own entire cloud platform, so they can afford to do this forever if they choose.

And their entire marketing strategy is built around free hosting for public data, so it'd take a major shift for this to disappear. Not to say it's impossible, but it seems like the best bet of the options available.

Is it practical to set up a redirect in front of a Docker registry? To make your images available at example.com/docker-images/abc, but just serve an HTTP redirect that sends clients to ghcr.io/example-corp/abc? That way you could pick a new host now, and avoid images breaking in future if they disappear or if you decide to change.
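One way to try it would be a plain reverse-proxy rule that answers the Registry V2 API paths with redirects. A sketch only, with placeholder hostnames; I haven't validated how every client handles redirects plus token auth against the new host, which is exactly the open question:

    server {
        listen 443 ssl;
        server_name example.com;
        # ssl_certificate / ssl_certificate_key omitted; clients require TLS.

        # Registry API version check.
        location = /v2/ {
            return 200;
        }

        # Bounce manifest/blob requests for docker-images/* over to GHCR.
        location ~ ^/v2/docker-images/(.*)$ {
            return 307 https://ghcr.io/v2/example-corp/$1;
        }
    }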


> And their entire marketing strategy is built around free hosting for public data

    1. Embrace
    2. Extend       <- you are here
    3. Extinguish
it's bonkers when people innocently trust Microsoft to do the right thing


This would make sense if it wasn't a core feature of GitHub LONG BEFORE Microsoft bought them.

Can we stop this madness already?


No, because Microsoft did already buy them?


A simple vanity domain solution, like for Go packages, seems like it could work. Just redirect the whole URL directly to the actual registry's URL.

I don't know if container signature tools support multiple registry locations though.


I like where your head is at; I found this [1] and it makes a case that an attack vector may be created.

[1] https://stackoverflow.com/a/67351972


That's different - that's about changing the _client_ configuration. I'm looking to change the server instead, so that the client can use an unambiguous reference to an image, but end up at different registries depending on the server configuration. In a perfect world, Docker Hub would let you do this to migrate community projects away, but even just being able to manually change references now to a registry-agnostic URL would be a big help.

Shouldn't be any security risk there AFAICT. Just hard to tell if it's functionally supported by typical registry clients or if there are other issues that'd appear.


> so they can afford to do this forever if they choose.

If they choose. It's in fashion right now to fire people and squeeze free tiers.


They have a pretty good track record in code hosting (15 years), why would they ruin it for containers?


> But for how long?

applies very much to Github, and to all of their services, not just containers. The elephant in the room hardly needs to be mentioned, I would think (Microsoft). We are only a few years into that regime.


You can say a lot of bad stuff about Microsoft, but they are not known for randomly and suddenly killing off their services.


Serious question, though: how many services have they offered over the years that were free to anyone?

Free to existing Windows users, perhaps, but free to the world doesn’t seem like something Microsoft was historically in a position to offer, much less later kill.


Hotmail/outlook.com and OneDrive are two services that Microsoft has been offering for decades, for free (with a storage limit, but that’s nothing weird), no Windows required.


Not sure why those didn’t occur to me, other than that I’ve never used them. Thanks.


It's not randomly and suddenly killing off services, but rather suddenly (and not randomly) changing the pricing structure when companies have been committing to the point of lock-in for years on the free tier.


The 6-8 orders of magnitude difference in storage required?


We need a culture of not using resources just because they are available and cheap. Use GitHub as if it weren't free.


Github shutting down is not on my risk matrix. Hopefully it happens after I retire.


It's on ours where I work, which is why we use Gitea to clone the various open source projects we use to our own servers. With binary level deduplication the amount of data stored is actually incredibly small. I think the unduplicated storage is like 50GB, and the deduplicated is like 10GB?


Docker had the chance to be baked into nearly every enterprise tech stack and extract money accordingly; instead, every couple of years they take time out to torment users. They will go down as one of the biggest missed plays in modern software.


Gitea has support for packages, so if you don't want to use GitHub or something similar, you could simply self-host your packages.
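The push flow against a Gitea instance is the standard Docker one (hostname and owner below are placeholders; the container registry has to be enabled in the instance's packages settings):

    docker login gitea.example.com
    docker tag myapp:1.0 gitea.example.com/myorg/myapp:1.0
    docker push gitea.example.com/myorg/myapp:1.0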


Then you have to pay for distribution, which is what people are using Docker Hub to avoid (grift off of?)


The ongoing Docker saga has convinced me I’m never, ever, ever making a product for developers.


Welcome to the bane of targeting an educated or skilled audience.


Docker have a responsibility to _their_ customers, not just OpenFaas and other open source projects. _Docker's_ customers rely on them to provide a safe and reliable service. If Docker allows these projects to be taken over by nefarious actors, then the risk falls to _their_ customers, not the Open Source projects that they've broken with.


> Has Docker forgotten Remember leftpad?

Anyone who takes even a brief glance at the absurdly yolo identity, upgrade, and permissions model Docker encourages should be able to answer this with an immediate "obviously they don't care".

The faster this implodes, the faster we get a safer setup where we don't blindly trust everything.


Docker is shooting itself in the foot. Oddly I decided to put on the docker shirt from one of their 2016 hackathons today before reading the news. I'm embarrassed to own this shirt and will throw it away after today.

RIP Docker, your former self will be missed while your current self will be loathed.


Side note: what did happen to Travis? I was just googling them yesterday because they were everywhere. They even came with the GitHub education package.

Did GitHub just eat them?


Travis CI got acquired by Idera in 2019 (https://news.ycombinator.com/item?id=18978251) and then a month later laid off senior engineering staff (https://news.ycombinator.com/item?id=19218036).


Damn. They were my introduction to CI/CD. Such is life


It wasn't a good business to be in anyway. I don't think any of these freebie devops businesses are all that smart. They're not a "business", they're a feature of someone else's business. And as soon as that someone catches up, you're done.


Also they're surprisingly expensive: things like spam and cryptocurrency mining mean that you need a fairly large abuse-prevention team, which is expensive but has no customer-visible benefits. GitHub has that too, but as you said, they at least have the rest of the business with which to recoup that cost.


Travis was acquired a few years ago and things went downhill from there on.


Based on some of my research, it seems they've completely exited the free side of the business. All of their plans are now paid, and the cheapest is $64/year.


I suspect GitHub actions put a massive dent in their product usage. I seem to remember they started to cut costs and restrict free usage some years back too, and that was the beginning of the end.


One annoyance with how Docker images are specified is that they include the location where they are stored. So if you want to change where you store your image, you break everyone.

I wonder if what regsitry.k8s.io does could be generalized:

https://github.com/kubernetes/registry.k8s.io/blob/main/cmd/...

The idea is that, depending on which cloud you are pulling the image from, they use the closest blob store to serve the request. This also means you could change the source of truth for the registry without breaking all Dockerfiles.


I am confused by the meaning of Docker's announcement. They keep saying "organizations" will have Docker images deleted. Does that include personal FOSS images or not? Because the vast majority of Docker Hub images are uploaded by individual contributors, not "organizations."

Too bad about their poor relationship with the FOSS community. I've applied to them for years, and actually merged some minor patches to Docker to help resolve a go dependency fiasco. Zero offers.

I guess the next logical move is to republish any and all non-enterprise Docker images to a more flexible host like the GitHub registry.


Has there been any work on making these centralised public repositories distributed?


Yes containerd supports pulling images from IPFS: https://github.com/containerd/nerdctl/blob/main/docs/ipfs.md
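Roughly, per the linked doc (hedging from memory here; the doc is authoritative):

    # Push an image to IPFS; nerdctl prints a CID on success.
    nerdctl push ipfs://alpine:3.17

    # Pull it back anywhere on the IPFS network by that CID.
    nerdctl pull ipfs://<CID>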


Except you need to pay for pinning for IPFS files to actually persist, and therein lies the crux: people don't have money to pay for their small-time Docker containers.

Some of the IPFS hosts that will give you a Dropbox of sorts still end up costing roughly $20/month, similar to Docker's $25/month for a team account.


I was just thinking this seems like an ideal use case for IPFS or bittorrent, since most users will already be running on a server.

It doesn't seem unreasonable to have the client automatically pin/seed the container it pulls.


This has been happening recently with Helm chart repositories etc. Maybe it's time people started hosting their own Container Registries?

The huge bandwidth requirements are an incentive to keep images small.


Would it be possible to build something on top of DHT + BitTorrent? The main issue seems to be image discovery.


Hit me up if you want to discuss using BitTorrent to back images. https://github.com/anacrolix/torrent


What work needs to be done? Provision a server somewhere and host it. AWS has one-click "give me a docker hub" and "give me a git hub" products.


This is incredibly frustrating to deal with because of how deeply the registry name is baked into Dockerfiles and image names. We end up "mirroring" our base images, but there's some disconnect internally: our harbor.company/library/debian:bullseye is really just some random pull of library/debian:bullseye from Docker Hub.

Imagine if you needed to change mirrors for `apt` and as part of that process you had to change all of the names of installed packages because the hostname of the mirror formed part of the package's identifier.
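One partial mitigation, for podman/CRI-O-style tooling, is to do the rewrite at the configuration layer instead of in every Dockerfile; Docker Engine only offers the equivalent for docker.io itself via its registry-mirrors daemon setting. A sketch of /etc/containers/registries.conf, reusing the hostname from above as a placeholder mirror:

    [[registry]]
    prefix = "docker.io"
    location = "docker.io"

    [[registry.mirror]]
    location = "harbor.company/dockerhub-proxy"

With that in place, `FROM debian:bullseye` keeps its name but pulls are attempted against the mirror first, falling back to Docker Hub.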


Is there any progress on podman for Windows or any other way of running containers on Windows? I cannot wait for the day the development community doesn't need to rely on anything from this company.


I use podman and it works fine?

Podman Desktop runs podman machine for me at startup.

Containers set to restart automatically don’t restart across podman machine restarts but that hasn’t upset my workflow much (at all?). I just start containers as I need them.


I wonder if VSCode's Remote Container stuff works with Podman on Windows. Docker Desktop + VSCode + Remote Container dev + the ESP-IDF tooling is the nicest way to do production ESP32 dev with multiple ESP-IDF versions (and even without that it's just much simpler to get up and running on Windows, despite the Rube Goldberg machine-esque description of it).


I've been playing around with it recently on macos and Podman is still a bit hit or miss. It seems mostly to be around assumptions of permissions. I switched over to colima (which is specifically for macos) and have had next to no issues though. Hopefully podman is able to tick off those last few boxes and make it properly stable in this use case.


Both Podman Desktop and Rancher Desktop are viable options for running Linux containers on Windows.


Is there something in particular missing? I have been using Podman for Windows almost daily for the past six months. There is no management GUI built in like Docker for Windows, but I have not found that to be a problem at all.


This is cool, and I'll explain why:

- Cool because advocating for a proprietary technology hidden behind Open Source and Free Software purism is a fallacy.
- Cool because, by these actions, you improve their business at the expense of users' freedoms.

And most of all, cool because I hope you and others will take the time to ask yourselves, before advocating a technology:

Is this technology good for my rights and, by extension, for the knowledge of all?


It's going to cost companies a lot more money to migrate their docker dependencies than simply pay their open source dependency maintainers to keep the org intact. Might as well become a project sponsor!

This could be the best thing to happen to open source projects if the argument is framed correctly. You don't solve a cost problem by pursuing a more costly alternative.


From the article:

> If you are able to completely delete your organisation, then you could re-create it as a free personal account. That should be enough to reserve the name to prevent hostile take-over. Has Docker forgotten Remember leftpad?
>
> This is unlikely that large projects can simply delete their organisation and all its images.
>
> If that's the case, and you can tolerate some downtime, you could try the following:
>
> - Create a new personal user account
> - Mirror all images and tags required to the new user account
> - Delete the organisation
> - Rename the personal user account to the name of the organisation

Seems like no?

We cannot rename personal accounts on Docker Hub in 2023. There is no such feature in account settings, and here is the related issue: https://github.com/docker/roadmap/issues/44

So, at the moment, any public organization images are doomed to be lost if the organization doesn't pay.


Yeah, the procedure outlined in the article is not going to work, so don't even attempt it. There's no way to rename a personal account right now.


It was sad to see people defending Docker Desktop changing from free to paid licenses. Now Docker is charging for even more things that used to be free.

The defenders are reaping what they have sown. Next time a company starts to charge for things that used to be free, remember not to encourage it, because that will only make it happen more.

People don't like this and many of them are not going to trust Docker in the future.


Nothing a company does is free to them. To expect them to provide a free service at all, let alone one with high costs associated with it, is not reasonable. They don't owe the world free service, same for any other company.