You use VC money to run at a loss while focusing on marketing and tech evangelism, getting more and more startups, and hopefully established companies, using your software. As the cracks begin to show, those growing organizations have too much tied to your system: they can't afford outages and they need to scale. So they pay you for the Enterprise version of your software, where you actually fix all of the flaws present in the community version.
Look at MongoDB if you need a good case study. It was incredibly hyped from about 2009-2015, people would defend it in heated online arguments, and today it's rarely considered for greenfield projects. But they're making about $100M/qtr selling subscriptions to Enterprise and Atlas, servicing the technical debt established during that hype cycle.
It's not just Docker Hub; there are similar services like the programming-language package repos (npm, RubyGems, etc.) and the Linux distro package repos.
I would have put GitHub in that category, but now that it's owned by MS, presumably they don't have that kind of funding problem...
2) You can use a different registry
3) Run something like Kraken so machines can share already-downloaded images with each other
4) If you need an emergency response, you can docker save an image on a box that has it cached and manually distribute and load it onto other boxes (sketched below)
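A minimal sketch of option 4; the image and host names here are hypothetical:

    # On a box that still has the image cached (names are placeholders):
    docker save myorg/app:1.2.3 | gzip > app-1.2.3.tar.gz
    scp app-1.2.3.tar.gz deploy@other-box:/tmp/
    # On the receiving box:
    gunzip -c /tmp/app-1.2.3.tar.gz | docker load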
I'd also add as an option: https://goharbor.io
If you can't build and deploy a new version of your app, you can probably live with it and grab a cup of coffee.
If your server fails over and the new server can't pull the current image, your app is potentially down, and that's a lot worse.
The math you do here is the cost of wasted time versus the cost of running your own registry with better uptime.
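To make that math concrete, a rough back-of-envelope with purely illustrative numbers:

    # Hypothetical figures, adjust for your team:
    # 3 engineers blocked 2 hours per quarter at $100/hr
    outage_cost=$((3 * 2 * 100))       # $600/quarter of wasted time
    # vs. a small self-hosted registry: ~$150 VM/storage + ~2 hrs upkeep
    registry_cost=$((150 + 2 * 100))   # ~$350/quarter

If the first number stays small, grabbing the coffee wins.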
If you're not an enterprise, fall back to your cloud provider's container registry (which is likely backed by highly durable and reliable storage; AWS ECR and S3, for example). It's likely you're already using Jenkins or some other runner to build your own containers (and if your CI pipeline isn't set up to do so, it should be); it's trivial to support caching to your cloud provider's registry as part of the container build/retrieval/deployment process. In my experience, this functionality amounts to a handful of properly written Jenkins jobs (one is sketched below).
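As a sketch of one such job, assuming AWS ECR; the account ID, region, and image are placeholders, and the mirror repository is assumed to already exist:

    # Log in to ECR (AWS CLI v2), then mirror a Docker Hub image into it:
    aws ecr get-login-password --region us-east-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker pull library/nginx:1.25
    docker tag library/nginx:1.25 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/nginx:1.25
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/nginx:1.25

Deploys then pull from the ECR copy, so a Hub outage only blocks refreshing the cache, not shipping.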
I'm not trying to avoid going to get a coffee while you wait for external provider interfaces to settle when your systems are nominal. I'm trying to avoid those moments where you absolutely need to deploy an existing or updated container and can't (which you mention in your comment). Critical infra requires redundancy. Container storage is critical infra when part of a deployment process. One cannot say, "Sorry engineering, that hotfix can't go out until the registry is back up. See you in 30. exit stage right to coffee shop" in most environments.
EDIT: Also consider Docker's finances are precarious and it's possible they're going to suddenly go dark. Plan accordingly.
Disclaimer: previously an infra/devops engineer.
The opinion I gave would pretty much help you define what is critical.
Every single smart enterprise should be able to build from source, push to a registry, and deploy via a pipeline defined in source control (a minimal sketch follows). This likely gives you most of the DR you need, and it also helps you work as part of a team.
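A minimal sketch of that pipeline, assuming a hypothetical internal registry and a Kubernetes deploy step:

    # Build from source, push to your own registry, roll out; all driven from SCM:
    TAG=$(git rev-parse --short HEAD)
    docker build -t registry.internal.example/app:"$TAG" .
    docker push registry.internal.example/app:"$TAG"
    kubectl set image deployment/app app=registry.internal.example/app:"$TAG"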
However, when you take these decisions you should be able to quantify why. You can always spend more money on more 9s and tighter SLAs, but at some point you need to draw a line in the sand and call it good enough.
For a small startup, that line might fall before running their own registry; for a large e-commerce website, it's probably after. Humorously, a tech-first startup would likely do it, whilst a sales-first established business probably won't, because neither of them is really quantifying their efforts.
Disclaimer: I also do infra and devops. These are concepts from the SRE book, though I don't use the SRE terms because I can't remember them off the top of my head. Interestingly, the book describes how Google injects errors into some products so that people don't come to rely on reliability attributes that aren't sustainable.
Seems like the tech giants should mirror and load-balance these images for the good of the Internet, to provide some decent redundancy, and for my sanity at 11:30pm.
Partial failure is just a fact of life. If this is a major issue for your process, it might be better to find ways to alter your process so it isn't. Alternatively, mirror locally.
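Mirroring locally can be as small as running the stock registry image as a pull-through cache of Docker Hub; the port and container name below are arbitrary:

    # Pull-through cache of Docker Hub using the official registry:2 image:
    docker run -d --restart=always -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      --name hub-mirror registry:2
    # Point each Docker daemon at it via /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }
    # then restart dockerd.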
Being honest, no build is worth losing sleep over. We are piggybacking on their service and bandwidth. For us, building the infrastructure to cache their images doesn't make financial sense; we deploy daily, and their uptime always allows for that.
Rule #1: Host your own stuff; never rely on others.
Rule #2: Automate everything.
Launched a new one... docker pull, bam, error. Customer unsatisfied.
Incident Status: Full Service Disruption