Docker Hub Registry is down (docker.com)
116 points by 0vermorrow 28 days ago | 32 comments

Regular reminder that Docker Hub is not really an enterprise registry with an SLA. You should use pretty much anything else for serious applications that rely on pulling images in the hot path (such as auto-scaling up).

I wish I'd known this 2 years ago; going to have to migrate to something that isn't down every month.

Being paranoid helps. My pipelines never pull images from the hub, I always store those locally.

Do you have some kind of maintenance routine that pulls image updates? Without --pull, docker build won't fetch base image updates by default, so you can end up building on ancient Docker Hub images.
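For reference, the behavior described above can be avoided per-build or via a scheduled refresh; this is a sketch, and the image names are just examples:

```shell
# Force the build to re-check the base image instead of using a stale local copy
docker build --pull -t myapp:latest .

# Or refresh commonly used base images explicitly on a cron schedule
docker pull ubuntu:20.04
docker pull node:12-alpine
```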

That's right. Update on an as-needed basis only. Don't fix what's working ;)

It seems like Docker Hub requires a lot of bandwidth... lots of people pulling gigabytes worth of images every day. Does anyone know anything about the economics behind this? How can they offer it for free?

> Does anyone know anything about the economics behind this?

You use VC money and run at a loss while focusing on marketing and tech evangelism, getting more and more startups and hopefully established companies using your software. As the cracks begin to show, those growing organizations have too much tied to your system; they can't afford outages and need to scale. So they pay you for the Enterprise version of your software, where you actually fix all of the flaws present in the community version.

Look at MongoDB if you need a good case study. It was incredibly hyped from about 2009-2015, people would defend it in heated online arguments, and today it's rarely considered for greenfield projects. But they're making about $100M/qtr selling subscriptions to Enterprise & Atlas servicing the technical debt established during that hype cycle.

Likely the traditional model of taking a large amount of VC money, putting it in a pile, and then setting it on fire and waiting until they stop adding more, at which point the company ends.

yeah, there's a variety of "free at point of use" services driving the Internet, and sooner or later there will likely need to be a change in how they're funded.

It's not just Docker Hub; there are services like the various programming language package repos (npm, rubygems, etc.) and the Linux distro package repos.

I would have put Github in that category, but now it's owned by MS, so presumably they don't have that kind of funding problem...

Github used to be in their own colo on their own bare metal. I'm not sure if they've been pushed into Azure cloud as part of the MS acquisition, but either way Github isn't paying cloud retail ($$$) for their bandwidth, and it's likely sustainable.

Yep. Even before the acquisition, GitHub was doing fairly well back in 2016: https://news.ycombinator.com/item?id=13218842

What's the best way around this kind of outage?

1) You can run your own pull-through cache[0]

2) You can use a different registry

3) Run something like kraken[1] so machines can share already-downloaded images with each other

4) If you need an emergency response, you can docker save[2] an image on a box that has it cached and manually distribute it/load it into other boxes

0: https://docs.docker.com/registry/recipes/mirror/

1: https://github.com/uber/kraken

2: https://docs.docker.com/engine/reference/commandline/save/
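To make options 1 and 4 concrete, here's a rough sketch based on the docs linked above; hostnames, ports, and image names are placeholders:

```shell
# Option 1: run a local registry configured as a pull-through cache of Docker Hub
docker run -d -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  --name registry-mirror registry:2

# Then point each Docker daemon at the mirror via /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://registry-mirror.internal:5000"] }
# and restart dockerd.

# Option 4, the emergency path: copy a cached image between boxes by hand
docker save alpine:3.10 | gzip > alpine.tar.gz
scp alpine.tar.gz other-box:
ssh other-box 'gunzip -c alpine.tar.gz | docker load'
```

Note the pull-through cache only mirrors what it has already seen pulled; it doesn't pre-populate itself.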

Great response here.

I'd also add as an option - https://goharbor.io

It can build containers on demand from source as well as host your own repo.

Realistically, it's probably a cache/mirror.

If you can't build and deploy a new version of your app, you can probably live with it and grab a cup of coffee.

If your server fails over and your new server can't pull the current image, your app is potentially down, and that's a lot worse.

The math you do here is the cost of wasted time versus the cost for you to run your own registry with better uptime.

If you're an enterprise, you're likely already running Nexus, Artifactory, or some other artifact manager. The additional overhead to store containers in these systems is so close to zero, we can round down for our purposes. It's all blobs and SHA hashes anyway. Storage is cheap.

If you're not an enterprise, fall back to your cloud provider's container registry (which is likely backed by highly durable and reliable storage; AWS ECR on S3, for example). It's likely you are already using Jenkins or some other runner to build your own containers (and if you're not set up to do so with your CI pipeline, you should be); it's trivial to support caching to your cloud provider's registry as part of the container build/retrieval/deployment process. This functionality is a handful of properly written Jenkins jobs, based on my experience.
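As a rough illustration of that caching step (the account ID, region, and image names are made up), the Jenkins job boils down to something like:

```shell
# Pull once from Docker Hub, then retag and push into a private ECR repo
# so later deploys never depend on Hub being up
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker pull nginx:1.17
docker tag nginx:1.17 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/nginx:1.17
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/nginx:1.17
```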

I'm not trying to avoid going to get a coffee while you wait for external provider interfaces to settle when your systems are nominal. I'm trying to avoid those moments where you absolutely need to deploy an existing or updated container and can't (which you mention in your comment). Critical infra requires redundancy. Container storage is critical infra when part of a deployment process. One cannot say, "Sorry engineering, that hotfix can't go out until the registry is back up. See you in 30. exit stage right to coffee shop" in most environments.

EDIT: Also consider Docker's finances are precarious and it's possible they're going to suddenly go dark. Plan accordingly.

Disclaimer: Previously infra/devops engineer.

I don't disagree.

The opinion I gave pretty much would help you define what is critical.

Every single smart enterprise should be able to build from source, push to a registry, and deploy via a pipeline defined in source control. This likely gives you most of the DR you need, and it also helps you work as part of a team.
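A minimal version of that build/push/deploy pipeline, as a sketch (the registry hostname and app names are placeholders, and this assumes a Kubernetes deploy target):

```shell
# Build from source, push to your own registry, deploy --
# everything reproducible from source control
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t registry.internal/myapp:"$GIT_SHA" .
docker push registry.internal/myapp:"$GIT_SHA"
kubectl set image deployment/myapp myapp=registry.internal/myapp:"$GIT_SHA"
```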

However, when you take these decisions you should be able to quantify why. You can always spend more money on more 9s and tighter SLAs, but at some point you need to draw a line in the sand and call it good enough.

For a small startup, that might be before running their own registry, for a large ecommerce website it's probably after. Humorously, a tech first startup would likely do it, whilst a sales first established business probably won't, because neither of them are really quantifying their efforts.

Disclaimer: Also do infra and devops. These are concepts from the SRE book, though I don't use the SRE terms because I can't remember them off the top of my head. Interestingly, the book describes how Google deliberately injects errors into some products so that people don't come to rely on reliability levels that aren't sustainable.

Maintaining your own registry as a cache.

pull-through mirror with tons of disk

If only we had listened to our sysadmin...

So my CI environment requires access to other docker images, all hosted on Docker Hub.

Seems like the tech giants should load balance these images for the good of the Internet to provide some decent redundancy and for my sanity at 11.30pm.

Whenever one of our essential 3rd party services go down, I can only shrug and hope they figure it out quickly. They provide a good service and nobody has 100% uptime. Still better than solving it internally, which is even more likely to have downtime.

Partial failure is just a fact of life. If this is a major issue for your process, it might be better to find ways to alter your process so this isn't an issue. Alternatively, mirror locally.

You're absolutely right.

Being honest, no build is worth losing sleep over. We are piggybacking on their service and bandwidth. For us to start building the infrastructure to cache their images doesn't make financial sense; we deploy daily, and their uptime always allows for that.

No, you should rewrite your CI not to depend on external stuff, if you want sane evenings.

If you are getting paid to do CI you are simply doing it wrong.

Rule #1 Host your own stuff, never rely on others.

Rule #2 automate everything

This has broken fresh containerized deploys on Heroku, which is surprising since they run their own registry. They should be proxying Hub, it'd save them a ton of bandwidth.

What a coincidence: the same minute, all my Lightsail instances became unresponsive and then got stuck on "stopping" for 20 minutes.

Launched a new one... docker pull, bam, error. Customer unsatisfied.

Looks like it's back up now, whew!

It went from orange to red.

Incident Status: Full Service Disruption

uuugh. was just about to do some testing

More confidence for the folks at Docker Inc.
