
I'm not familiar with how Docker works, so forgive the ignorance. I thought the point of docker images was portability? Is it not just taking the references and pointing to a new instance under your control?



Most production workloads don't use Docker directly, but rather treat images as a sort of "installation format" that other services schedule (spin up, connect, spin down, upgrade). A typical default is to always try to pull the image, even if the requested version is already in the node-local cache. On one hand this avoids the situation where, during registry downtime, only the nodes with a cached copy can start the service; on the other hand it blocks startup on every node. With such a setup, registry availability becomes mission-critical for continuous operation.
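In Kubernetes terms (one common scheduler for such workloads), this default is the pod's `imagePullPolicy`. A minimal sketch of the trade-off; the image name and registry host are illustrative:

```yaml
# Hypothetical pod spec; names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.4.2
      # Always: contact the registry on every start, so a registry
      # outage blocks startup even when the image is cached locally.
      # IfNotPresent: use the node-local cached image when available,
      # so pods can still start during registry downtime.
      imagePullPolicy: IfNotPresent
```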

Some people think it's perfectly reasonable to set the defaults to always pull, point at the latest tag, and keep no local cache or mirror. Judging from the number of upvotes on the OP, depending on a third-party remote with no availability SLA for production workloads seems to be the norm.
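One minimal way to remove that single point of failure is a local pull-through cache. A sketch assuming Docker and the stock `registry:2` image; the mirror hostname is a placeholder:

```shell
# Run the reference registry as a pull-through cache of Docker Hub.
docker run -d --name mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Point each node's Docker daemon at the mirror via /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://mirror.internal:5000"] }
# then restart the daemon. Pulls go through the mirror first, and
# already-cached layers remain available if Docker Hub is down.
```

This keeps the upstream registry out of the startup path without giving up fresh pulls when it is reachable.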


I'm not too familiar with Docker myself, but GitLab's self-hosted Omnibus includes a container registry that Just Works™ for our small team.



