Docker Hub still does not have any form of 2FA. Even SMS 2FA would be something at this point.
As an attacker, I would put a great deal of focus on attacking companies’ registries on Docker Hub. They can’t have 2FA, so the work/reward ratio is quite high.
Good time to point out that if you’re depending on public Docker images, pin them by their SHA256 digest. Named tags are mutable, and you can’t depend on your layer cache for stability: any of it can change under your nose on a new build, and you’ll waste a few hours sourcing the bug before you even know what changed. The SHA digest protects you from that because it can only ever refer to one version (quick example below).
(e.g. I’ve had a few cases where a Ruby image switched from Debian 9 to Debian 10 on most of its tags. It broke a lot of stuff, and you just had to guess that they kept the old version under its own tag, because Docker Hub didn’t say anything about it.)
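For illustration, a minimal sketch of digest pinning; the ruby:2.7 tag is just an example, and the digest shown is a placeholder, not a real one:

    # Resolve the digest behind a tag you currently trust
    docker pull ruby:2.7
    docker inspect --format '{{index .RepoDigests 0}}' ruby:2.7
    # -> ruby@sha256:<64-hex-digest>   (placeholder)

    # In the Dockerfile, the tag is now only documentation; the digest is what gets pulled
    FROM ruby:2.7@sha256:<64-hex-digest>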
It's easy.
You can host a Docker registry yourself (e.g. Harbor or Portus), or use a managed one, like Google Container Registry if you use GCP / GKE.
To publish an image to your own registry, you just need to pull from Docker Hub, re-tag the image, and push to your own repo. It's three commands.
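Something like this, assuming a private registry at registry.example.com (the hostname and image names are placeholders):

    docker pull ruby:2.7
    docker tag ruby:2.7 registry.example.com/mirror/ruby:2.7
    docker push registry.example.com/mirror/ruby:2.7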
You can add some automated and/or manual steps to check the images for security flaws before publishing them.
They are supposed to. But as far as I remember, among the bad practices perpetrated by the Docker developers there is even an unshakeable stance against adding an option to mainline Docker so that it doesn't try to connect to Docker Hub when searching for an image to download.
I can't speak for all of those who downvoted you, but the comment you responded to mentioned how SMS based 2FA would be better than what they do today (i.e. nothing).
This is a fact. SMS 2FA, regardless of how bad it is, is still another hurdle an attacker would have to overcome. An additional hurdle, no matter how small, is still better than nothing at all. Therefore the assertion that SMS 2FA would be better than what they do today is simply an irrefutable fact.
If you left off the "Oh god please no." portion of your comment, you may not have been downvoted.
Generally companies treat SMS 2FA as an additional check, so it's a security improvement. But some companies then also allow it to be used for password recovery, which is generally a security regression. Multiple companies have also used SMS 2FA numbers for ad targeting.
Not really. It means I now have to prove to the site that I got my SIM hijacked, and I have to go to a ton of trouble getting my phone number back.
Seriously, auth over SMS should not only be frowned upon, but illegal. It's a nice cover-your-ass move for the site that does it, but if you do 2FA any way other than with a U2F physical token, you should not be allowed near a computer.
You would be surprised at how large a hurdle this is, even inside 'serious companies', when stable external/internal artifacts come up. A lot of developers who cut their teeth on these public repositories complain heavily about the 'friction' of ensuring that they have a reliable place to pull things from. This isn't unique to Docker; it also applies to Python, Ruby, Go, Node, and other modern packaging systems. The fact that three years from now, when they need to update the code, the vulnerable/deprecated package will be gone, will have changed URL, or whatever, seems to be a lesson that only gets learned the hard way.
AWS Docker registry (ECR) is a good middle ground since you don't have to maintain your own registry and you can take advantage of IAM to improve security.
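As a rough sketch, pushing to ECR with a recent AWS CLI looks something like this (the account ID, region, and repo name are placeholders; IAM policies on the repository then control who can push and pull):

    aws ecr create-repository --repository-name mirror/ruby
    aws ecr get-login-password --region us-east-1 \
        | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
    docker tag ruby:2.7 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/ruby:2.7
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/mirror/ruby:2.7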
Similarly there is Azure Container Registry. Only issue I've found with it is that access permissions are not granular enough - for example, you can't specify that a user/service principal should only be able to access particular tags or repos. Not sure how that compares to the AWS and GCP offerings?
This is pretty much a requirement for any serious company, but it still doesn't solve the problem completely as lots of builds depend on public docker hub containers. When the docker hub outage happened earlier this week our production systems were fine, but our dev processes slowed down quite a bit until it was resolved.
Unfortunately though docker hub really is a dumpster fire of a service, otherwise I'd have no problem just using their private repo features. However there are so many problems with that site that it's easier just to jump to AWS ECR or Github's new service.
Same reason they use public apt/rubygems/npm/etc. Same reason they use a secured, but public, Github/Gitlab. Same reason many use hosted CI, bug tracker, APM, etc.
We're talking about pinning the underlying images to a fixed version, not about baking secrets into the image.
And there are plenty of use cases for proprietary images. For example, I have built workflows where the proprietary code is layered on top of open-source code. (Although if I were to redo those CI/CD pipelines again, I would use the de facto buildpack standard instead; even then, the buildpack slug itself would be proprietary and require an artifact store of some kind.)
Either way, using a GCP or AWS registry that integrates with their IAM systems works well. Newer versions of Docker contain a plugin system, so both the GCP and AWS command-line tooling can hook their own authentication mechanisms in there.
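Concretely, that hooking happens via credential helpers in ~/.docker/config.json; a sketch (the registry hostnames are examples, and the docker-credential-gcloud / docker-credential-ecr-login binaries have to be installed separately):

    {
      "credHelpers": {
        "gcr.io": "gcloud",
        "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
      }
    }

Each key maps a registry hostname to the suffix of a docker-credential-<name> binary on your PATH, which Docker invokes to fetch short-lived credentials.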
This is the exact class of problem that docker itself attempts to avoid. This is why I run docker-compose inside a docker container, so I can control exactly what it has access to and isolate it. There's a guide to do so here[1]. It has the added benefit of not making users install docker-compose itself - the only project requirement remains docker.
Mount the Docker socket. There are some quirks with storage volume paths, and there are security implications, but it was not super hard to get working.
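For the curious, the core of it is roughly this (the compose image tag is just an example; note that mounting the socket effectively hands the container root-equivalent control of the host's Docker daemon):

    # Mounting $PWD at the same path keeps relative volume paths in
    # docker-compose.yml resolving correctly, since they are resolved by
    # the host daemon, not inside this container.
    docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$PWD:$PWD" -w "$PWD" \
        docker/compose:1.25.0 up -d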
I'd love to go straight to containerd or even basic linux containers but I'm not willing to run kubernetes on my personal machine and haven't found any ergonomic enough ways to run containers.
The others are probably fine, but for anyone thinking about this, minikube uses 50% CPU even on powerful machines for no reason [0]. I switched to kind and it works perfectly, super lightweight.
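For reference, spinning up and tearing down a kind cluster is roughly this (the cluster name is arbitrary):

    kind create cluster --name dev          # boots a single-node cluster in a container
    kubectl cluster-info --context kind-dev # kind registers a kubectl context for it
    kind delete cluster --name dev          # clean up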
I was going to use Docker Compose for setting up a lot of my self-hosted stuff, since so many projects already had Compose files in their git repos and it seemed easy to leverage that. Early on I got super frustrated with Compose though (can't remember all the reasons why) and ended up just writing my own custom provisioner:
It has unit tests, but not a lot of good error messages, and it's pretty specific to the stuff I host. I'm glad I did it though; it was a great learning experience around the Docker API and how the internals of the Docker Engine work. I still use it to maintain all my self-hosted sites and tooling.
There are a lot of good libraries around the Docker API for Ruby, Python, Java/Scala, etc. If you're on a greenfield project setting up your local Docker environments, and have the time, I'd almost suggest trying to build your own tooling from scratch rather than leveraging docker-compose at this point.
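If you want a feel for what those libraries are wrapping, you can poke the Engine API directly over the local socket; a sketch assuming the default socket path:

    # List running containers straight from the Docker Engine API
    curl --silent --unix-socket /var/run/docker.sock http://localhost/containers/json

    # Pull an image via the API (streams progress as JSON lines)
    curl --silent --unix-socket /var/run/docker.sock \
        -X POST "http://localhost/images/create?fromImage=nginx&tag=alpine"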
I actually really like Docker Compose, and especially Swarm. Despite being a relative Docker noob, I've found it to be relatively straightforward to use, and the configuration and secret options are pretty comprehensive. Haven't hit any issues I recall.
Would CNAB work with a custom provisioner, or would support for that need to be coded in?
I use Docker Compose and Traefik (for routing) and self-host a bunch of stuff, given that most self-hostable programs have some type of Docker support.
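A minimal sketch of that kind of setup, assuming Traefik v2 and placeholder hostnames/images:

    version: "3.7"
    services:
      traefik:
        image: traefik:v2.2
        command: --providers.docker=true
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      whoami:
        image: containous/whoami
        labels:
          # Traefik watches the Docker socket and routes by these labels
          - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"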
The workaround that I've found for this problem is to set "AUTOENV_ENV_FILENAME=.env.user" and then inside the project folder, I'm now using ".env.user" instead of just ".env".
I'll use this hack until the Docker guys fix Docker Compose.
Applications running inside the container would see the mounted file as .env, while docker-compose would still read the original .env file on the host filesystem.
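In compose terms that idea would look something like this (the .env.container filename and /app path are made up for the example):

    services:
      app:
        volumes:
          # The process inside the container reads this as .env, while
          # docker-compose on the host keeps reading the real ./.env
          # for its own variable substitution.
          - ./.env.container:/app/.env:ro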
They do mention that being an option in the comments on the bug report, but still sound the alarm as that’s (seemingly) the only option.
One-off fixes are possible, but that’s not a viable solution at scale.
I don't get it. I'm not using docker just to run containers. I'm also using it to build images. Therefore having more container runtimes doesn't actually allow me to switch away from docker.
Call me ol' fashioned, but if I'm not administrating hundreds or thousands of micro-services... I just use "Plain Ol' Unix Processes on ZFS," or as I lovingly call it "POUPZ."
I can honestly say this exact issue is impossible for anyone who POUPZ. If you haven't tried POUPZ in production, or even POUPZ at home, my recommendation is to give it a try.
I think you'll be pretty glad for once that you POUPZ where you eat.
Does this address the package-like behavior that is one of docker's big selling points? I guess I could imagine building something out of zfs send+receive, but it feels like you'd need tooling to make it nice, and now you've reinvented the wheel.
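For concreteness, the zfs send+receive flow the parent imagines looks roughly like this (pool/dataset names and hosts are examples):

    # Ship an app's dataset to another machine
    zfs snapshot tank/apps/myapp@v1.2.0
    zfs send tank/apps/myapp@v1.2.0 | ssh deploy@otherhost zfs receive tank/apps/myapp

    # Later, send only the delta between two snapshots
    zfs send -i tank/apps/myapp@v1.2.0 tank/apps/myapp@v1.3.0 \
        | ssh deploy@otherhost zfs receive -F tank/apps/myapp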