Docker Compose reads your “.env” without opt-out (github.com/docker)
160 points by otobrglez on Oct 17, 2019 | 77 comments



Docker Hub still does not have any form of 2FA. Even SMS 2FA would be better than nothing at this point.

As an attacker, I would put a great deal of focus on attacking companies’ registries on Docker Hub. They can’t have 2FA, so the reward-to-effort ratio is quite high.


Good time to point out that if you’re depending on public docker images, use the SHA digest reference. Named tags are mutable, and you can’t depend on your layer cache for stability. Any of that can change under your nose on a new build, and you won’t know until you’ve wasted a few hours sourcing the bug. The digest protects you from that because it can only ever refer to one image.

(e.g. I’ve had a few cases where a ruby image switched from Debian 9 to Debian 10 on most of its tags. It broke a lot of stuff, and you just had to guess that they’d kept the old version on its own tag, because Docker Hub didn’t say anything about it.)
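
To illustrate, a sketch of digest pinning in a Dockerfile (the digest below is a placeholder; `docker images --digests` will list the real ones you have locally):

  # Look up the digest of the image you've already pulled and vetted:
  #   docker images --digests ruby
  # Then pin to it instead of a mutable tag (placeholder digest shown):
  FROM ruby@sha256:0000000000000000000000000000000000000000000000000000000000000000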


I haven't done a ton of Docker DevOps at scale, but shouldn't teams be pulling images, vetting them, and then hosting them on their own hub?


That'd probably be ideal but in reality most people don't do that.


Worse, I've had people scoff at the idea.


And then things like the Docker Hub outage this week happen and people wonder why their pipelines grind to a halt.


How hard is it to set up your own hub for private use?


It's easy. You can host a Docker registry yourself (e.g. Harbor or Portus), or use a managed one, like Google Container Registry if you use GCP / GKE. To publish an image to your own registry, you just pull from Docker Hub, re-tag the image, and push to your own repo. It's three commands. You can add automated and/or manual steps to check the images for security flaws before publishing them.
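
For the record, a sketch of those three commands (registry.example.com and the mirror path are placeholders):

  docker pull ruby:2.6
  docker tag ruby:2.6 registry.example.com/mirror/ruby:2.6
  docker push registry.example.com/mirror/ruby:2.6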


Basically a docker run, but if you need it in production there are a couple of other things you need to do.

https://docs.docker.com/registry/configuration/
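
The quickstart from those docs boils down to one command (add TLS and auth before exposing it anywhere):

  # run a local registry on port 5000, restarting with the daemon
  docker run -d -p 5000:5000 --restart=always --name registry registry:2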


They are supposed to. But as far as I remember, among the bad practices perpetrated by the Docker developers is an unshakeable stance against adding an option to mainline Docker so it doesn't try to connect to Docker Hub when searching for an image to download.


Can you imagine a team volunteering for that? Nah, they’ll hope for the best from the public registry.

The way people tackle their dependencies inside docker is utterly irresponsible.


> Even SMS 2FA would be better than nothing at this point.

Oh god please no.

TOTP isn't that difficult to set up. I'll even help make it happen. Don't use SMS for 2FA.


I’d argue it’s easier to set up TOTP, as you don’t need to sign up for anything (i.e. SMS delivery).

Plus it works on a locked down intranet. Just don’t forget to run ntp!


SMS 2FA is better than nothing if, and only if, you don't allow password resetting by owning the SMS.


TOTP is better than SMS in that it's secure with fewer caveats.

Why am I being downvoted?

I'm literally willing to volunteer days of my time, unpaid, to prevent SMS 2FA in favor of something more secure (i.e. TOTP).


> Why am I being downvoted?

I can't speak for all of those who downvoted you, but the comment you responded to mentioned how SMS based 2FA would be better than what they do today (i.e. nothing).

This is a fact: SMS 2FA, regardless of how bad it is, is still another hurdle an attacker would have to overcome, and an additional hurdle, no matter how small, is better than nothing at all. So the assertion that SMS 2FA would be better than what they do today is simply irrefutable.

If you left off the "Oh god please no." portion of your comment, you may not have been downvoted.


SMS 2FA includes the negative energy of "we have this, so we don't need TOTP or something better." It may well be a net negative.

The corollary to don't let the perfect be the enemy of the good is don't let the barely better be the enemy of the substantially better.


Generally companies treat the SMS 2FA as an additional check, so it's a security improvement. But some companies also then allow it to be used for password recovery, which is generally a security regression. Also multiple companies have used SMS 2FA numbers for ad targeting.

https://news.ycombinator.com/item?id=21197553


Not really. It means I now have to prove to the site that I got my SIM hacked, and I have to go to a ton of trouble getting my phone number back.

Seriously, auth over SMS should not only be frowned upon, but illegal. It's a nice cover-your-ass for the site that does it, but if you do 2FA any way that doesn't use a U2F physical token, you should not be allowed near a computer.


You're right, but guess what VISA 3DSecure is using?


It’s using whatever your bank is using, and my bank does auth via their mobile app, not SMS. Although I don’t see 3D-Secure being used too often.


Why would any serious company not use a private registry?


You would be surprised at how large a hurdle this is, even inside 'serious companies', when stable external/internal artifacts come up. A lot of developers who cut their teeth on these public repositories complain heavily about the 'friction' of ensuring that they have a reliable place to pull things from. This isn't unique to docker; it also includes python, ruby, go, node, and other modern packaging systems. The fact that three years from now, when they need to update the code, the vulnerable/deprecated package will be gone, or its URL changed, seems to be a lesson everyone learns the hard way.


Really, the most difficult part isn't mirroring everything, but managing your dependencies without using the latest tag.


AWS Docker registry (ECR) is a good middle ground since you don't have to maintain your own registry and you can take advantage of IAM to improve security.
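
A sketch of the login flow with the AWS CLI v2 (the account ID and region are placeholders):

  aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com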


Some CICD services such as GitLab already provide docker image registries per project for free, and enable users to create any number of projects.


Similarly GCP registry if you are on GCP.

I have used them. It isn't that much of a hardship if you are already cloud native.

Companies with legacy or hybrid infrastructure may need to jump through additional hoops.


Similarly there is Azure Container Registry. Only issue I've found with it is that access permissions are not granular enough - for example, you can't specify that a user/service principal should only be able to access particular tags or repos. Not sure how that compares to the AWS and GCP offerings?


This is pretty much a requirement for any serious company, but it still doesn't solve the problem completely as lots of builds depend on public docker hub containers. When the docker hub outage happened earlier this week our production systems were fine, but our dev processes slowed down quite a bit until it was resolved.

Unfortunately, though, Docker Hub really is a dumpster fire of a service; otherwise I'd have no problem just using their private repo features. There are so many problems with that site that it's easier just to jump to AWS ECR or GitHub's new service.


Same reason they use public apt/rubygems/npm/etc. Same reason they use a secured, but public, Github/Gitlab. Same reason many use hosted CI, bug tracker, APM, etc.


If you follow best practices your containers don't contain secrets; distributing credentials to access a private registry is itself a bit of overhead.

Not everyone makes/uses software where the software itself is secret.


We're talking about pinning the underlying images to a fixed point, not about baking secrets into the image.

And there are plenty of uses for proprietary images. For example, I have built workflows where the proprietary code is layered on top of open-source code. (Although if I were to redo those CI/CD pipelines, I would use the de facto buildpack standard instead; even then, the buildpack slug itself would be proprietary and require an artifact store of some kind.)

Either way, using a GCP or AWS registry that integrates with their IAM systems works well. Newer versions of Docker contain a plugin system, so both the GCP and AWS command-line tooling can hook their own authentication mechanisms in there.
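
For example, on GCP the hook-in is a single command (assuming the gcloud CLI is installed):

  # registers a credential helper for gcr.io registries by adding
  # a credHelpers entry to ~/.docker/config.json
  gcloud auth configure-docker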


I think the risk in this case is an attacker writing to the registry, not reading it.

If I can push changes to your 'prod' tagged image, I'm going to have a very good time.


No kidding. Tensorflow/Nvidia GPU docker containers are an industry standard for AI. The security implications of frequent internet updates are scary.


Docker Hub 2FA was launched today


This is the exact class of problem that Docker itself attempts to avoid, which is why I run docker-compose inside a Docker container: I can control exactly what it has access to and isolate it. There's a guide to doing so here[1]. It has the added benefit of not requiring users to install docker-compose itself; the only project requirement remains Docker.

1: https://cloud.google.com/community/tutorials/docker-compose-...
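
One common shape for this, sketched (it mounts the host Docker socket; pin whatever docker/compose image tag you've vetted):

  docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" -w "$PWD" \
    docker/compose:1.24.1 up -d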


> docker-compose inside a docker container

Do you use Docker-in-Docker or do you mount the docker socket inside your docker-compose container?

Oh dear god .. it's Docker all the way down.


Mount the docker socket. There are some quirks with storage volume paths, and security implications. It was not super hard to get working, though.

I'd love to go straight to containerd or even basic Linux containers, but I'm not willing to run Kubernetes on my personal machine and haven't found any sufficiently ergonomic way to run containers otherwise.


Check out https://podman.io

Like docker (uses OCI images) but daemonless.


I thought it didn't even support compose-like functionality? Or did they add that now?


Docker-compose is an add-on script that only automates how containers are launched/shut down.



I'm more than willing to run K8S on my personal machine in the form of microk8s, k3s, minikube, and similar cut-down versions of k8s.


The others are probably fine, but for anyone thinking about this, minikube uses 50% CPU even on powerful machines for no reason [0]. I switched to kind and it works perfectly, super lightweight.

[0] https://github.com/kubernetes/minikube/issues/3207


I have used minikube before. Back when it was using localkube, it was ok. It isn't as lean now (it uses kubeadm to bring up the full suite).

microk8s might not give as much of a gain.

k3s from Rancher actually cuts out a lot of code and, from what I hear, can run fine on Raspberry Pis.

I have not heard of "kind", but neat.


Using the KVM driver reduces minikube's CPU waste a lot, but it's only supported on Linux...


Do you have any article links about your kind setup?


I don't know about the other guy's setup, but here's the github repo: https://github.com/kubernetes-sigs/kind

KIND - Kubernetes In Docker
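
For reference, spinning up a cluster with kind is minimal (assuming kind and kubectl are installed):

  kind create cluster                        # boots a single-node cluster inside a Docker container
  kubectl cluster-info --context kind-kind   # the default cluster is named "kind"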


You can run a docker-compose.yml from any folder in the tree, but it only reads the .env from the cwd. Just cd somewhere else and run docker-compose from there.


This works. I discovered this by accident trying to figure out why my .env files were not being read.

I was in a different dir from the docker-compose.yml file, launched docker-compose with the -f filename option, and could not get .env to load.
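
A minimal repro of that behavior (paths hypothetical):

  cd ~/project && docker-compose up
  # reads ~/project/.env

  cd ~ && docker-compose -f ~/project/docker-compose.yml up
  # looks for ~/.env; the .env next to the compose file is ignored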


Thank you for the clarification.


I was going to use Docker Compose for setting up a lot of my self-hosted stuff, since so many projects already had docker-compose files in their git repos and it seemed easy to leverage that. Early on I got super frustrated with Compose, though (can't remember all the reasons why), and ended up just writing my own custom provisioner:

https://github.com/sumdog/bee2

It has unit tests, but not a lot of good error messages, and is pretty specific to the stuff I host. I'm glad I did it though; it was a great learning experience around the Docker API and how the internals of the Docker Engine work. I still use it to maintain all my self-hosted sites and tooling.

There are a lot of good libraries around the Docker API for Ruby, Python, Java/Scala, etc. If you're on a greenfield project setting up your local docker environments, and have the time, I'd almost suggest building your own tooling from scratch rather than leveraging docker-compose at this point.


I actually really like Docker Compose, and especially Swarm. Despite being a relative Docker noob, I've found it straightforward to use, and the configuration and secret options are pretty comprehensive. I haven't hit any issues that I recall.

Would CNAB work with a custom provisioner, or would support for that need to be coded in?


I did exactly what you said you had problems with: https://sdan.xyz/sd2.

I use Docker Compose and Traefik (for routing) and self-host a bunch of stuff, given that most self-hostable programs have some type of Docker support.


For reasons like this, and to avoid confusion, I always name my environment file something specific, like `.env.development` or `.env.dockercompose`.

Any system that reads `.env` files usually allows some way to specify the exact file to be read.


Huh, I wasn't aware of this! If nothing else, I like the idea of separate files for separate concerns.


As a workaround you could rename your .env file to something else and mount it as .env in the docker-compose volume options.
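
Sketched as a compose fragment (file names hypothetical):

  services:
    app:
      volumes:
        # host file is renamed so docker-compose won't auto-read it;
        # the app inside the container still sees it at /app/.env
        - ./env.app:/app/.env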


I'm using autoenv - https://github.com/inishchith/autoenv.

The workaround that I've found for this problem is to set "AUTOENV_ENV_FILENAME=.env.user" and then inside the project folder, I'm now using ".env.user" instead of just ".env".
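
Concretely, that's one line in the shell profile, before autoenv is sourced:

  # tell autoenv to look for .env.user instead of .env
  export AUTOENV_ENV_FILENAME=.env.user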

I'll use this hack until the Docker guys fix Docker Compose.


Does this (and the one linked to: https://direnv.net/docs/development.html) do anything that a simple bash entry can't:

  function lenv {
    # source .env from the new directory, if present
    if [ -f '.env' ]; then
      source ./.env
    fi
  }
  # wrap cd so every directory change re-checks for a .env
  function cd { builtin cd "$@" && lenv; }


Depending on how desperately you need to fix this and how hacky you're willing to go, LD_PRELOAD could help here as well (e.g. https://github.com/randombk/rust-ldpreload-read).


How does this help all the existing applications, tools, scripts etc. that use .env?


Applications running inside the container would see the mounted file as .env, while docker-compose on the host would only find the renamed file and no longer pick it up.


If you wanna make it work, this is sadly the only way until Docker Compose is patched.

Rename .env to something else and reconfigure autoenv.


Right. Usually workarounds actually, you know, work :-P


No great fan of docker and on a train with a demo laptop and no way to install it to test, but I would expect that

> docker-compose --env-file /dev/null

will get it to not read a .env file.

Will test later/tomorrow.


They do mention that being an option in the comments on the bug report, but still sound the alarm, as that's (seemingly) the only option. One-off fixes are possible, but that's not a viable solution at scale.


This doesn't work. At least not with 1.24.1.


Stuff like this is why docker alternatives exist.


Requiring a daemon with root privileges is another big reason to explore alternatives to docker.


Have you looked at podman? You can do daemonless, user-run containers.


I've heard of podman and buildah, but honestly I haven't looked too closely into it. They do sound like Docker done right.


I don't get it. I'm not using docker just to run containers. I'm also using it to build images. Therefore having more container runtimes doesn't actually allow me to switch away from docker.


This doesn't have anything to do with Docker itself, but specifically with docker-compose. What are the alternatives you suggest to that? Minikube?


Yeah, I like Kubernetes. But I’ll admit it’s not exactly the most inviting thing to get into. It’s cool once you get the hang of it.

But the attitude runs all the way through docker products: arbitrary decisions made for their benefit with no thought to the externalities.


garden.io is what we're replacing docker-compose with.


Call me ol' fashioned, but if I'm not administrating hundreds or thousands of micro-services... I just use "Plain Ol' Unix Processes on ZFS," or as I lovingly call it "POUPZ."

I can honestly say this exact issue is impossible for anyone who POUPZ. If you haven't tried POUPZ in production, or even POUPZ at home, my recommendation is to give it a try.

I think you'll be pretty glad for once that you POUPZ where you eat.


Does this address the package-like behavior that is one of docker's big selling points? I guess I could imagine building something out of zfs send+receive, but it feels like you'd need tooling to make it nice, and now you've reinvented the wheel.



