
One thing that confuses me with Docker is how you configure your containers to communicate with each other.

So say I have a fancy Django image, and a fancy Postgres image.

How do I then have the Django one learn the Postgres one's IP, authenticate (somehow), and then communicate separately?

Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self-contained)? And how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way to do this?

Do service registration/discovery things for Docker already exist?




> One thing that confuses me with Docker is how you configure your containers to communicate with each other.

Docker now supports linking containers together:

http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...
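As a rough illustration of what linking looks like on the command line (the image and container names here are made up, and the single-dash flags match the CLI at the time of that release; later versions spell them --name and --link):

    # Start a named Postgres container, then link a Django container to it.
    docker run -d -name db my-postgres-image
    docker run -d -link db:db my-django-image

    # Inside the Django container (assuming the Postgres image EXPOSEs 5432),
    # the link shows up as environment variables such as DB_PORT_5432_TCP_ADDR
    # and DB_PORT_5432_TCP_PORT, which the app can read to find Postgres.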

> Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self-contained)

The recommended advice for production is to create a persistent volume with 'docker run -v', and to re-use volumes across containers with 'docker run -volumes-from'.
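A minimal sketch of that pattern (image and container names are placeholders):

    # Keep the data directory in a docker-managed volume:
    docker run -d -name db -v /var/lib/postgresql/data my-postgres-image

    # Other containers can mount the same volume, e.g. to take a backup:
    docker run -volumes-from db busybox tar cf /backup.tar /var/lib/postgresql/data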

Mounting directories from the host is supported, but it is a workaround for people who already have production data outside of docker and want to use it as-is. It is not recommended if you can avoid it.
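That host-mount workaround looks roughly like this (paths are examples only):

    # Bind-mount an existing host directory into the container:
    docker run -d -v /srv/pgdata:/var/lib/postgresql/data my-postgres-image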

Either way, you're right, it is an exception to the self-contained property of containers. But it is limited to certain directories, and docker guarantees that outside of those directories the changes are isolated. This is similar to the "deny by default" pattern in security. It's more reliable to maintain a whitelist than a blacklist.



