This is an interesting idea. It might be useful to have a high-level architecture diagram to illustrate what is happening between the containers.
A couple issues:
1. Using localhost seems like a bad idea (I have the expectation that traffic is local to the instance if using localhost).
2. Managing more than a couple of instances of this will be unmaintainable if done manually. There will need to be some controller logic in place.
Maybe I am missing something, but why not add a second dedicated virtual ethernet adapter that all your containers can talk over (this works across containers and across servers)? That is traditionally how you would handle something like this. We have dedicated non-routable reserved address space configured to carry all of this internal datacenter traffic.
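To make that concrete, here's a rough sketch of the kind of setup I mean; the bridge name and the 10.100.0.0/16 block are placeholders for whatever internal space you actually reserve, and the daemon flag may vary by Docker version:

# create a dedicated bridge backed by reserved internal address space
ip link add name br1 type bridge      # older hosts: brctl addbr br1
ip addr add 10.100.0.1/16 dev br1
ip link set br1 up

# point the Docker daemon at that bridge instead of docker0
docker -d -b br1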
That certainly seems preferable, but I guess "localhost" has the advantage of always being there (which is what you want for a service discovery bootstrap mechanism)
I believe Docker is guaranteed to have the docker0 interface[1] available, but perhaps the proposed mechanism doesn't want to tie itself to Docker so tightly?
It starts to fall over when you have, for example, a MySQL slave that needs to connect to a MySQL master: the slave is itself listening on 3306, yet it is expected to reach the master at localhost:3306.
I would strongly recommend that the CoreOS team work with the IETF/IANA (which maintain the special-use address registry) to get a special-use /24 assigned for these "magic addresses" rather than hijack 127.0.0.0/8.
You'd actually set it all up as you would if everything were on your localhost or in a dev environment. Meaning you'd need your master on 3306, slave1 on 3307, slave2 on 3308, etc. You'd still need to set up a configuration for each of these services, pointing them at one another, but the configuration would be fixed.
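For illustration, with made-up image and container names (and assuming a docker version that supports -p host:container mappings and --name), the fixed layout would look something like:

docker run -d -p 3306:3306 --name mysql-master some-mysql-image
docker run -d -p 3307:3306 --name mysql-slave1 some-mysql-image
docker run -d -p 3308:3306 --name mysql-slave2 some-mysql-image
# the slaves' configs then point at localhost:3306 for the master, and that never changes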
That seems sloppy when you could just stick to standard port assignments and use special-use addresses. If you'd like help getting special-use addresses assigned, I'd be happy to help where I can.
This is true if the application needs to talk to the slaves directly. If the application doesn't care, then you could do the smarts in the proxy layer underneath the application. I see three scenarios (a rough proxy sketch for the first two follows the list):

* Stateless or transparent master/master backend
  Example: Memcached cluster
  Use load balancing in the proxy layer

* Failover backend with failover on the server side
  Example: MySQL master/slave
  Use failover logic in the proxy layer

* Failover backend with failover on the client side
  Example: HA RabbitMQ cluster
  The suggestion from polvi above is needed here
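The sketch mentioned above, in HAProxy config syntax; every name and backend address is a placeholder rather than anything from the proposal, and failover for the third scenario still has to live in the client:

# scenario 1: stateless backend, plain load balancing
listen memcached 127.0.0.1:11211
    mode tcp
    balance roundrobin
    server mc1 10.0.0.11:11211 check
    server mc2 10.0.0.12:11211 check

# scenario 2: master/slave backend, failover handled in the proxy
listen mysql 127.0.0.1:3306
    mode tcp
    server master 10.0.0.21:3306 check
    server slave  10.0.0.22:3306 check backup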
I don't know how much it helps, but you can just listen on other IPs on the loopback network, with no need to plumb anything, e.g.
nc -l 127.0.0.2 9999
Then "all" you need to something to map the 127.0.0.n against the docker instance.
This even works on windows by the way. Just start up that second tomcat or whatever with the next 127.0.0.n address and enjoy.
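One hedged way to do that mapping without extra tooling is to bind each container's published port to its own loopback address (the names below are made up, and -p ip:hostport:containerport assumes a docker version that supports it):

docker run -d -p 127.0.0.2:3306:3306 --name db1 some-mysql-image
docker run -d -p 127.0.0.3:3306:3306 --name db2 some-mysql-image
# every instance keeps the standard port, each on its own 127.0.0.n address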
I agree that hijacking 127.0.0.1 is not a good idea, but I don't see any issue with picking something else in 127.0.0.0/8? In fact it seems like a perfect use case for that.
This is some pretty interesting stuff. I've been working on something similar in my spare time. The cost of running everything through a proxy can be mitigated by having the proxy do other smart things like load balancing and/or autoscaling.
"localhost" seems like a really bad name for this when in reality, the socket is anything but local. Instead of "localhost:3306" how about something like "docker:3306"?
localhost is but a name, which happens to be tied to ::1 or 127.0.0.1. I agree, though, that this should live outside the default 127.0.0.1 address; it could instead be anywhere on the 127.0.1.x network, for example.
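If you want the friendlier "docker:3306" name from the comment above, an /etc/hosts entry is enough; the exact 127.0.1.x address here is arbitrary:

echo "127.0.1.10 docker" >> /etc/hosts    # run as root
# now "docker:3306" resolves locally without touching 127.0.0.1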
That's incorrect. Everything that makes this proposal suitable for clustering also works for links with the ambassador pattern. The difference is in how the local discovery is implemented (retrieving a local IP address and port vs. a hardcoded loopback IP and port).
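For comparison, a hedged sketch of the link-based side (the image names are stand-ins, not real images): a linked container reads the ambassador's address from the environment variables Docker injects, rather than assuming a fixed loopback address:

docker run -d --name mysql-ambassador -p 3306:3306 example/ambassador
docker run --rm --link mysql-ambassador:mysql example/app \
    sh -c 'echo "$MYSQL_PORT_3306_TCP_ADDR:$MYSQL_PORT_3306_TCP_PORT"'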
But I am still hoping for a local OpenFlow-compatible software switch that can exploit network namespaces as a local optimization. That way you can leave your cluster's networking setup, from the network edge to the docker container, to a centralized controller.
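Something along these lines with Open vSwitch, for instance; the bridge name, controller address, and container PID are placeholders:

ovs-vsctl add-br ovsbr0
ovs-vsctl set-controller ovsbr0 tcp:10.0.0.5:6633   # hand the bridge to a central OpenFlow controller

# move one end of a veth pair into a container's network namespace
ip link add veth-host type veth peer name veth-cont
ovs-vsctl add-port ovsbr0 veth-host
ip link set veth-host up
ip link set veth-cont netns $CONTAINER_PID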