Jumpers and the Software-Defined Localhost (coreos.com)
47 points by vishvananda on Jan 13, 2014 | 21 comments



This is an interesting idea. It might be useful to have a high-level architecture diagram to illustrate what is happening between the containers.

A couple issues:

1. Using localhost seems like a bad idea (I have the expectation that traffic is local to the instance if using localhost).

2. Managing more than a couple of instances of this will be unmaintainable if done manually. There will need to be some controller logic happening.

Maybe I am missing something, but why not add a second dedicated virtual ethernet adapter where all your containers can talk (works across containers and across servers)? This is traditionally how you would handle something like this. We have dedicated nonroutable reserved address space configured to handle all this internal datacenter traffic.
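
For concreteness, a minimal sketch of the dedicated-bridge approach on a single host; the bridge name br-internal and the 10.200.0.0/24 range are made up here, and Docker's -b flag points the daemon at it instead of docker0:

  # create a dedicated bridge on a non-routable subnet (names/addresses hypothetical)
  ip link add name br-internal type bridge
  ip addr add 10.200.0.1/24 dev br-internal
  ip link set br-internal up
  # start the Docker daemon against that bridge instead of docker0
  docker -d -b br-internal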


That certainly seems preferable, but I guess "localhost" has the advantage of always being there (which is what you want for a service discovery bootstrap mechanism).

I believe Docker is guaranteed to have the docker0 interface[1] available, but perhaps the proposed mechanism doesn't want to tie itself to Docker so tightly?

[1] http://docs.docker.io/en/latest/use/networking/


Actually, Docker doesn't always have docker0; you can configure it to use a different bridge.

On my setup, for instance, I use br0 and share the same bridge for containers and VMs.


Yeah, ok: Docker defaults to having docker0. I guess this conversation kinda encapsulates the problem...


It is an interesting proposal.

It starts to fall over when you have, for example, a MySQL slave that needs to connect to a MySQL master. The slave itself is listening on 3306, but now it's trying to connect to the master at localhost:3306.

I would strongly recommend that the CoreOS team reach out to ARIN (the regional Internet registry for North America) to get a special-use /24 assigned for these "magic addresses" and not hijack 127.0.0.0/8.


Or just register your own private IPv6 /48 (a ULA prefix) here https://www.sixxs.net/tools/grh/ula/ and use that.
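
For reference, RFC 4193 ULAs are just fd00::/8 plus 40 random bits, so you can also generate a prefix yourself; a quick bash sketch (the resulting prefix is random, of course):

  # generate 5 random bytes and format them as an fdXX:XXXX:XXXX::/48 ULA prefix
  R=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
  echo "fd${R:0:2}:${R:2:4}:${R:6:4}::/48"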


You'd actually set it all up as you would if everything were on your localhost or dev environment. Meaning you'd need your master on 3306, slave1 on 3307, slave2 on 3308, etc. You'd still need to set up a configuration for each of these services, pointing to one another, but the configuration would be fixed.
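
A hedged sketch of what that fixed configuration might look like, assuming the proxy exposes the master on localhost:3306 and a slave listens on 3307 (ports and the 'repl'/'secret' credentials are placeholders):

  # on the slave (reachable at 127.0.0.1:3307), point replication at the
  # master's fixed local address
  mysql -h 127.0.0.1 -P 3307 -u root -e "
    CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3306,
      MASTER_USER='repl', MASTER_PASSWORD='secret';
    START SLAVE;"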


That seems sloppy when you could just stick to standard port assignments and use special-use addresses. If you'd like help getting special-use addresses, I'd be happy to help where I can.

At the minimum, hijack 255.255.255.255 instead.


OK. We'd love to get your help getting a special use address: alex.polvi@coreos.com


This is true if the application needs to talk to the slaves directly. If the application doesn't care, then you could do the smarts in the proxy layer underneath the application. I see three scenarios:

* Stateless or transparent master/master backend

  Example: Memcached cluster

  Use load balancing in the proxy layer

* Failover backend with failover on the server side

  Example: MySQL master/slave

  Use failover logic in the proxy layer (see the sketch after this list)

* Failover backend with failover on the client side

  Example: HA RabbitMQ cluster

  The above suggestion from polvi is needed
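
For the second scenario, a minimal sketch of failover in the proxy layer using HAProxy (addresses and names are hypothetical; the proxy binds the well-known port on loopback and only sends traffic to the slave if the master is down):

  # haproxy.cfg fragment
  listen mysql
    bind 127.0.0.1:3306
    mode tcp
    server master 10.0.0.10:3306 check
    server slave  10.0.0.11:3306 check backup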


I don't know how much it helps, but you can just listen on other IPs on the loopback network, with no need to plumb anything, e.g.

  nc -l 127.0.0.2 9999
Then "all" you need to something to map the 127.0.0.n against the docker instance. This even works on windows by the way. Just start up that second tomcat or whatever with the next 127.0.0.n address and enjoy.


I agree that hijacking 127.0.0.1 is not a good idea, but I don't see any issue with picking something else in 127.0.0.0/8. In fact, it seems like a perfect use case for that.


This is some pretty interesting stuff. I've been working on something similar in my spare time. The cost of running everything through a proxy can be mitigated by having the proxy do other smart things like load balancing and/or autoscaling.


"localhost" seems like a really bad name for this when in reality, the socket is anything but local. Instead of "localhost:3306" how about something like "docker:3306"?


  nslookup docker 127.0.0.1

localhost is but a name, which happens to be tied to ::1 or 127.0.0.1. I agree, though, that it should be outside the default 127.0.0.1 address; it could be anywhere in the 127.0.1.x range, for example.
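
E.g., a one-line /etc/hosts entry gives such an address a non-"localhost" name (the name and the 127.0.1.x address are just examples):

  # /etc/hosts
  127.0.1.1   docker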

Or, even better, use an IPv6 ULA prefix instead.


How different is this from Docker link containers[1] for service discovery?

It seems to me like this is a network layer proposal, while link containers are more about name based discovery at the Docker layer.

[1] http://docs.docker.io/en/latest/use/working_with_links_names...


Docker links are only valid on a single machine. This proposal operates at the cluster level and allows for much higher availability for the services.

edit: I'm wrong.


That's incorrect. Everything that makes this proposal suitable for clustering also works for links with the ambassador pattern. The difference is in how the local discovery is implemented (retrieving a local IP address and port vs. a hardcoded loopback IP and port).
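
For comparison, the ambassador pattern from the Docker docs looks roughly like this (the remote address and the app image name are illustrative):

  # local ambassador forwards to the real redis on another host
  docker run -d --name redis_ambassador --expose 6379 \
    -e REDIS_PORT_6379_TCP=tcp://10.0.0.20:6379 svendowideit/ambassador
  # the app links to the ambassador as if redis were local
  docker run -d --link redis_ambassador:redis myapp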


This is a great step forward.

But I am still hoping for a local OpenFlow-compatible software switch that can exploit network namespaces as a local optimization. That way you can leave your cluster's networking setup, from the network edge to the Docker container, to a centralized controller.


But OpenFlow (or SDN more broadly) doesn't really help with the service discovery layer, does it?


It sounds like you could do this with OVS.
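
A rough sketch of wiring that up with Open vSwitch, assuming a hypothetical controller at 10.0.0.1 and a veth endpoint already moved into the container's namespace:

  # create an OpenFlow-controlled switch and hand it to a central controller
  ovs-vsctl add-br ovsbr0
  ovs-vsctl set-controller ovsbr0 tcp:10.0.0.1:6633
  # attach a container's veth endpoint to the switch
  ovs-vsctl add-port ovsbr0 veth-container0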



