
Jumpers and the Software-Defined Localhost - vishvananda
http://coreos.com/blog/jumpers-and-the-software-defined-localhost/
======
WestCoastJustin
This is an interesting idea. It might be useful to have a high-level
architecture diagram to illustrate what is happening between the containers.

A couple issues:

1. Using localhost seems like a bad idea (I have the expectation that traffic
is local to the instance if using localhost).

2. Managing more than a couple of instances of this manually will be
unmaintainable. There will need to be some controller logic happening.

Maybe I am missing something, but why not add a second dedicated virtual
ethernet adapter where all your containers can talk (works across containers
and across servers)? This is traditionally how you would handle something like
this. We have dedicated nonroutable reserved address space configured to
handle all this internal datacenter traffic.
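The traditional setup described here can be sketched on a Linux host with iproute2 (a configuration sketch only: the bridge name `br1` and the 10.10.0.0/24 range are illustrative assumptions, and the commands need root):

```shell
# Create a dedicated bridge for internal container/VM traffic
# on reserved, non-routable address space (br1 and 10.10.0.0/24
# are illustrative, not from the thread).
ip link add name br1 type bridge
ip addr add 10.10.0.1/24 dev br1
ip link set br1 up

# Attach one end of a veth pair to the bridge; the other end
# would be moved into a container's network namespace.
ip link add veth0 type veth peer name veth1
ip link set veth0 master br1
ip link set veth0 up
```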

~~~
nl
That certainly seems preferable, but I guess "localhost" has the advantage of
always being there (which is what you want for a service discovery bootstrap
mechanism).

I believe Docker is guaranteed to have the docker0 interface[1] available, but
perhaps the proposed mechanism doesn't want to tie itself to Docker so
tightly?

[1]
[http://docs.docker.io/en/latest/use/networking/](http://docs.docker.io/en/latest/use/networking/)

~~~
jpetazzo
Actually, Docker doesn't always have docker0; you can configure it to use a
different bridge.

On my setup, for instance, I use br0 and share the same bridge for containers
and VMs.
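Concretely, that looks something like the following (a sketch assuming a Docker daemon of this era, where `-b` selects the bridge; br0 must already exist and have an address):

```shell
# Create and configure br0 separately, then point the daemon at
# it instead of letting it create docker0 (needs root).
docker -d -b br0
```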

~~~
nl
Yeah, ok - Docker _defaults_ to having docker0. I guess this conversation
kinda encapsulates the problem...

------
dsl
It is an interesting proposal.

It starts to fall over when you have, for example, a MySQL slave that needs to
connect to a MySQL master. The slave is itself listening on 3306, but now it
is trying to connect to localhost:3306.

I would strongly recommend that the CoreOS team reach out to ARIN (a regional
Internet registry) to get a special-use /24 assigned for these "magic
addresses" rather than hijacking 127.0.0.0/8.

~~~
polvi
You'd actually set it all up as you would if everything were on your localhost
or dev environment, meaning you'd need your master on 3306, slave1 on 3307,
slave2 on 3308, etc. You'd still need to set up a configuration for each of
these services, pointing them to one another, but the configuration would be
fixed.
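A sketch of what that fixed configuration might look like for the MySQL case (file contents, ports beyond the scheme above, and the replication settings are illustrative placeholders, not from the thread):

```shell
# Master listens on the standard port; each slave gets its own.
cat > master.cnf <<'EOF'
[mysqld]
port      = 3306
server-id = 1
log-bin   = mysql-bin
EOF

cat > slave1.cnf <<'EOF'
[mysqld]
port      = 3307
server-id = 2
EOF

# The slave would then be pointed at "localhost:3306", which the
# proposal routes to the real master wherever it runs, e.g.:
#   CHANGE MASTER TO MASTER_HOST='localhost', MASTER_PORT=3306, ...
```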

~~~
dsl
That seems sloppy when you could just stick to standard port assignments and
use special-use addresses. If you'd like help getting special-use addresses,
I'd be happy to help where I can.

At a minimum, hijack 255.255.255.255 instead.

~~~
polvi
OK. We'd love to get your help getting a special use address:
alex.polvi@coreos.com

------
vishvananda
This is some pretty interesting stuff. I've been working on something similar
in my spare time. The cost of running everything through a proxy can be
mitigated by having the proxy do other smart things like load balancing and/or
autoscaling.

------
skybrian
"localhost" seems like a really bad name for this when, in reality, the socket
is anything but local. Instead of "localhost:3306", how about something like
"docker:3306"?

~~~
X-Istence
nslookup docker 127.0.0.1

localhost is just a name, which happens to be tied to ::1 or 127.0.0.1. I
agree, though, that it should live outside the default 127.0.0.1 address; it
could be anywhere in the 127.0.1.x range, for example.

Or, even better, use an IPv6 ULA prefix instead.
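One way to sketch that: since all of 127.0.0.0/8 is loopback on Linux, a hosts entry can tie a dedicated name to a non-default loopback address, or to an IPv6 ULA (fd00::/8). The name and addresses below are illustrative, and in practice the entries would go in /etc/hosts:

```shell
# Illustrative entries (written to a scratch file here; they
# would normally be appended to /etc/hosts).
cat >> hosts.example <<'EOF'
127.0.1.1          docker
fd12:3456:789a::1  docker
EOF
```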

------
nl
How different is this from Docker link containers[1] for service discovery?

It seems to me like this is a network layer proposal, while link containers
are more about name based discovery at the Docker layer.

[1]
[http://docs.docker.io/en/latest/use/working_with_links_names...](http://docs.docker.io/en/latest/use/working_with_links_names/)

~~~
robszumski
Docker links are only valid on a single machine. This proposal operates at the
cluster level and allows for much higher availability for the services.

edit: I'm wrong.

~~~
shykes
That's incorrect. Everything that makes this proposal suitable for clustering
also works for links with the ambassador pattern. The difference is in how the
local discovery is implemented (retrieving a local IP address and port vs. a
hardcoded loopback IP and port).
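A rough sketch of the ambassador pattern with the link syntax of the era (the image names and `host-a` are placeholders; `--link`, `-p`, `-e`, and `--name` are real Docker flags):

```shell
# Host A: the real service, plus an ambassador that publishes it.
docker run -d --name redis redis-image            # placeholder image
docker run -d --link redis:redis -p 6379:6379 \
    --name redis-ambassador ambassador-image      # placeholder image

# Host B: a local ambassador forwards to host A; the app links to
# it exactly as it would link to a local redis container, and
# discovers the address via the link's environment variables.
docker run -d --name redis-ambassador \
    -e REDIS_PORT_6379_TCP=tcp://host-a:6379 ambassador-image
docker run -d --link redis-ambassador:redis --name app app-image
```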

------
jared314
This is a great step forward.

But I am still hoping for a local OpenFlow-compatible software switch that can
exploit network namespaces as a local optimization. That way you could leave
your cluster's networking setup, from network edge to Docker container, to a
centralized controller.

~~~
nl
But OpenFlow (or SDN more broadly) doesn't really help with the service
discovery layer, does it?

