
Show HN: Wormhole – A smart proxy that connects docker containers - vishvananda
https://github.com/vishvananda/wormhole
======
contingencies
I applaud the exploration, though the author appears to class everything
outside of the container as a 'communications layer' and to vaguely ascribe to
this layer the knowledge of the credentials and processes necessary to
facilitate orchestration, configuration management, etc. across multiple
containers. This is not solving the problem, only hand-waving it elsewhere.

The potential use of a container-initiated networking event (socket open) to
trigger responses at the infrastructure-level is the key architectural novelty
here. This is essentially a suggestion for 'implicit infrastructure' as
opposed to 'explicit infrastructure'; with all of the benefits and drawbacks
you would expect from such a paradigm shift.

Personally I believe that the implicit paradigm is mostly useful in specific,
relatively controlled use cases, such as service testing or behavioral
profiling prior to live deployment.

More thoughts on the same:
[http://stani.sh/walter/pfcts](http://stani.sh/walter/pfcts)

~~~
vishvananda
The point of hand-waving orchestration, configuration management, etc. is to
separate the concern of what is running from the concern of how it is
connected to other things. My hope is that container builders could simply
focus on dependencies and application code, and that the act of connecting
components together could be handled by a different layer in the system.

~~~
contingencies
Sure, it's just that there's no solution on the other side of the hand-waving
... and your portal to the other side is also a non-standard one with
limitations. As I said, I think the exploration is meaningful and the paradigm
has merit but it's hard to target general applicability with such an approach.

~~~
vishvananda
I totally agree. This is just one of the options I outline for standardizing
networking in my blog post[1] and this one probably involves a little too much
magic to be generally applicable.

It is interesting to note that most of the frameworks that are trying to solve
the rest of the problem have moved along similar lines. For example,
Kubernetes provides both the ability to launch containers in the same
namespace and a proxy. OpenShift uses iptables rewriting via GearD.

[1] [https://medium.com/@vishvananda/standard-components-not-standard-containers-c30567f23da6](https://medium.com/@vishvananda/standard-components-not-standard-containers-c30567f23da6)

~~~
contingencies
Your comments are very reasonable, but I would caution against reading much
into the current state of open source offerings. I've tried different
approaches to resolving the networking side myself and concluded that there is
no one true solution here, and therefore it must be left out of service scope
(i.e. abstracted adequately).

This is particularly the case if portability is required, because different
business-level requirements call for different service and infrastructure
topologies utilizing different technologies, not all of which actually support
any single approach. Think hypervisor-based VMs, containers, bare-metal
clusters, etc.

Therefore I believe that solutions like _docker_ and _kubernetes_ are - given
their present architectures as I understand them - critically misaligned, in
that they do not go far enough to abstract some target infrastructure-related
questions away from services. To do so would require reshuffling their APIs
significantly and is thus unlikely to happen spontaneously in the near term
without concerted effort.

More recent iterations of the approach I have been taking assume that
hypervisors, containers, jails, JVM guests, and embedded systems firmware
(e.g. Android) should all be conceptually in scope at the level of service
authors. When considering the target deployment environment, networking is
only optionally in scope, and then only optionally TCP/IP. Given this
redefined perspective, I believe that if a minimalist, process-oriented
solution developed to facilitate the SaaS/unix SOA market is not robust enough
to serve at least this range of alternative development trajectories, then it
is probably making invalid assumptions somewhere along the line. Of course,
providing so general a tool usefully, without irritating developers or
throwing up barriers to adoption, is a real challenge.

~~~
vishvananda
This is very insightful. It is very hard to produce something that provides
clear value in the microcosm (i.e. useful to developers) but is also valuable
in the macrocosm (i.e. can be deployed to all targets).

One approach is to build something that solves the small problem but keeps
enough options open to eventually grow to encompass the larger issues. I think
this is generally superior to the reverse approach, which tends to die from
lack of adoption.

I don't know if docker, kubernetes, et al. will actually get there. I agree
that they will have to reshuffle and rethink to make it happen, but I hope
that they can.

------
coderzach
This is great! I've been thinking about the problems this solves a lot, and
this seems really on the mark.

~~~
vishvananda
Thanks, author here.

I've been thinking about what we need to simplify distributed applications for
the past few years, and we really need reusable components[1]. Anything we can
do to make containers more consistent and easier to build is important.

[1] [https://medium.com/@vishvananda/standard-components-not-standard-containers-c30567f23da6](https://medium.com/@vishvananda/standard-components-not-standard-containers-c30567f23da6)

~~~
walterbell
Is the IPSEC implementation based on existing code?

~~~
vishvananda
The IPsec implementation just configures IPsec in the kernel using a Go
netlink library[1]. It is similar to how it would be accomplished using
iproute2 via `ip xfrm policy` and `ip xfrm state`.

[1]
[https://github.com/vishvananda/netlink](https://github.com/vishvananda/netlink)
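
To make that concrete, here is a minimal sketch (not wormhole's actual code;
the addresses, SPI, keys, and algorithm names below are placeholders, and a
real setup would also add the matching inbound state and policy) of adding an
ESP state and an outbound policy through that library, roughly the
programmatic equivalent of `ip xfrm state add` followed by `ip xfrm policy
add`:

```go
package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	src := net.ParseIP("10.0.0.1")
	dst := net.ParseIP("10.0.0.2")
	_, srcNet, _ := net.ParseCIDR("10.0.0.1/32")
	_, dstNet, _ := net.ParseCIDR("10.0.0.2/32")

	// Equivalent in spirit to `ip xfrm state add`: an ESP transport-mode SA
	// with illustrative algorithms and an all-zero throwaway key.
	state := &netlink.XfrmState{
		Src:   src,
		Dst:   dst,
		Proto: netlink.XFRM_PROTO_ESP,
		Mode:  netlink.XFRM_MODE_TRANSPORT,
		Spi:   0x100,
		Reqid: 1,
		Auth:  &netlink.XfrmStateAlgo{Name: "hmac(sha256)", Key: make([]byte, 32)},
		Crypt: &netlink.XfrmStateAlgo{Name: "cbc(aes)", Key: make([]byte, 32)},
	}
	if err := netlink.XfrmStateAdd(state); err != nil {
		log.Fatalf("adding xfrm state: %v", err)
	}

	// Equivalent in spirit to `ip xfrm policy add`: require the SA above for
	// outbound traffic between the two endpoints.
	policy := &netlink.XfrmPolicy{
		Src: srcNet,
		Dst: dstNet,
		Dir: netlink.XFRM_DIR_OUT,
		Tmpls: []netlink.XfrmPolicyTmpl{{
			Src:   src,
			Dst:   dst,
			Proto: netlink.XFRM_PROTO_ESP,
			Mode:  netlink.XFRM_MODE_TRANSPORT,
			Reqid: 1,
		}},
	}
	if err := netlink.XfrmPolicyAdd(policy); err != nil {
		log.Fatalf("adding xfrm policy: %v", err)
	}
}
```

Run as root (or with CAP_NET_ADMIN), the resulting entries show up under
`ip xfrm state` and `ip xfrm policy`, just as if they had been added with
iproute2.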

