
Linkerd-tcp: A lightweight, service-discovery-aware, TLS-ing TCP load balancer - williamallthing
https://blog.buoyant.io/2017/03/29/introducing-linkerd-tcp/
======
erickt
This is great news! Congrats on the launch. How is the performance compared to
the other Level 3/4 load balancers, like HAProxy, NginX, AWS ELB, and GCE's
Cloud Load Balancer?

disclosure: rust core team member

~~~
stevej_buoyant
So far, latency has been comparable to other native load balancers: we've
built performance profiles for nginx proxy_pass and haproxy to compare
against, although as we add features our numbers are starting to come down a
bit.

We're still building our collection of tools for tracking down performance
issues like lock contention, allocation hotspots, etc. I've had good luck
finding CPU hotspots with Linux perf and valgrind, but less luck tracking down
mutex holders and allocation hotspots. Any advice is welcome!

~~~
samsk
Shameless plug: for memory allocation tracking you can use
[https://github.com/samsk/log-malloc2](https://github.com/samsk/log-malloc2)

------
politician
This is extremely exciting! I love the linkerd concept, but couldn't justify
bringing the JVM into our architecture in the major way that a full-scale
linkerd rollout would entail. Next step, namerd in Rust?

------
caleblloyd
Is linkerd-tcp able to preserve Source IP when doing TCP load balancing? I was
under the impression that this had to be done at the kernel level with
something such as IPTables. Or does linkerd-tcp have a kernel module that
allows it to achieve this functionality?

~~~
stevej_buoyant
We do not preserve source IP currently.

------
moondev
Well, this looks exciting! So impressed with the quality of stuff coming out
of the CNCF.

I'm evaluating linkerd in our k8s clusters right now so it will be interesting
to see how this fits in. Is it designed to drop in for ingress like
nginx/traefik works?

~~~
olix0r
linkerd is a router and not a "web server", so there may be web-servery
features that you still want from nginx that don't belong in linkerd.
However, linkerd should be useful as an ingress server. To this end, we've
just recently added support for the kubernetes Ingress API, which is available
in 1.0-RC1.
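As a sketch, the Ingress resources involved look like any other Ingress
resource; the hostname and service names below are made up:

```yaml
# Hypothetical Ingress resource; linkerd, acting as the ingress
# controller, routes requests for this host to the named service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-svc
          servicePort: 80
```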

------
evanweaver
How did you find the move from Scala to Rust? Any surprises?

Does Tokio have similar performance to Finagle/Netty?

~~~
stevej_buoyant
It's tough to compare the performance of the two systems because linkerd-tcp
is currently a TCP proxy, not an HTTP proxy. That said, the resource
footprint has been much lower than a finagle service's. My first prototype,
which had no fancy features like load balancing or metrics, was able to proxy
a gigabit's worth of traffic in less than a core on an older Ivy Bridge
machine and used about 10 MB of RSS.

The biggest challenge has been learning to use the ownership system properly.
It took me a few frustrating weeks to learn a new set of patterns and
techniques for structuring code in a Rust-friendly method. Reading through the
tokio code and example projects was really helpful in that regard.

------
sandGorgon
This is very cool.

Are you working with the k8s teams on the sidecar spec, or with whichever SIG
is working on unifying the Ingress vs. Service debate?

It would be cool to have a k8s distribution with linkerd doing the Ingress and
proxying.

~~~
josephjacks
My team at Apprenda built the first K8s distro/toolkit (0) with support for
Linkerd last year.

0:
[https://github.com/apprenda/kismatic/blob/master/docs/LINKER...](https://github.com/apprenda/kismatic/blob/master/docs/LINKERD.md)

------
brew-hacker
Love the linkerd project/community. Can't wait to play with this on top of the
other things.

------
nullnilvoid
This is awesome. We are using Nginx + HAProxy here. A new option is always
welcome.

~~~
cestith
We sometimes proxy to another DC across the Internet and haven't vetted
HAProxy's SSL support thoroughly. So we're running Nginx to terminate SSL,
then to HAProxy, then to an stunnel instance per backend server. To get
keepalives working and increase the throughput of the solution we've worked a
bit on the Nginx config and put HAProxy into tunnel mode. We don't rewrite any
headers or anything of that nature at the HAProxy link of this chain, so we
don't need anything more. With proxying through all that, our page load times
are basically indistinguishable from hitting a server directly unless it goes
to the far data center. All the coordination is handled by a Puppet module so
the complexity isn't that scary.
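For reference, the HAProxy hop in a chain like that is only a few lines of
config. This is a hypothetical sketch (addresses, ports, and names invented),
not our production file:

```
defaults
    mode http
    option http-tunnel   # after the first request, pass bytes through untouched
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend from_nginx
    bind 127.0.0.1:8080              # nginx has already terminated SSL
    default_backend stunnel_hops

backend stunnel_hops
    balance roundrobin
    server app1 127.0.0.1:4431 check # stunnel to backend server 1
    server app2 127.0.0.1:4432 check # stunnel to backend server 2
```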

If Linkerd-tcp can top that, or even come close with all the automation and
integration in its ecosystem, it's a definite reason to take a hard look at
it. Getting the Prometheus and Grafana output really simply is a great
benefit.

