
NGINX open sources TCP load balancing - realityking
http://hg.nginx.org/nginx/rev/61d7ae76647d
======
fasteo
Many installations would go from haproxy->nginx to nginx->nginx. Only having
to support a single product will make many devops happy.

In the same vein, haproxy is adding Lua support[1], which has been available
in nginx - via openresty[2] - since 2011, and nginx core is doing the same
with JavaScript[3].

Interesting times around haproxy and nginx.

[1] [http://blog.haproxy.com/2015/03/12/haproxy-1-6-dev1-and-
lua/](http://blog.haproxy.com/2015/03/12/haproxy-1-6-dev1-and-lua/)

[2] [http://www.openresty.org](http://www.openresty.org)

[3] [http://www.infoworld.com/article/2838008/javascript/nginx-
ha...](http://www.infoworld.com/article/2838008/javascript/nginx-has-big-
plans-for-javascript.html)
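
For the curious, embedding Lua via openresty looks roughly like this (a
minimal sketch; the location and response body are made up for illustration,
and it assumes an OpenResty build with the ngx_lua module):

```nginx
# Illustrative only: run Lua inline for a request in an OpenResty build.
location /hello {
    content_by_lua_block {
        ngx.say("hello from Lua inside nginx")
    }
}
```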

~~~
amenod
Not sure about that... HAproxy is a proven technology (very reliable and a joy
to use at that) in this field while Nginx is a newcomer and needs to establish
its credibility first. I personally wouldn't use such a technology as a load
balancer until it is properly battle-tested. Also, I can't see much of an
advantage over (proven) HAproxy - am I missing something?

As for supporting a single product, I don't see the point of that. Using Nginx
for load balancing will probably be much different than using Nginx as a web
server, so the learning curve is similar.

Not that I don't welcome competition, I just don't see a real need in this
space.

EDIT: btw, the Lua thing was an April Fool's joke...

EDIT 2: no it wasn't, my mistake. I was surprised by this, so I checked the
page and jumped the gun when I saw "April 1st" on
[http://www.haproxy.org/news.html](http://www.haproxy.org/news.html). Sorry
about that...

~~~
derefr
> As for supporting a single product, I don't see the point of that.

It's not about configuration; it's about security. Fewer products in your
stack means fewer things to patch. Rather than updating nginx some times and
haproxy other times, you just update nginx across all your machines (both web
servers and load balancers), and you're done. This also gives you more time
with which to vet any given nginx update.

~~~
cbsmith
> It's not about configuration; it's about security. Fewer products in your
> stack means fewer things to patch.

Kind of the reverse of the defense-in-depth principle eh? ;-)

~~~
derefr
Defense-in-depth doesn't work very well for infrastructure software packages:
many projects share the same libraries with the same vulnerabilities (e.g.
OpenSSL) but still have to be updated with independent package updates.

A shared-library vulnerability means both Nginx and HAProxy get broken in
their own ways, which is _worse_, I think, than just having your whole stack
rely on one or the other, and having that one break - it's more similar to
having two independent vulnerabilities arise simultaneously.

~~~
cbsmith
You're only going to do TLS encrypt/decrypt in one place, so in that
particular case... something is wrong.

However, the scenario you describe is one where you would likely NOT be doing
defense in depth, because you'd be using the same library to handle a vital
piece of your security infrastructure.

Regardless, when a shared library is updated for security, you _don't_ need
to apply updates to packages using the shared library. That's kind of the
point. The only exception is when the flaw is in the interface to the library.

The win derives entirely from having two independent vulnerabilities. Since
the systems are broken in their own ways, it isn't sufficient to find a way
to exploit one of them (which would work great for attacking a stack that
uses both). You have to find a way to exploit each, and you have to find a
way to connect the two so you can get all the way through.

------
warpech
The blog post tells more and has some nice diagrams:
[http://nginx.com/blog/nginx-plus-r6-released/](http://nginx.com/blog/nginx-
plus-r6-released/)

~~~
coldcode
It's NGINX+, not the open source version.

~~~
theGimp
If you click on the original link, you'll notice it says "Port from Nginx+"

~~~
coldcode
When I commented the link didn't come up.

------
virtualSatai
I got a 502 when visiting this url, I think it's just irony smiling at me:
[http://i.imgur.com/q3n8PpZ.png](http://i.imgur.com/q3n8PpZ.png)

~~~
jimjag
Same here... heh heh heh

~~~
bennylope
Not sure what's so ironic about this. That's the Mercurial server that's not
responding.

~~~
mattdeboard
Yes, the Mercurial server being mercurial isn't ironic, it's coincidental

~~~
6chars
It would be ironic if the Mercurial server were saturnine though.

------
IMTDb
Does that mean that I can now put NGINX in front of a cluster of TCP
(non-HTTP) servers and get NGINX to cleverly load balance the incoming
requests to the individual nodes?

~~~
jfroma
> load balance the incoming requests to the individual nodes

Correct me if I am wrong, but I think this is actually incorrect, because
there is no concept of a "request" at the TCP level. If I understand
correctly, it will load balance "connections" instead.
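
For what it's worth, the new stream module's config mirrors the http one; a
minimal sketch (addresses and ports made up):

```nginx
# TCP load balancing: connections, not requests, are distributed.
stream {
    upstream backend_tcp {
        server 10.0.0.1:12345;
        server 10.0.0.2:12345;
    }
    server {
        listen 12345;
        proxy_pass backend_tcp;
    }
}
```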

------
mryan
The NGINX Plus docs on TCP Load Balancing: [http://nginx.com/resources/admin-
guide/tcp-load-balancing/](http://nginx.com/resources/admin-guide/tcp-load-
balancing/)

Something to read while we wait for the announcement page to come back up :-)

~~~
shawabawa3
Just FYI, it's not an announcement page, it's a link to the source code
commit (which probably explains why the page is down - it wasn't expected to
get high traffic).

------
cbsmith
I didn't see anything about proxy protocol support, which is kind of nice with
TCP load balancing... [http://www.haproxy.org/download/1.5/doc/proxy-
protocol.txt](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt)

~~~
jzelinskie
I agree. nginx already supports forwarding from proxy protocol[0] via the
http_realip module; time to go full circle.

[0]:
[http://nginx.org/en/docs/http/ngx_http_realip_module.html](http://nginx.org/en/docs/http/ngx_http_realip_module.html)
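
For reference, accepting the PROXY protocol on the HTTP side looks something
like this today (a sketch; the upstream name and trusted address range are
made up):

```nginx
# Accept PROXY protocol from an upstream balancer (e.g. an ELB) and
# recover the real client IP via the http_realip module.
server {
    listen 80 proxy_protocol;
    set_real_ip_from 10.0.0.0/8;    # trust the balancer's address range
    real_ip_header proxy_protocol;  # take the client IP from the PROXY header
    location / {
        proxy_pass http://app_backend;
    }
}
```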

~~~
cbsmith
I've had problems getting that module to work properly with AWS ELB (though
I'd kind of assumed the problem was with ELB), so I'm not sure how solid the
support is even for that. It'd be nice to test it against nginx itself as a
baseline.

~~~
jzelinskie
We use proxy protocol ELB -> nginx in production at CoreOS for Quay.io, but we
use the Tengine 2.1.0 fork of nginx for some other patches.

[http://tengine.taobao.org/](http://tengine.taobao.org/)

------
toomuchtodo
Can someone explain how this is superior to HAproxy?

~~~
lobster_johnson
HAProxy's primary feature is HTTP/HTTPS load balancing. This new feature
competes only with HAProxy's TCP load balancing support.

Note that Nginx already has a simple proxy built in that does very basic HTTP
load balancing. HAProxy's is vastly superior to Nginx's in that it supports a
sophisticated set of filters ("ACLs"), transformations (eg., header
rewriting), queue behaviours (eg., queue limits, backup backends, health
checks, retries) and proxy-specific request logging.

A big difference is that HAProxy's main balancing algorithm is "fair", in that
traffic is distributed evenly among target backends, whereas Nginx's load
balancing is purely round-robin (there is a third-party fair balancing module
[1], but it's not maintained).

[1] [https://github.com/gnosek/nginx-upstream-
fair](https://github.com/gnosek/nginx-upstream-fair)

~~~
Jgrubb
Actually, Nginx has a couple of different load balancing algos that you can
pick from --
[http://nginx.org/en/docs/http/load_balancing.html](http://nginx.org/en/docs/http/load_balancing.html)
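
For reference, picking one of those algorithms is a single directive in the
upstream block (hostnames made up):

```nginx
upstream app {
    least_conn;                # alternative: ip_hash; default is round-robin
    server srv1.example.com;
    server srv2.example.com weight=2;  # weight biases whichever algorithm is used
}
```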

~~~
lobster_johnson
I wasn't aware of that. Thanks.

------
simonplus
[http://trac.nginx.org/nginx/changeset/61d7ae76647d/nginx](http://trac.nginx.org/nginx/changeset/61d7ae76647d/nginx)

------
fideloper
That's neat!

HAProxy has been able to do SSL termination OR pass-thru. Its ability to do
TCP load balancing allows it to do SSL pass-thru, where SSL connections are
"passed through" to other servers (so the web nodes decrypt the SSL
connection, rather than the load balancer). This is a good use case for those
who prefer or require data to stay encrypted all the way to the web node
(although it's not the only way to do it).

TCP load balancing is neat for doing things like load balancing MySQL
connections, which aren't HTTP (although that's not necessarily recommended
according to some things I've read).

I believe, but can't find the sources, that Nginx can be as efficient a load
balancer as HAProxy. I know I for one would prefer to use Nginx over HAProxy
to keep my stack simpler (same technologies throughout), although HAProxy may
have more advanced balancing algorithms and some more power around its TCP
socket "API" for adding/removing nodes dynamically. (I think Nginx Plus can
already do some of that.)

Would love to hear the opinions of those with more experience/knowledge on the
differences between the two!
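
A rough sketch of SSL pass-thru with the new stream module, assuming the web
nodes terminate TLS themselves (addresses made up):

```nginx
stream {
    upstream https_nodes {
        server 10.0.0.1:443;
        server 10.0.0.2:443;
    }
    server {
        listen 443;
        proxy_pass https_nodes;  # TLS bytes pass through untouched
    }
}
```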

~~~
ibakirov
Yes, you're right, both are good enough, and everyone wants to simplify their
backend schemas...

I want to note a big difference between haproxy and nginx: first of all,
nginx is a web server (an HTTP server) that can also be used as a proxy/load
balancer, whereas haproxy is a pure load balancer. Enterprise bare-metal and
hardware appliances for proxying/load balancing are built on top of haproxy.

nginx is widespread because of its usage as a good minimalistic web server.

------
novaleaf
does this include the dynamic reconfiguration feature?

[http://nginx.com/resources/admin-guide/tcp-load-
balancing/#u...](http://nginx.com/resources/admin-guide/tcp-load-
balancing/#upstream_conf)

------
phildougherty
This is great. Perfect for when one doesn't want to deal with running a poorly
supported 3rd party module in nginx. Prior to this, the only other easy
option was HAProxy. I'm happy.

------
smwht
Anyone know if the nginx TCP load balancing supports the PROXY protocol?
Doesn't appear to, which is unfortunate.

~~~
sarahnovotny
It doesn't at this stage. That is in the plan, but there are other features
we'd like to implement first.

~~~
smwht
Thanks. For reference, the use case is to distribute SSL negotiation without
losing access to client IP addresses.

------
adrenalinup
Why would somebody need a TCP load balancer in a web server?

Is there a use case where having the TCP load balancer in the web server
makes a lot of sense?

Integrating too many features into a single piece of software can be risky,
as it may compromise simplicity and the UNIX way: one tool for one job.

~~~
hbz
[http://en.wikipedia.org/wiki/WebSocket](http://en.wikipedia.org/wiki/WebSocket)
are a completely legitimate use.

~~~
wtarreau
WebSocket works over HTTP, not TCP. A properly implemented HTTP stack will
have no problem passing WebSocket through to the next server. Some
non-compliant HTTP stacks still experience trouble with it, though.

~~~
hbz
Technically incorrect. WebSockets handshake over HTTP and then "work" over
TCP. I'm actually curious which HTTP stacks are non-compliant and what you
mean by that.

As to my original comment, you can probably get by without having full TCP
load balancing.

------
based2
[https://www.varnish-cache.org/trac/wiki/LoadBalancing](https://www.varnish-
cache.org/trac/wiki/LoadBalancing)

------
DonnyV
Not really sure how they can call this project open source anymore. Every new
feature is now tied to their subscription service. You're better off with
haproxy.

------
justizin
Great to see functionality migrating from Plus to FOSS!

------
ibakirov
haproxy is good enough with its:

- full stats, rather than OS nginx's stub_status (full stats are only in
nginx plus)

- more load balancing mechanisms than OS nginx (full support, e.g. for sticky
sessions, needs nginx plus)

for me, haproxy comes first and then nginx

I do welcome open-sourcing parts of nginx; keep going that way

------
oimaz
can this loadbalance redis and memcache?

~~~
elementai
Don't know for Nginx yet, but one can balance with HAProxy, e.g. by creating
2 backends for reads and writes respectively.

[http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-
he...](http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-
check/)
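
The linked post's approach, roughly: a tcp-check sequence that only keeps
servers answering as the Redis master (a sketch from memory; see the post for
the canonical version):

```haproxy
backend redis_master
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 10.0.0.1:6379 check inter 1s
    server redis2 10.0.0.2:6379 check inter 1s
```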

------
chubs
Reading through the code, it's extremely tidy. Very confidence inspiring :)

------
MichaelGG
Maybe they'll bring in the health checks for HTTP balancing, next :).

~~~
CrLf
It already supports it:
[http://nginx.org/en/docs/http/ngx_http_upstream_module.html#...](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#health_check)

Or you're referring to something else?

~~~
MichaelGG
" This directive is available as part of our commercial subscription. "

------
emcrazyone
huh, I thought all of NGINX was open source and so I'm confused by the title
"NGINX open sources TCP load balancing."

Just a bad title or am I missing something?

~~~
jffry
A bit under two years ago, NGINX announced NGINX+ [1], with an open source
"core" and paid-for extras [2]

[1]:
[https://news.ycombinator.com/item?id=6255592](https://news.ycombinator.com/item?id=6255592)

[2]: [http://nginx.com/products/feature-
matrix/](http://nginx.com/products/feature-matrix/)

------
andrewpe
Perfect timing, at least for my company.

~~~
takeda
Slightly disappointing though, it looks like it is just TCP and no health
checks.

------
dschiptsov
This is, perhaps, a canonical example of how management's attempt to monetize
an open source project will cause sub-optimal results both in code quality and
profits.

New features should be developed and tested in the open version, so that
feedback, testing, patches and even unexpected new improvements from highly
skilled enthusiasts get incorporated much more quickly than any closed team
with QA could manage (look at the Linux kernel).

We have seen too many examples of "acquiring" open source projects to
monetize their user base (how I hate that idiotic MBA slang), which then
became stagnant - from MySQL to Xen, you name it.

I wonder what Mr. Sysoev is writing these days?)

~~~
IgorPartola
Not that I don't want me some more nginx features, but how will this work
exactly? Are you suggesting these features, once developed, become closed
source? Honestly, I am not sure how nginx could be profitable long term. It is
so good that you don't need paid support or whatever the Plus version offers.

~~~
gizzlon
Guess it would work more like Fedora -> Redhat Enterprise Linux. You might be
right about the profits, I have no idea..

The "open core" model is horrible IMO. It pits the open source version and the
commercial version against each other. What happens when someone would like to
contribute features already planned for the commercial version?

~~~
dschiptsov
Redhat has a really clever model, btw. Once, in the times of RHEL 3 and 4,
they tried to maintain a zillion patches against the vanilla kernel to be
"Enterprise Linux", you know, so you could run Oracle cheaply (a hot topic at
the time). Then they realized that it is much smarter to give the patches to
the mainstream, so everyone benefits.

Fedora became a test bed for new technologies, to amortize the too-rapid
changes (systemd and other crap, you know), so they could provide stable and
_compatible_ RHEL versions for existing customers.

And Redhat is a service company, not a code company.

------
ninjazee124
502, how ironic.

