* ACL rules with full support for logical if statements
* Active health checks
* End-to-end HTTP/2
* Robust logging and a dashboard with metrics
* The ability to read env variables
* Session stickiness
* DNS service discovery
These are just the things I'm aware of; there could be a lot more.
HAProxy has shown itself to perform better for certain users, such as Booking.com.
I think this is supported.
We are using NGINX with its core Stream module to receive HTTP/2 encrypted traffic and load-balance it (with the random or least_conn algorithm) to each of our backends.
Traffic stays encrypted end-to-end, and it remains HTTP/2 (because the Stream module works at the TCP level, not HTTP, so it does not care whether HTTP/2 or HTTP/1 is used).
It seems that in the ticket you mentioned, the commenter at the end is asking for exactly this. And it works well.
This is often called a 'pass-through proxy'.
The article here explains how to set it up.
We lose information about the web browser's IP address at our backend.
For privacy-enforcement reasons, we actually do not want to have it at our terminating points (our backend APIs).
And if we ever need it, I think this can be enabled with the PROXY protocol.
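For reference, a minimal sketch of the setup described above, assuming nginx is built with the stream module (all addresses and names here are placeholders, not from the original post):

```
# TCP-level pass-through load balancing: TLS and HTTP/2 go through untouched
stream {
    upstream backends {
        least_conn;              # or "random;" as mentioned above
        server 10.0.0.1:443;
        server 10.0.0.2:443;
    }

    server {
        listen 443;
        proxy_pass backends;
        # If the client IP is ever needed at the backend, the PROXY
        # protocol can forward it (the backend must understand it):
        # proxy_protocol on;
    }
}
```

Since nothing is decrypted here, routing can only use TCP-level information (or the SNI, via the stream preread module).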
With haproxy you can combine any set of H1/H2 on any side (protocol translation). It can even dynamically choose H1 or H2 depending on the negotiated ALPN the server presents, just like a browser does!
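A hedged sketch of what that protocol translation looks like in an haproxy config (the certificate path and addresses are made up):

```
frontend fe
    # Terminate TLS and accept both H2 and H1 from clients
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be

backend be
    # haproxy re-encrypts and speaks H2 or H1 per the ALPN the server negotiates
    server app1 192.168.0.10:443 ssl verify none alpn h2,http/1.1
```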
In our case, we are not un-encrypting at the load balancer, so we cannot see the HTTP headers anyway. Instead we use NGINX to load-balance based on TCP-level info.
* Admin socket for live server adds/removes
* Full header manipulation without compiling extra modules
Broadly speaking, HAProxy is a more fully-featured choice for an HTTP/TCP/UDP load balancer.
However, it is not a web server, as it lacks file-serving and caching abilities.
That is a terrifying "feature".
Turing completeness is not a feature. That "feature" allows complete emulation of other computation types, including infinitely many ways of doing something wrong or bad.
Have you read the Maglev paper out of Google? Right before it was published, one of my engineers was implementing much the same thing. He considered both HAProxy and nginx for the second layer of load balancing. There were technical reasons in either direction. There was some pressure (from me) to go with nginx, because we were already using it as our primary webserver, so we could avoid introducing another new technology for people to master.
We went with HAProxy. Why? Because when said engineer contacted them, describing what he was doing, they (here: the main HAProxy dev) engaged in discussion, helped, even included his needs in their planning. At least at the time, nginx folks just responded with that they'd talk to him after he had secured a licensing deal. The uphill battle this engineer would've had to fight in corporate politics to get licensing sorted out that early in the prototyping phase would have been rough. Last I heard, the company still had a licensing/support deal with HAProxy.
Good presales matter!
(Edit: can't spell.)
We had bought a ton of Fortinet gear to firewall and load balance for us, but in the end could never quite get it deployed. I got HAProxy set up instead and it's been amazing!
NGINX Plus, however, is $1900 per year per server. There are plenty of critical features missing from the free edition, for example the status page to see available servers, or metrics exporting for monitoring.
NGINX (see above) seems to cost $1900 per server per year. What are the costs for HAProxy Enterprise?
From all accounts, if you really need a load balancer, or even if you just need failover, HAProxy should be your default choice. It has high-availability features, monitoring, and supports TLS & SNI, HTTP/2, session replication, Lua scripting, and almost any other feature you might need. It was also designed from the ground up to be a high-performance, high-availability load balancer. (NGINX does several other things.)
Stack Exchange (the company behind Stack Overflow) uses it in front of their IIS application servers, and so do a lot of other smart people.
Also, it appears NGinx has commercial/OSS conflict of interest issues — like in recent versions all monitoring functionality was removed from the OSS distribution.
Maybe for a few edge cases, HAProxy works better, but overall, I'd pass on it.
nginx just fulfills most people's requirements for a reverse proxy, and has had solid HTTP/2 support (and other features) for way longer.
If you are using nginx and it is working well, I'd recommend against trying out HAProxy.
If it's not working well, I'd first look into fixing whatever is wrong with your setup, and only try HAProxy if someone experienced with it helps you out. HAProxy requires much more configuration tweaking than nginx (at least to gain any benefit from using it).
In 2.0 we've set a number of things by default to work better and use all the capacity with no need for tweaking. A config as simple as this will start a proxy on all CPU cores, support both H1 and H2, automatically enable round-robin load balancing, tune the maxconns (something which used to be a hassle in previous versions), and enable connection pooling and reuse by default:
listen www
    mode http
    bind :8000
    server www1 192.168.1.1:80
    server www2 192.168.1.2:80
    server www3 192.168.1.3:80
However, haproxy actively fights being compared to nginx.
There's no 101 guide to setting up haproxy as a reverse proxy for a nodejs application with separate domain names, SSL certificate configuration (I don't even know how to create the correct chain for haproxy after buying a cert from a commercial vendor), good security defaults (CORS/CORB), and docker defaults.
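For what it's worth, a bare-bones sketch of that kind of setup (every name, path and port below is hypothetical): haproxy wants the certificate, any intermediate chain, and the private key concatenated into a single PEM file.

```
# Build the combined PEM haproxy expects:
#   cat example.com.crt intermediate.crt example.com.key > /etc/haproxy/certs/example.com.pem

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https-in
    # A directory of PEMs: the right certificate is picked per domain via SNI
    bind :443 ssl crt /etc/haproxy/certs/
    use_backend app-a if { hdr(host) -i a.example.com }
    use_backend app-b if { hdr(host) -i b.example.com }

backend app-a
    server node1 127.0.0.1:3000

backend app-b
    server node2 127.0.0.1:3001
```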
As of RIGHT NOW, haproxy has not updated its official docker image, and its 6-day-old docker images are flagged by Docker Hub as having vulnerabilities (screenshot at https://imgur.com/a/SiYoZzc). So I'm a little hesitant to call this release "Cloud Native".
Latest nginx docker image is not flagged for any vulnerabilities.
1. The “official” Docker image is not maintained by HAProxy itself. “Official” refers to being blessed by Docker. See: https://github.com/docker-library/official-images#what-do-yo...
2. The vulnerability scan of Docker Hub is bogus: https://github.com/docker-library/faq#why-does-my-security-s...
3. There's a pull request created by me to switch from 2.0-rc to 2.0: https://github.com/docker-library/haproxy/pull/89. I created it immediately after learning about the release. Any further delay is caused by the Docker Official Images team.
Disclosure: I'm a community contributor to HAProxy and I help maintain the issue tracker on GitHub. I also maintain a few “official” Docker images and by that I know the process.
I'm not trying to be an ass. I've been looking forward to haproxy becoming more docker/k8s/cloud friendly. This release claims as much, but how do I deploy to k8s now?
Should everyone be compiling their own images? If haproxy is not able to support official docker images, then we are back to "let's just use nginx, they at least have official images".
And, replying to the comment this thread belongs to: this is one of the "difficulties".
Please note that I'm a community contributor. I am not employed by HAProxy Technologies and I cannot speak for the open source project in any official capacity either.
AFAIK nginx doesn't implement HTTP/2 prioritisation effectively, and this can result in responses being served in a non-optimal order.
Disclosure: I'm a community contributor to HAProxy and I help maintain the issue tracker on GitHub.
Multi-tenant hosting for customers with lots of custom domains (with SSL) with different customers on different versions, and an app with lots of legacy. In older versions, different paths are handled by different servers. Some paths use source-ip sticky. One part of the app uses websockets. Some paths are handled via S3+cache layer (offloading traffic from app servers without changing the app).
There's also a bunch of special paths to access specific servers directly to get some health metrics (from an app written without thinking about running in an auto-scaling environment). One fun thing I built handles some old SOAP requests from a defunct service running on hundreds (maybe down to dozens now?) of external systems that will retry every request forever (exponential traffic growth) if they don't get a "success" response; using Haproxy and some request capture regexes, it can return one of a dozen specially crafted hard-coded "success" responses. The date is wrong but the service doesn't care, and now a few lines of Haproxy replace a dedicated "black hole" app server we used to run.
Haproxy handles all this in a single hop for all traffic (and Haproxy itself runs in an autoscaling group). The config is complex, but still understandable.
All the variable stuff is generated by scripts at runtime, which also lets us use an admin UI to manage customer domains, versions, and automatically picks up available (deployed) application versions in the region.
Having used both, I'd say in general nginx is easier for simple things, and in many ways has more capabilities (like static hosting and authentication support). In fact we use nginx on the Haproxy servers for hosting static pages and as a caching proxy for S3. Haproxy makes simple hosting more complicated, but you get a lot of very fine-grained control over everything (e.g., it's pretty easy to do almost anything to the traffic, like changing request/response headers/paths at any point). For anything new, I'd use nginx only if I could get away with it (note: all the above may be possible in nginx now, but I'm not going to rewrite our infrastructure unless there's a very good reason).
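As an illustration of that fine-grained control, this is roughly what header and path manipulation looks like in haproxy (the names below are invented for the example):

```
backend app
    # Strip a legacy prefix before the request reaches the app
    http-request set-path %[path,regsub(^/old-api,/api/v2)]
    # Add a header on the way in, strip one on the way out
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-response del-header Server
    server app1 10.0.0.5:8080
```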
Other than not logging to stdout (which 2.0 seems to fix), that's the only thing that bugs me about HAproxy.
HAProxy is probably the best proxy server I have ever had to deal with. Its performance is exceptional, it does not interfere with L7 data unless you tell it to, and it's extremely straightforward to configure by reading the manual.
I use HAProxy as a frontend proxy to distribute requests into my intranet, it handles about 3-50 requests per second on a very dinky little VM (1 CPU core, 512/1024 CPU time allocation, 512MB ram) with a lot of CPU still left over.
Normally you choose between something like Apache and nginx, where nginx is a bit easier to configure and write modules for, but the former has more functionality and third party support.
You choose between haproxy and something like Varnish, where the former is a bit more featureful on the routing part and the latter has more focus on the caching part.
It is not uncommon to use both.
These days I think they've grown a lot closer together in capabilities. nginx remains a true web server, where HAProxy is not.
NGINX historically was a web server (an excellent one) which has evolved towards proxying and a bit of load balancing. But in terms of LB features it lacks a lot, just like you wouldn't try to cross-dress haproxy into a web server in any way. It's true that for many quite simple setups nginx is often enough and saves the admin from having to learn another tool. But when you start to handle tens of thousands of domains, need to perform DDoS protection, handle thousands of servers, perform inter-DC state synchronization, stream tens of Gbps of traffic, or perform advanced actions on health-check status changes, it's not well suited anymore. It's still the best web server I know, and many hosting infrastructures combine haproxy+varnish+nginx together, taking the best of each and delivering unrivaled quality of service. The beauty here is that these 3 best-in-category components are all free to use, so there's no reason to have to choose between one or the other: just use all 3 and be happy!
For more use cases look at Kong, which is almost completely built on openresty.
HAProxy is a load balancer with philosophy of "do one thing and do it well".
Perhaps things have changed with nginx, but when I tried it for load balancing it was very basic; it didn't even have a status page or health checks unless you purchased the commercial version. It only had the round-robin load-balancing method, etc.
If your goal is load balancing, haproxy is far superior; if your goal is to have a web server that hosts some static files and then redirects dynamic requests to another app, then nginx might be better.
Nginx, in comparison, is for me THE thing to use as a web server. When it comes to being a single layer away from reverse-proxying requests to apps running on an instance, or static file serving with caching rules, Nginx is purpose-built for this.
Both apps have some support for what the other does, though. For the more pedantic, it's easy to say "but x can do this too with this feature". Usability-wise, though, nginx and HAProxy are distinct. I went through the comparison very recently while testing a setup with HAProxy as a load balancer in front of multiple webservers running Nginx, which in turn sat in front of several apps running on each instance.
TL;DR - At a slightly larger scale (scale being number of instances + apps) HAProxy and Nginx are great to use together as opposed to one over the other.
Up until HAProxy 1.8, I used to have Haproxy -> NGINX caching layer -> Apache. I have since been able to remove all the NGINX servers. (For my use case; it won't work for everyone.)
At the same time, some of the HAProxy 2.0 features have already been available in Envoy and tested in production, at scale (if HAProxy provided those features, there wouldn't be a big need for Envoy). For example, Envoy is pretty extensible, has good performance and has good support for dynamic cert management (including service-to-service mutual TLS).
Envoy is also used by Istio and has a lot of infrastructure support for deploying in Kubernetes and such which HAProxy doesn't currently have.
I haven't used HAProxy in any environments other than testing, but as far as I can tell both variants behave equally. In fact, haproxy.cfg for both images is the same, they only differ in their build flags.
Correction: this requires adding -latomic there (just tested). I should mention this in the INSTALL file.
Edit: seems many still do! I thought it was dying slowly as php popularity was going down.
There are also far fewer bugs and updates to the Apache core. I rarely have to recompile anything.
I have also had many frustrating interactions with the lead developer of NGinx. There are many assumptions made and many things hard coded in the Makefile, especially as it pertains to pcre, zlib, openssl and CFLAGS, LDFLAGS, etc. Also, I can't just point to existing pcre and zlib deployments for inclusion. NGinx wants the source and to recompile the extra libraries each time.
Over the last 10 years, it probably lost half of its share. The exact figures vary with the source: according to the link below, Apache's share of the million busiest web sites went from 66% in 2011 to 32% now.
Been using apache2 for like 20+ years now. It is doable to switch to something else, but would probably require effort with various details, etc. It works well for our moderate loads, so not really urgent to change it.
There is also stuff running on mod_php/mod_python/mod_wsgi that is bound to Apache; however, these are deprecated and unstable technologies that should not be used in this decade.
I imagine it supports CGI calls too.
I looked into Haproxy, set up a bunch of rules, and fell into static-IP-management hell. Then I tried Traefik, mainly for the HTTPS auto-renewal feature, but the ability to tag docker containers with DNS regexes (so traefik knows how to reverse-proxy traffic) is a godsend.
Is there something like that in HAProxy 2.0 (HTTPS auto-renewal and container tagging)?
I also see no option for client certificate auth or TLS versions and cipher suites in the repo.
I guess it's still better to handle TLS outside of haproxy.
Further, just look at https://istlsfastyet.com/ and you'll see that haproxy, H2O and nghttpx are the only 3 implementations checking everything (and haproxy was the one that invented dynamic record sizing).
So it seems your opinion of haproxy's TLS support is not that widely shared!
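For the record, the TLS knobs mentioned above (client certificate auth, protocol versions, cipher suites) are plain config directives in haproxy itself; a sketch with placeholder paths:

```
global
    ssl-default-bind-options ssl-min-ver TLSv1.2
    ssl-default-bind-ciphers ECDHE+AESGCM:ECDHE+CHACHA20

frontend fe
    # Require clients to present a certificate signed by our CA
    bind :443 ssl crt /etc/haproxy/site.pem ca-file /etc/haproxy/client-ca.pem verify required
```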
v1 of the ingress controller does not update OCSP. That said, this is planned for a future release.
Stay tuned :)
Or you can write the controlling program in the language of your choice and run it outside as an SPOA agent. Have a look at spoa_server, which provides examples for Python and Lua.
It works quite nicely and can set ACLs dynamically.
Last I checked, only Nginx really did it right.
AFAIK it's solid.
h2o is probably the only server that has done HTTP/2 right for a long time; others are finally getting it right.
There is a hacky way to serve single static pages directly from haproxy, by creating an HTTP file (including headers) and adding it to a custom backend. This is really only useful if you have one or two pages you need to serve and don't want to run another web server. Some people use this for outage or maintenance pages.
frontend fe
    acl landing path_beg /demo
    use_backend landing if landing

backend landing
    # No servers here: every request gets a 503, which the errorfile replaces
    errorfile 503 /opt/ha/landing_page.http
    http-request set-log-level silent
# cat landing_page.http
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store

This is a static text file served directly from HAproxy 1.9
I will catch grief for showing this.
Here it is in use: 
 - https://tinyvpn.org/demo
These are two different projects. One was initially designed for the hostile edge and excels there. The other was initially designed to be used as a sidecar deep inside your infrastructure and excels there. There is obviously quite some overlap between the two, sometimes with different terminology (like "circuit breaking" in envoy, which haproxy calls "timeouts" and "queue limits"), and user demands make each of them evolve a bit in the areas where they are less good (i.e. where the other one excels). But they are still quite different beasts.