HAProxy 2.0 (haproxy.com)
389 points by guthriej 31 days ago | 117 comments

I've always used Nginx as a proxy, but I've seen HAProxy mentioned. What are some of the benefits of using HAProxy over nginx as a proxy or load balancer?

What open-source NGINX lacks that open-source HAProxy has:

* ACL rules with full support for logical if statements [1]

* active health checks

* end-to-end HTTP/2 [2]

* Robust logging or a dashboard with metrics

* The ability to read env variables

* session stickiness

* DNS service discovery [3]

These are just things I'm aware of, there could be a lot more.

HAProxy has shown itself to perform better for certain users such as Booking.com [4]

[1] https://www.nginx.com/resources/wiki/start/topics/depth/ifis... [2] https://trac.nginx.org/nginx/ticket/923 [3] https://danielparker.me/haproxy/nginx/comparison/nginx-vs-ha... [4] https://events.static.linuxfound.org/sites/events/files/slid...
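To illustrate the first point, here is a hedged sketch of the kind of ACL logic haproxy supports (backend names and the network list file are made up):

```
frontend fe_main
    bind :80
    acl is_api path_beg /api
    acl is_eu  src -f /etc/haproxy/eu_networks.lst
    use_backend be_api_eu if is_api is_eu
    use_backend be_api    if is_api !is_eu
    default_backend be_web
```

ACLs compose with implicit AND, explicit `!`, and `or`/`||`, which is what makes the rule sets expressive.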

WRT > end-to-end HTTP/2 [2]

I think this is supported.

We are using NGINX with its core Stream module to receive HTTP/2 encrypted traffic and load-balance it (with the random or least_conn algorithms) to each of our backends.

Traffic stays encrypted end-to-end, and it remains HTTP/2 (because the Stream module works at the TCP level, not HTTP, so it does not care whether http/2 or http/1 is used).

It seems that in the ticket [2] that you mentioned, the commenter at the end is asking exactly for this. And that works well.

It is often called a 'pass-through proxy'. The article here explains how to set it up.
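For reference, a minimal sketch of such a pass-through setup with the stream module (addresses are hypothetical):

```nginx
stream {
    upstream backends {
        least_conn;              # or omit for round-robin, or use "random"
        server 10.0.0.11:443;
        server 10.0.0.12:443;
    }
    server {
        listen 443;              # no "ssl" here: TLS is terminated by the backends
        proxy_pass backends;
    }
}
```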


We lose information about the web browser's IP address at our backend. For privacy-enforcement reasons, we actually do not want to have it at our terminating points (our backend APIs). And if we ever need it, I think this can be enabled with the PROXY protocol.
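Should the client IP ever be needed, the stream module can prepend a PROXY protocol header to each upstream connection; the backend then has to be configured to parse it. A sketch (address hypothetical):

```nginx
stream {
    server {
        listen 443;
        proxy_pass 10.0.0.11:443;
        proxy_protocol on;   # sends "PROXY TCP4 <client> <dest> ..." before the TLS bytes
    }
}
```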

Thus it's just a plain TCP proxy that cannot route the traffic intelligently (based on Host or path) nor cache it. By the same principle one could say that haproxy has supported end-to-end H2 since version 1.0, long before H2 even existed!

With haproxy you can combine any set of H1/H2 on any side (protocol translation). It can even dynamically choose H1 or H2 depending on the negotiated ALPN the server presents, just like a browser does!

HAProxy can proxy HTTP/2 at Layer 4 or at Layer 7, to get all the HTTP message data and perform routing based on that, etc.
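A hedged sketch of what that protocol mixing looks like in haproxy configuration (certificate paths and addresses are placeholders):

```
frontend fe
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be

backend be
    mode http
    # The frontend may receive H1 or H2; with alpn on the server line,
    # haproxy negotiates H1 or H2 with the backend the way a browser would.
    server app1 10.0.0.21:443 ssl verify required ca-file /etc/ssl/ca.pem alpn h2,http/1.1
```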

Thx. Yes, NGINX will not be able to balance HTTP/2 traffic based on HTTP headers. But HAProxy 2.0 can.

In our case, we are not un-encrypting at the load balancer, so we cannot see the HTTP headers anyway. Instead we use NGINX to load-balance based on TCP-level info.

Environment variables can be used in Nginx if you compile with Lua support or use the pre-built OpenResty distro.
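A sketch of that approach with OpenResty (variable and upstream names are made up). Note that nginx hides environment variables from worker processes unless they are whitelisted with the core `env` directive:

```nginx
env UPSTREAM_ADDR;   # keep the variable visible to worker processes

http {
    server {
        listen 80;
        location / {
            set_by_lua_block $upstream { return os.getenv("UPSTREAM_ADDR") }
            proxy_pass http://$upstream;
        }
    }
}
```

By contrast, haproxy expands "${UPSTREAM_ADDR}" in its configuration file natively.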


* Admin socket for live server adds/removes

* Full header manipulation without compiling extra modules

Broadly speaking, HAProxy is a more fully-featured choice for a HTTP/TCP/UDP load balancer.

However, it is not a web server, as it lacks file serving and caching abilities.
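For the admin-socket point, a sketch of removing a server from rotation at runtime (socket path and names are assumptions):

```
global
    stats socket /var/run/haproxy.sock mode 600 level admin

# then, from a shell:
#   echo "disable server be_web/www1"            | socat stdio /var/run/haproxy.sock
#   echo "set server be_web/www1 addr 10.0.0.99" | socat stdio /var/run/haproxy.sock
```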

> * ACL rules with full support for logical if statements [1]

That is a terrifying "feature".

Turing completeness is not a feature. That "feature" allows complete emulation of other computation types, including infinitely many ways of doing something wrong or bad.

AFAIR first-order logic is not Turing-complete.

I'll let others comment on the technical differences.

Have you read the Maglev paper out of Google? Right before it was published, one of my engineers was implementing much the same thing. They considered both HAProxy and nginx for the second layer of load balancing. There were technical reasons in either direction. There was some pressure (from me) to go with nginx because we were already using it as our primary webserver, so we could avoid introducing another new technology for people to master.

We went with HAProxy. Why? Because when said engineer contacted them, describing what he was doing, they (here: the main HAProxy dev) engaged in discussion, helped, even included his needs in their planning. At least at the time, the nginx folks just responded that they'd talk to him after he had secured a licensing deal. The uphill battle this engineer would have had to fight in corporate politics to get licensing sorted out that early in the prototyping phase would have been rough. Last I heard, the company still had a licensing/support deal with HAProxy.

Good presales matter!

(Edit: can't spell.)

I also had a great experience with HA Proxy open source support. We had some sort of config problem when we enabled H2 (IIRC, maybe it was going to 1.8 from 1.7). I did a bunch of testing of it, documented what I found and where I found it acting funny, and posted to the haproxy mailing list. A dev replied with something like "Sounds like you have this config option set wrong". Problem solved!

We had bought a ton of Fortinet gear to firewall+load balance for us, but in the end could never quite get it deployed. I got the haproxy set up instead and it's been amazing!

What are you spending with HAProxy a year now?

Can't tell you. I don't work there any more and don't remember in detail, but I wouldn't disclose that if I were still there. Let's say: a significant fraction of an engineer per year. (But note that this was a very large internet company, worth tens of billions, that had to build its own multi-level load balancer, not a startup.)

You can't spend money on a haproxy license. It's free. They make money by selling appliances (servers with haproxy pre-installed).

Nginx however is $1900 per year per server. There are plenty of critical features missing from the free edition, for example the status page to see available servers or metrics exporting for monitoring.

Just to do justice to my coworkers working on the ALOHA appliance, it's not just a "server with haproxy preinstalled" but a tight integration of haproxy plus a few management tools into a dedicated distro built from scratch and packaged as an upgradable image like you'd have on your routers or switches. The whole OS image is around 16 megabytes, kernel included, and it contains a 10 Gbps-capable anti-ddos module, a web interface, and troubleshooting tools. And of course you have root access on it and it doesn't void your support to start to hack on it (not pointing the finger at anyone, but still a little bit :-))

Also, HAProxy has brilliant minds like Willy Tarreau behind it. That as a reason is enough. We are lucky to have you around.

In fact there is a software-based Enterprise version of HAProxy that is subscription based.


NGinx (see above) seems to cost $1900 per server per year. What are the costs for HAProxy Enterprise?

I’ve never used either (just an F5 appliance), but I know HAProxy is built by a core Linux kernel contributor and has a reputation as the foremost infrastructure-grade software load balancer. In fact, a lot of hardware load balancer appliances run HAProxy under the hood.

From all accounts, if you really need a load balancer, or even if you just need failover, HAProxy should be your default choice. It has high-availability features, monitoring, supports TLS & SNI, HTTP2, session replication, Lua scripting, and almost any other feature you might need. It was also designed from the ground up to be a high-performance, high-availability load balancer. (NGinx does several other things.)

Stack Exchange (the company behind Stack Overflow) uses it in front of their IIS application servers, and so do a lot of other smart people.

Also, it appears NGinx has commercial/OSS conflict of interest issues — like in recent versions all monitoring functionality was removed from the OSS distribution.

In my previous company we used HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance-wise it is a contender for most of the usual applications people need.

Maybe for a few edge cases, HAProxy works better, but overall, I'd pass on it.

nginx just fulfills most people's requirements for a reverse proxy and has had solid HTTP/2 support (and other features) for way longer.

If you are using nginx and it is working well, I'd recommend against trying out HAProxy.

If it's not working well, I'd first look into fixing whatever is wrong with your setup, and only try HAProxy if someone experienced with it helps you out. HAProxy requires much more configuration tweaking than nginx (at least to gain any benefit from using it).

It would be great if you can explain the type of difficulties you've met. There are probably certain points that could easily be improved to satisfy users with your needs.

In 2.0 we've set a number of defaults to work better and use all the available capacity with no need for tweaking. Just the config below will start a proxy on all CPU cores, support both H1 and H2, automatically enable round-robin load-balancing, tune the maxconns (something which used to be a hassle in previous versions), and enable connection pooling and reuse by default:

     listen foo
        bind :80
        mode http
        server www1
        server www2
        server www3

It's hard to do much simpler, even with nginx.

Same here. We wanted to use haproxy from day zero because it could inject proxy protocol headers.

However, haproxy actively fights being compared to nginx.

There's no 101 guide to setting up haproxy as a reverse proxy for a nodejs application with separate domain names, SSL certificate configuration (I don't even know how to create the correct chain for haproxy after buying a certificate from a commercial vendor), good security defaults (CORS/CORB), and docker defaults.

As of RIGHT NOW, haproxy has not updated its official docker image and has 6-day-old docker images which docker hub flags as having vulnerabilities (screenshot at https://imgur.com/a/SiYoZzc). So I'm a little hesitant to call this release "Cloud Native".

Latest nginx docker image is not flagged for any vulnerabilities.

> As of RIGHT NOW, haproxy has not updated its official docker image and has 6-day-old docker images which docker hub flags as having vulnerabilities (screenshot at https://imgur.com/a/SiYoZzc). So I'm a little hesitant to call this release "Cloud Native".

1. The “official” Docker image is not maintained by HAProxy itself. “Official” refers to being blessed by Docker. See: https://github.com/docker-library/official-images#what-do-yo...

2. The vulnerability scan of Docker Hub is bogus: https://github.com/docker-library/faq#why-does-my-security-s...

3. There's a pull request created by me to switch from 2.0-rc to 2.0: https://github.com/docker-library/haproxy/pull/89. I created it immediately after learning about the release. Any further delay is caused by the Docker Official Images team.

Disclosure: I'm a community contributor to HAProxy and I help maintain the issue tracker on GitHub. I also maintain a few “official” Docker images and by that I know the process.

In which case, is there a registry of your own where you maintain docker images?

I'm not trying to be an ass. I've been looking forward to haproxy becoming more docker/k8s/cloud friendly. This release claims as much, but how do I deploy to k8s now?

Should everyone be compiling their own images? If haproxy is not able to support the official docker images, then we are back to "let's just use nginx, they at least have official images".

And, in reply to the comment this thread belongs to: this is one of the "difficulties".

The "official" HAProxy docker builds are strictly controlled by the Docker team. For builds created directly by HAProxy Technologies you can find them here: https://hub.docker.com/u/haproxytech

> in which case is there your own registry where you maintain docker images ?

Please note that I'm a community contributor. I am not employed by HAProxy Technologies and I cannot speak for the open source project in any official capacity either.

It’s been a while since I’ve used HAProxy (I changed roles; loved the product). But are there drawbacks to running on all cores? I seem to remember sticky routing and ACLs not working properly, as each core had its own set. Has that changed?

You probably remember the nbproc [1] setting which indeed is multiple, unrelated processes. There's proper threading now (since 1.8).

[1] http://cbonte.github.io/haproxy-dconv/2.0/configuration.html...

> nginx … has solid HTTP/2 support (and other features) …

AFAIK nginx doesn't implement HTTP/2 prioritisation effectively, and this can result in responses being served in a non-optimal order.


Crazy. HAProxy is free, will proxy and load balance anything, not just HTTP, and it's absolutely trivial to configure and install; there's nothing remotely complicated about setting it up. Most importantly, for trivial offloading of certificates at the edge: even if using nginx for your app servers, you should front-end it with HAProxy. nginx just doesn't compare and isn't free. nginx is a web server, haproxy is a TCP/IP load balancer; they're really not comparable and are for different things.

Why do you say nginx is not free? If anything, nginx is more free than HAProxy (that has a viral license).

For me it's the superior HTTP / header rewriting capabilities. With nginx you are more or less restricted to just adding headers, at least the last time I looked into it.

Disclosure: I'm a community contributor to HAProxy and I help maintain the issue tracker on GitHub.
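For comparison, a hedged sketch of haproxy's header manipulation, which works in both directions without extra modules (header names and paths are illustrative):

```
frontend fe
    bind :443 ssl crt /etc/haproxy/site.pem
    http-request  set-header X-Forwarded-Proto https if { ssl_fc }
    http-request  del-header X-Debug-Token
    http-response set-header X-Frame-Options DENY
    http-response del-header Server
    default_backend be
```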

Looks like you have to add in a non core module:


From what I remember, HAProxy is better at having detailed monitoring metrics, while in the case of nginx a lot of those monitoring features are nginx plus only.

HAProxy allows easier proxying of WebSockets (without having to know all the WebSocket URLs). It also allows proxying TCP traffic without much hassle (in nginx you have to compile/enable the stream module, at least in the default Ubuntu package). Until recently, nginx did not allow proxying HTTP requests without buffering the request body, which may be undesired behaviour if you are uploading big files and want the backend to start processing the body as soon as possible.
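A sketch of haproxy's plain TCP proxying (addresses are hypothetical); note that WebSockets need no special handling in HTTP mode either, since upgraded connections are simply tunneled:

```
listen pg
    mode tcp
    bind :5432
    balance leastconn
    server pg1 10.0.0.31:5432 check
    server pg2 10.0.0.32:5432 check backup
```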

It's part of our infrastructure at work to do some fairly complex routing, which might be possible in nginx but is easier with haproxy.

Multi-tenant hosting for customers with lots of custom domains (with SSL) with different customers on different versions, and an app with lots of legacy. In older versions, different paths are handled by different servers. Some paths use source-ip sticky. One part of the app uses websockets. Some paths are handled via S3+cache layer (offloading traffic from app servers without changing the app).

There's also a bunch of special paths to access specific servers directly to get some health metrics (from an app written without thinking about running in an auto-scaling environment). One fun thing I built handles some old SOAP requests from a defunct service running on hundreds (maybe down to dozens now?) of external systems that will retry every request forever (exponential traffic growth) if they don't get a "success" response; using Haproxy and some request-capture regexes, it can return one of a dozen specially crafted hard-coded "success" responses. The date is wrong, but the service doesn't care, and now a few lines in Haproxy replace a dedicated "black hole" app server we used to run.

Haproxy handles all this in a single hop for all traffic (and Haproxy itself runs in an autoscaling group). The config is complex, but still understandable. All the variable stuff is generated by scripts at runtime, which also lets us use an admin UI to manage customer domains, versions, and automatically picks up available (deployed) application versions in the region.

Having used both, I'd say in general nginx is easier for simple things, and in many ways has more capabilities (like static hosting and authentication support). In fact we use nginx on the Haproxy servers for hosting static pages and being a caching proxy for S3. Haproxy makes simple hosting more complicated, but you get a lot of very fine-grained control over everything (e.g., it's pretty easy to do almost anything to the traffic, like changing request/response headers/paths at any point). For anything new, I'd use nginx only if I could get away with it (note: all the above may be possible in nginx now, but I'm not going to rewrite our infrastructure unless there's a very good reason).

I've found it easier to do some traffic shaping with HAProxy (e.g., queue up more than N connections per IP). The status page/dashboard is really nice. The English docs are better. This might have changed in the last three years.

One advantage HAProxy has is the built-in status page. To get something similar in Nginx, the last time I tried (a couple of years ago), the only option was a buggy third-party plugin.

The problem with HAProxy is largely configuration. I've written Ansible templates that generate the complicated and repetitive parts of that config for me.

Other than not logging to stdout (which 2.0 seems to fix), that's the only thing that bugs me about HAproxy.

HAProxy is probably the best proxy server I have ever had to deal with. Its performance is exceptional, it does not interfere with L7 data unless you tell it to, and it's extremely straightforward to configure by reading the manual.

I use HAProxy as a frontend proxy to distribute requests into my intranet, it handles about 3-50 requests per second on a very dinky little VM (1 CPU core, 512/1024 CPU time allocation, 512MB ram) with a lot of CPU still left over.

That's not usually the choice. nginx is first and foremost a web server and a proxy. It is not built primarily for rewriting traffic and simple runtime state changes, which is what you generally need an HTTP router for.

Normally you choose between something like Apache and nginx, where nginx is a bit easier to configure and write modules for, but the former has more functionality and third party support.

You choose between haproxy and something like Varnish, where the former is a bit more featureful on the routing part and the latter has more focus on the caching part.

It is not uncommon to use both.

The biggest difference a few years ago was HA Proxy could do layer 4 load balancing where nginx was layer 7.

These days I think they’ve grown a lot closer together on capabilities. nginx remains a true web server where HA Proxy is not.

I'm not sure what you mean. HAProxy has always been a layer 4/7 load balancer and has indeed never been a web server.

NGINX historically was a web server (an excellent one) which has evolved towards proxying and a bit of load balancing. But in terms of LB features it lacks a lot, just like you wouldn't try to cross-dress haproxy into a web server in any way. It's true that for many quite simple setups nginx is often enough and saves the admin from having to learn another tool. But when you start to handle tens of thousands of domains, need to perform DDoS protection, handle thousands of servers, perform inter-DC state synchronization, stream tens of Gbps of traffic, or perform advanced actions on health-check status change, it's not well suited anymore. It's still the best web server I know, and many hosting infrastructures combine haproxy+varnish+nginx together, taking the best of each and delivering unrivaled quality of service. The beauty here is that these 3 best components in their respective categories are all free to use, so there's no reason to have to choose between one or the other; just use all 3 and be happy!

Nginx is a network tool with web server capabilities. It can also do pretty much everything you want, as long as it's IPv4 or IPv6 over TCP, UDP, or HTTP, since it (and specifically the OpenResty distribution) is highly scriptable. For example, a recent use case I solved with it was using it as a DNS (port 53 and DoT) proxy which also provided DoH and filtering.

For more use cases look at Kong, which is almost completely built on openresty.

It is very usable to proxy other services than HTTP.

word. It’s been really fun to put in front of games with servers like Cube World.

Nginx is a web server that happens to have reverse proxy functionality (similar to Apache).

HAProxy is a load balancer with philosophy of "do one thing and do it well".

Perhaps things have changed with nginx, but when I tried it for load balancing it was very basic: it didn't even have a status page or health checks unless you purchased the commercial version, it only had the round-robin load-balancing method, etc.

If your goal is load balancing, haproxy is far superior; if your goal is to have a web server that hosts some static files and redirects dynamic requests to another app, then nginx might be better.

haproxy's config format is concise and approachable. I like the cbonte single-page HTML doc, where a ctrl+F to the elaborate explanations can go a long way toward clearing up (mis)understandings.

Without knowing the scale it's hard to answer this. But in general HAProxy is THE layer to sit in front of web servers. From DDoS-protection rules to a true set of metrics and a real dashboard (good luck with Datadog metrics for Nginx), its purpose IS to be a highly available load balancer/proxy.

Nginx, in comparison, is for me THE thing to use as a web server. When it comes to sitting a single layer away, reverse-proxying requests to apps running on an instance or serving static files with caching rules, Nginx is purpose-built for this.

Both apps have support for what each other do though. For the more pedantic it's easy to say "But x can do this too with this feature". Usability wise though, nginx and HAProxy are distinct. I went through the comparison very recently while testing setup of HAProxy as a load balancer in front of multiple webservers running Nginx which in turn sat in front of several apps running on each instance.

TL;DR - At a slightly larger scale (scale being number of instances + apps) HAProxy and Nginx are great to use together as opposed to one over the other.

A big difference is that haproxy did not use to support SSL without something external like stunnel; nginx basically did it all out of the box, and I haven't had a need for haproxy in quite some time now.

haproxy has supported SSL since version 1.5, released 2014, so it's been available for some time now

Nginx is more general purpose; e.g., it can serve and cache static files.

As of 1.9, haproxy can cache files as well. Well, 1.8 technically had caching, but more controls have been added around the cache in 1.9. The controls are not as fine-grained as in NGinx, but it's fine for when you just need a small object-caching layer.

Up until HAProxy 1.8, I used to have Haproxy -> NGinx caching layer -> Apache. I have since been able to remove all the NGinx servers. (For my use case, won't work for everyone)

Thoughts on HAProxy vs. Envoy, or as the data plane for a service mesh?

It definitely depends on your use case, so it's hard to tell what's better for you. HAProxy is solid and doesn't take a long time to get started.

At the same time, some of the HAProxy 2.0 features have already been available in Envoy and tested in production, at scale (if HAProxy provided those features, there wouldn't be a big need for Envoy). For example, Envoy is pretty extensible, has good performance and has good support for dynamic cert management (including service-to-service mutual TLS).

Envoy was built for that purpose and has more functionality around it, as well as better support for service discovery (developing the now open standard APIs), more protocol introspection and observability, full-duplex connections (no upstream/downstream split in what's possible), and easy interchange between protocols.

Envoy is also used by Istio and has a lot of infrastructure support for deploying in Kubernetes and such which HAProxy doesn't currently have.

PSA: if you are building your own HAProxy binaries, 2.0 replaces the confusing linux `TARGET`s (`linux2628` and the like) with a single target `linux-glibc`. That name may be even more confusing, as it's the target you need to build HAProxy even if you are using musl instead of glibc.
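For the record, a typical build under the new scheme looks like this (the feature flags are just an example set, not a recommendation):

```shell
make TARGET=linux-glibc USE_OPENSSL=1 USE_ZLIB=1 USE_PCRE=1
sudo make install
```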

If you're seeing good support for musl, I'd be interested in receiving a patch to add it as another combination. I prefer to keep the libc apart from the kernel (the mistake we made long ago was to mix them) so that we don't have issues anymore when building on other libcs. For example getaddrinfo tends to be bogus on uClibc and must not be enabled there. And threads do not work on dietlibc if I remember well.

My experience with HAProxy is limited to maintaining https://github.com/ricardbejarano/haproxy, a HAProxy Docker image, which has both glibc and musl variants.

I haven't used HAProxy in any environments other than testing, but as far as I can tell both variants behave equally. In fact, haproxy.cfg for both images is the same, they only differ in their build flags.

Oh yes, absolutely! For example, the last time I checked musl, you didn't need -lrt, -ldl, -lcrypt nor a few others which I forget. It just provides empty stubs for those, so you can use the same build options as you regularly use with glibc. However, for me threads were not supported (it was on a MIPS lacking some 64-bit atomic ops haproxy relies on). So I'd be tempted to suggest having fewer options by default with musl, since it's mostly aimed at embedded systems, and leaving it to users to enable more if they want.

Correction: this requires adding -latomic there (just tested). I should mention this in the INSTALL file.

This is a list of nearly every feature I've ever wanted from haproxy. Truly wonderful work!

Same here, except for a Consul integration, so we didn't have to rely on SRV-records, but I guess you can't have everything :)

With the Data Plane API, expect to see tighter integration with Consul. For now, there is this integration https://www.haproxy.com/blog/building-a-service-mesh-with-ha...

The conversation in this thread has made me wonder whether anyone uses Apache2 as their webserver anymore.

Edit: seems many still do! I thought it was slowly dying as PHP's popularity went down.

Yes. Any time I need to use CGIs, PHP, or anything where security can be controlled by an Apache module, I will always default to Apache, as the security-control story is better. Performance-wise, Apache 2.4 using the latest APR libraries is equal to NGinx.

There are also far fewer bugs and updates to the Apache core. I rarely have to recompile anything.

I have also had many frustrating interactions with the lead developer of NGinx. There are many assumptions made and many things hard coded in the Makefile, especially as it pertains to pcre, zlib, openssl and CFLAGS, LDFLAGS, etc. Also, I can't just point to existing pcre and zlib deployments for inclusion. NGinx wants the source and to recompile the extra libraries each time.

The global trend for Apache is not looking good, and I believe a large part of its popularity is simply a legacy of its dominant position a decade ago.

Over the last 10 years, it probably lost half of its share. The exact figures vary with the source: according to the link below, Apache's share of the million busiest web sites went from 66% in 2011 to 32% now.


What conversation?

Been using apache2 for like 20+ years now. It is doable to switch to something else, but would probably require effort with various details, etc. It works well for our moderate loads, so not really urgent to change it.

Hehe I still have the unbreakable 1.3 running on some home machines. It doesn't want to die so I'm not forcing it :-)

No one up till this point ever mentioned using HAProxy with Apache

The conversation has mostly been about NGinx vs HAProxy, at the balancer level, has it not?

Of course, for anything that is cgi/fastcgi. nginx doesn't support that.

There is also stuff running on mod_php/mod_python/mod_wsgi that is bound to Apache; however, these are deprecated and unstable technologies that should not be used in this decade.

nginx supports FastCGI.


I imagine it supports cgi calls too

Nginx does not support CGI, but you can, for example, use uWSGI as an application server behind nginx.
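A minimal sketch of that split (socket path and module name are made up): nginx speaks the uwsgi protocol to a server process that you start and supervise yourself:

```nginx
server {
    listen 80;
    location / {
        include uwsgi_params;            # pass request metadata in uwsgi format
        uwsgi_pass unix:/run/app.sock;   # nginx does not spawn this process
    }
}

# started separately, e.g.:
#   uwsgi --socket /run/app.sock --module myapp:application
```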

For reference, WSGI is Python-only; it's an interprocess communication method very similar to FastCGI, with the same purpose.

Seems like it can work with others too, though I don't know how well: https://uwsgi-docs.readthedocs.io/en/latest/LanguagesAndPlat...

Reading that documentation page will tell you that nginx cannot spawn fastcgi processes ;)

Proper Layer 7 retrying is huge. I’ve been waiting for this for a while.

Is the v1 config backwards compatible with this? I can't see it mentioned anywhere, so I assume you can just upgrade in place?

Apart from a few new warnings for long-deprecated options, it is compatible. HAProxy 2.0 is not a major version in the breaking sense; Willy apparently just dislikes two-digit numbers in the second place.

Exactly. I want directory listings to remain alphanumerically ordered, not like when you want to download Git and end up believing 2.9 is the latest one :-)

Yes, the same configuration should work on both, although some options are no longer required to be set, such as nbproc/nbthread. To be safe, pass it through the configuration checker (-c on the command line).
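That check is a one-liner; it parses and validates the file, prints any warnings, and exits non-zero on errors (the path is an example):

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg
```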

Now that HAProxy uses HTX internally to efficiently represent parsed HTTP messages, I wish they'd add that to their PROXY protocol. Back in the day, Apache/Tomcat used AJP to transmit parsed HTTP state to backend servers to avoid the re-parsing overhead.

Some months ago I decided to move every little thing running on some VPS to docker (so I could move those apps at will and have apps with incompatible dependencies running on the same VPS).

I looked into Haproxy, set up a bunch of rules, and fell into static-IP-management hell. Then I tried Traefik, mainly because of the HTTPS auto-renewal feature, but the ability to tag docker containers with DNS regexes (so Traefik knows how to reverse-proxy traffic) is a godsend.

Is there something like that in HAProxy 2.0 (HTTPS auto-renewal and container tagging)?

I would check out https://github.com/caprover/caprover. You can run multiple apps on 1 VPS and HTTPS renewal is automatic.

You can also try Caddy: https://caddyserver.com/

If I were to use it as a k8s ingress, how would I do OCSP stapling? nginx does that for you, but with haproxy you've always had to hack something together to add a .ocsp file (which has to exist at startup) and reload externally.

I also see no option for client certificate auth or TLS versions and cipher suites in the repo.

I guess it's still better to handle TLS outside of haproxy.
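For reference, the flat-file approach loads <crt>.ocsp next to the certificate at startup, and the runtime API can refresh the stapled response without a reload. A sketch (paths and socket are assumptions):

```shell
# haproxy picks up /etc/haproxy/site.pem.ocsp automatically at startup.
# Later, push a freshly fetched DER response through the runtime API:
echo "set ssl ocsp-response $(base64 -w0 /etc/haproxy/site.pem.ocsp)" \
  | socat stdio /var/run/haproxy.sock
```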

Strange that you see no option for client certs, because that has been supported from day one. In addition, we even support SNI-based client auth, even with wildcard certs. Same for TLS versions and cipher suites.

Further, just look at https://istlsfastyet.com/ and you'll see that haproxy, H2O and nghttpx are the only 3 implementations checking everything (and haproxy was the one inventing dynamic record sizing).

So it seems your opinion of haproxy's TLS support is not that widely shared!

I know haproxy itself supports that, and I have used those features with static configuration, but does the k8s ingress controller support them out of the box?

I don't know as I have no use for it. Just check the article, it presents some of the things done with the ingress controller, it should answer some of your questions I guess.

Yes, it does. We'll blog about those use cases during the summer.

As you explained, HAProxy does support OCSP stapling through flat file, but also support it through the runtime API.

v1 of the ingress controller does not update OCSP. That said, this is planned for a next release.

Stay tuned :)

Are there any programmable HTTP proxy servers? I write a fair bit of VM/container control software and often need to map URLs to specific entities on the network dynamically. I've never found a good programmable proxy with a routing-table API and have always had to hand-roll one.

You have Lua in HAProxy if you want to do complex things. If you just want a programmable routing table, HAProxy's maps do exactly this and can be updated on the fly (including from the traffic itself if needed).
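As a rough sketch of map-based routing (the map path and backend names are made up), the frontend looks up the Host header in a map file and falls back to a default backend:

    frontend fe_main
        bind :80
        # route by Host header via a map file; be_default is the fallback
        use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,be_default)]

    backend be_default
        mode http
        server s1 192.0.2.10:8080

New routes can then be added at runtime without a reload, e.g.:

    echo "add map /etc/haproxy/hosts.map app.example.com be_app" | \
        socat stdio /var/run/haproxy.sock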

Or you can write the controlling program in the language of your choice and run it externally as an SPOA agent. Have a look at spoa_server, which provides examples for Python and Lua.

Shameless plug: I wrote a REST wrapper around HAProxy in Go some years back. It works pretty nicely, and Willy was a great support too. https://github.com/magneticio/vamp-router

It works quite nicely and can set ACLs dynamically.

You can do Lua on HAProxy.

Traefik is a very good programmable proxy.

By programmable you mean configurable via a REST api?

Yes, pretty much. E.g. DigitalOcean's online SSH terminal. Programmable routing of websockets code to backend VM.

Maybe OpenResty? It's nginx with embedded Lua, with plenty of library support for external systems like Redis, memcached, MySQL, etc.

Such a great project! I was a Squid guy, then nginx, and now that nginx has stopped getting new features in favor of the commercial edition, I'm switching. Thank you, developers, for this amazing work!

To be fair to squid and nginx, they don't do the same things. Squid is mainly a forward proxy. Nginx is mainly a web server. There's no reason for not using them anymore for these use cases where they excel.

I use HAProxy as a forward proxy on my personal computers, mainly for the ability to control SSL options, sniff SSL traffic, and support non-SSL-enabled clients. I don't need any caching, so Squid seems inappropriate.


Does it have proper support for HTTP/2?

Last I checked, only Nginx really did it right.

As of last April, several implementations (including HAProxy) were more "right" than Nginx: https://twitter.com/tunetheweb/status/988196156697169920

I never saw this classification. Since 1.9 haproxy passes 100% of the h2spec tests.

Nice work!

HTTP/2 support was added in 1.8, at the beginning of last year.

AFAIK it's solid.

As opposed to nginx, which still doesn't support h2 to the backend?

AFAIK nginx does not "do it right": it can get HTTP/2 prioritisation quite wrong, and when it gets it right, it appears to be more by luck than by design.


h2o is probably the only server that has done HTTP/2 right for a long time; others are finally getting it right.

Can HAProxy serve static files like nginx?

Although HAProxy is not a web server, it does have Small Object Caching, so files can be cached on the proxy. https://www.haproxy.com/blog/whats-new-haproxy-1-8/#http-sma...

It can proxy and cache them.
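A minimal cache configuration sketch (HAProxy 1.8+; the cache name, sizes, and addresses are illustrative):

    # Shared-memory cache for small objects
    cache static_cache
        total-max-size 64    # total cache size in MB
        max-age 60           # seconds before a cached object expires

    frontend fe_main
        bind :80
        default_backend be_app

    backend be_app
        mode http
        http-request cache-use static_cache
        http-response cache-store static_cache
        server app1 192.0.2.10:8080

Only cacheable responses under the object-size limit are stored; everything else is proxied straight through.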

There is a hacky way to serve single static pages directly from HAProxy: create an HTTP file (headers included) and add it to a custom backend. This is really only useful if you have one or two pages to serve and don't want to run another web server. Some people use this for outage or maintenance pages.

    acl landing path_beg /demo
    use_backend landing if landing

    backend landing
        mode http
        fullconn        10000
        errorfile 503 /opt/ha/landing_page.http
        http-request    set-log-level silent

    # cat landing_page.http

    HTTP/1.1 200 Ok
    Cache-Control: no-cache, no-store
    Connection: close
    Content-Type: text/plain

    This is a static text file served directly from HAProxy 1.9

I will catch grief for showing this.

Then use ACLs to send specific requests to that page. Obviously, change or add headers as required for the content and use case, and use your imagination for other situations where you might want to serve such a page automatically.

Here it is in use: [1]

[1] - https://tinyvpn.org/demo

Sounds like HAProxy 2.0 is Envoy. I would personally (and do) just use Envoy, as everyone else is already using it and the bugs they've found have been fixed.

This is a strange assertion. This is not Envoy; it's HAProxy as you've always known it, plus all the features people have been asking for recently, without removing what makes it fast, robust, compact, and flexible. From what I've seen, you can't, for example, use dynamic weights in Envoy, protect against DDoS, perform queuing to protect your servers, use true leastconn or weighted hash/roundrobin, stick on arbitrary information or synchronize it between members of the cluster, create complex routing rules, set the source address from headers, perform transparent proxying, etc.

These are two different projects. One was initially designed for the hostile edge and excels there. The other was initially designed to be used as a sidecar deep inside your infrastructure and excels there. There is obviously quite some overlap between the two, sometimes with different terminology (like "circuit breaking" in Envoy, which HAProxy calls "timeouts" and "queue limits"), and user demands make each of them evolve a bit in the areas where they are weaker (i.e. where the other excels). But they are still quite different beasts.

