It takes a little effort to fully understand the configuration file format (hint: you've got to read the documentation, not just look at examples, to really grok it), but it's so worth it, IMO.
It's also a nice treat to have the founder and technical leader of the HAProxy company (Willy Tarreau) still be so active in the community, so many years later (the initial release was in 2001). I regularly see him answering e.g. newbie questions.
It's become my Swiss Army knife of TCP. I nearly always terminate TCP first with HAProxy, "out of process" of whatever, then have it proxy over a unix socket to "whatever". This allows an immense amount of flexibility, from being able to "wiretap" what's going on in the real world, to default error pages, alarms, monitoring, handling CORS... tons of uses.
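For anyone who hasn't seen the pattern, here's a minimal sketch of what it can look like in haproxy.cfg (the names, paths and the use of HTTP mode are my assumptions, not the parent's actual setup):

    frontend public
        mode http
        bind :443 ssl crt /etc/haproxy/certs/example.com.pem
        # handy place for custom error pages, CORS headers, logging/"wiretapping", etc.
        default_backend app

    backend app
        mode http
        # the application never listens on the network, only on a local unix socket
        server local_app unix@/run/myapp.sock

The application itself then only binds the unix socket, and everything that has to touch the outside world goes through HAProxy.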
Agreed. HAProxy is an absolute wonder compared to similar systems. It all just feels so much cleaner, more thought out, and built from the ground up for many different use cases. It has a feel that reminds me a lot of the spirit of SQLite.
Caddy is a bit quick and dirty, rapidly developing, with neat plugins but hard to configure for more complex scenarios and too light on the docs (IMO).
HAProxy is robust, comprehensive, mature, and bulletproof. It's basically boring because it works so well.
If you have to choose only one to learn, choose HAProxy.
I wanted to try it out just now but hit a roadblock immediately - it cannot automatically obtain and maintain TLS certificates. You have to use an external client (e.g. acme.sh), set up a cron to check/renew them, and poke HAProxy to reload them if necessary. I'm way past doing this in 2023.
If getting Let's Encrypt to work with HAProxy is your only struggle, you'll soon overcome it and be loving HAProxy. And there are multiple ways to set up Let's Encrypt, if you don't want to use acme.sh. For example, you could use certbot. There are blog posts that cover that pretty well.
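For example (a hedged sketch - the domain, webroot and paths are placeholders, and whatever answers the HTTP-01 challenge behind HAProxy is left out): HAProxy wants the certificate and key concatenated into a single PEM, and certbot can rebuild that file and reload HAProxy from a deploy hook, which certbot's own timer/cron then re-runs on every renewal:

    # placeholders: example.com, the webroot path, and the HAProxy cert path
    certbot certonly --webroot -w /var/www/letsencrypt -d example.com \
      --deploy-hook 'cat "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" > /etc/haproxy/certs/example.com.pem && systemctl reload haproxy'

HAProxy reloads gracefully (the old process finishes its existing connections), so the hook doesn't cause downtime.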
I had the privilege of reporting a few bugs in HAProxy in the last few months. Willy's a real treasure; he's friendly and knowledgeable, and he clearly cares a ton about HAProxy even after 22 years of development.
After rigorous testing, we have been able to confirm that our implementation of the HTTP/2 protocol can handle the Rapid Reset Attack without increasing the resource usage or compromising the parallelism of the protocol.
But doesn’t this mean the servers behind the reverse proxy would still suffer from increased/wasted resources responding to the rapid reset requests?
Not by definition. Looking at Cloudflare's summary of the attack[0], part of it seems to rely on sending a request and then cancelling it in the very same packet.
A trivial implementation might walk through the packet front-to-back, firing off requests and cancellations immediately as it encounters them. That would indeed still result in a lot of load on the servers behind the proxy.
However, a reasonable alternative would be to only collect a set of actions to execute while walking through the packet, firing them off all at once when you finish. For example, a "launch request" could create a new entry in the backend requests list with a state of "NEW". The "cancel request" part immediately afterwards could then look in the backend request list and set the state of the corresponding request to "CANCEL".
Now when the backend request list is being processed next, it'll only see a request marked "CANCEL" without a corresponding socket to a backend, shrug, and just delete the entry because there is nothing to do.
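In toy Python (my own illustration of the idea above, not HAProxy's actual code), the "collect first, act later" approach could look roughly like this:

    # Toy model: frames are (type, stream_id) tuples, e.g. ("HEADERS", 1) opens
    # a request and ("RST_STREAM", 1) cancels it. All names here are made up.

    def start_backend_request(stream_id):
        # stand-in for "open a connection/stream to the real backend"
        print(f"forwarding stream {stream_id} to backend")

    def process_packet(frames):
        pending = {}  # stream_id -> "NEW" or "CANCEL"

        # Pass 1: just record what the client asked for, touch no backend yet.
        for frame_type, stream_id in frames:
            if frame_type == "HEADERS":
                pending[stream_id] = "NEW"
            elif frame_type == "RST_STREAM" and stream_id in pending:
                pending[stream_id] = "CANCEL"

        # Pass 2: only requests that survived the whole packet cost us anything.
        for stream_id, state in pending.items():
            if state == "CANCEL":
                continue  # never started, so there is nothing to tear down
            start_backend_request(stream_id)

    # A rapid-reset packet: stream 1 is opened and immediately reset,
    # stream 3 is a legitimate request. Only stream 3 reaches the backend.
    process_packet([("HEADERS", 1), ("RST_STREAM", 1), ("HEADERS", 3)])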
I thought you were going to suggest processing it like one of those trick exams where you're told to read all of the questions before answering any of them, and the last question turns out to be "stand up, sit down, then turn in the test without writing anything on it." So in this case: read all of the instructions in the packet, and if the last one is CANCEL, do nothing.
It's basically like that - the "fastest" servers would begin responding before processing the entire packet, but that's likely never really needed (and if it is, you'll know and can turn it on). The smarter thing to do is to process the whole packet at once.
If you're doing TCP load balancing, sure, but HTTP is terminated at the proxy and wouldn't be vulnerable. This is why you put $proxy or $webserver in front of your application webserver.
> After rigorous testing, we have been able to confirm that our implementation of the HTTP/2 protocol can handle the Rapid Reset Attack without increasing the resource usage or compromising the parallelism of the protocol.
You are free to conduct your own tests. AFAIK the software in question is free (both libre and for commercial use).
(HAProxy docs: https://docs.haproxy.org/ - pick 2.8/LTS)