Hacker News
HAProxy is not affected by the HTTP/2 Rapid Reset Attack (haproxy.com)
166 points by nickramirez on Oct 10, 2023 | 33 comments



I'm quite impressed with HAProxy.

It takes a little effort to fully understand the configuration file format (hint: you've got to read the documentation, not just look at examples, to fully grok it), but it's so worth it, IMO.

It's also a nice treat to have the founder and technical leader of the HAProxy company, Willy Tarreau, still so active in the community so many years later (the initial release was in 2001). I regularly see him answering e.g. newbie questions.

(HAProxy docs: https://docs.haproxy.org/ - pick 2.8/LTS)


It's become my swiss army knife of TCP. I nearly always terminate TCP first with haproxy "out of process" of whatever, then have it proxy over a unix socket to "whatever". This allows an immense amount of flexibility, from being able to "wiretap" what's going on in the real world, to default error pages, alarms, monitoring, handling CORS... tons of uses.
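
For instance, a minimal haproxy.cfg sketch of that pattern (the names, paths, and the CORS header are made up for illustration):

    frontend public
        mode http
        bind :443 ssl crt /etc/haproxy/certs/site.pem
        # room for the "wiretap" logging, custom error pages, CORS, etc.
        http-response set-header Access-Control-Allow-Origin "*"
        default_backend app
    backend app
        mode http
        # the application itself only listens on a local unix socket
        server app1 unix@/run/myapp/app.sock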


Please write a blog post about this and submit it to HN!


Agreed. HAProxy is an absolute wonder compared to similar systems. It all just feels so much cleaner, more thought out, and built from the ground up for many different use cases. It very much has a feel that reminds me a lot of the spirit of SQLite.


Yeah. Stringency and rigor are words that come to mind.


What about nginx? I'm not too familiar but I was under the impression that it was the safe choice


Its primary focus is/was being a web server - a faster Apache. This shows.

Also (after the acquisition by F5 in 2019?) more features are held back from the open-source version than with HAProxy.


How does it compare with Caddy?


Caddy is a bit quick and dirty, rapidly-developing, with neat plugins but hard to configure for more complex scenarios and too light on the docs (IMO).

HAProxy is robust, comprehensive, mature, and bulletproof. It's basically boring because it works so well.

If you have to choose only one to learn, choose HAProxy.


I wanted to try it out just now but hit a roadblock immediately - it cannot automatically obtain and maintain TLS certificates. You have to use an external client (e.g. acme.sh), set up a cron to check/renew them, and poke HAProxy to reload them if necessary. I'm way past doing this in 2023.

https://www.haproxy.com/blog/haproxy-and-let-s-encrypt

https://github.com/haproxy/haproxy/issues/1864


If getting Let's Encrypt to work with HAProxy is your only struggle, you'll soon overcome it and be loving HAProxy. And there are multiple ways to set up Let's Encrypt, if you don't want to use acme.sh. For example, you could use certbot. There are blog posts that cover that pretty well.
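
For anyone curious, the certbot route is roughly this (the domain, paths, and hook script name are placeholders, not a canonical recipe):

    # one-time issuance (standalone mode needs port 80 free; the webroot
    # or DNS plugins work too)
    certbot certonly --standalone -d example.com
    # HAProxy wants the full chain and the key concatenated into one PEM
    cat /etc/letsencrypt/live/example.com/fullchain.pem \
        /etc/letsencrypt/live/example.com/privkey.pem \
        > /etc/haproxy/certs/example.com.pem
    systemctl reload haproxy
    # for renewals, put the cat + reload steps in a script and let
    # certbot call it after each successful renewal
    certbot renew --deploy-hook /usr/local/bin/refresh-haproxy-cert.sh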


you may wish to use certbot instead:

https://github.com/acmesh-official/acme.sh/issues/4659


That is some very well written documentation IMHO.


Can’t seem to read it on mobile.


You probably don't want to. Usually it needs at least a browser window, an editor and maybe an open TTY.

haproxy.cfg can be ... tricky.


I had the privilege of reporting a few bugs in HAProxy in the last few months. Willy's a real treasure; he's friendly and knowledgeable, and he clearly cares a ton about HAProxy even after 22 years of development.


More details [0] about the mitigation are discussed on the mailing list:

> So at first glance we indeed addressed this case in 2018 (1.9-dev) with this commit:

> f210191dc ("BUG/MEDIUM: h2: don't accept new streams if conn_streams are still in excess")

> It was incomplete by then and later refined, but the idea is there. But I'll try to stress that area again to see.

[0] https://www.mail-archive.com/haproxy@formilux.org/msg44134.h...
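
(Related knob, if you want to tighten things yourself: the per-connection stream limit is configurable in the global section. Minimal sketch below; I believe the compiled-in default is 100.)

    global
        # cap how many concurrent streams a client may keep open on a
        # single HTTP/2 connection
        tune.h2.max-concurrent-streams 100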


Not at all surprised by this; HAProxy is some of the best-built software I've ever seen. But glad to know they checked.


    After rigorous testing, we have been able to confirm that our implementation of the HTTP/2 protocol can handle the Rapid Reset Attack without increasing the resource usage or compromising the parallelism of the protocol.
But doesn’t this mean the servers behind the reverse proxy would still suffer from increased/wasted resources responding to the rapid reset requests?


Not necessarily. Looking at Cloudflare's summary of the attack[0], part of it seems to rely on sending a request and then cancelling it in the very same packet.

A trivial implementation might walk through the packet front-to-back, firing off requests and cancellations immediately as it encounters them. That would indeed still result in a lot of load on the servers behind the proxy.

However, a reasonable alternative would be to only collect a set of actions to execute while walking through the packet, firing them off all at once when you finish. For example, a "launch request" could create a new entry in the backend requests list with a state of "NEW". The "cancel request" part immediately afterwards could then look in the backend request list and set the state of the corresponding request to "CANCEL".

Now when the backend request list is being processed next, it'll only see a request marked "CANCEL" without a corresponding socket to a backend, shrug, and just delete the entry because there is nothing to do.

[0]: https://blog.cloudflare.com/technical-breakdown-http2-rapid-...
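
A toy sketch of that batching idea in Python (the frame and state names are invented for illustration, not how HAProxy actually structures this):

    # Each frame is (stream_id, kind), with kind "HEADERS" (new request)
    # or "RST_STREAM" (cancellation), mirroring the h2 frame types.
    def process_packet(frames, dispatch):
        pending = {}  # stream_id -> "NEW" or "CANCEL"
        for stream_id, kind in frames:
            if kind == "HEADERS":
                pending[stream_id] = "NEW"
            elif kind == "RST_STREAM" and stream_id in pending:
                pending[stream_id] = "CANCEL"
        # Only now touch the backend: entries cancelled within the same
        # packet never cost a backend request, they are simply dropped.
        for stream_id, state in pending.items():
            if state == "NEW":
                dispatch(stream_id)
    # process_packet([(1, "HEADERS"), (1, "RST_STREAM"), (3, "HEADERS")],
    #                dispatch=print)  # only stream 3 reaches the backend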


I thought you were going to suggest processing it like one of those trick exams where you're told to read all of the questions before answering any of them, and the last question turns out to be something obvious like "stand up, sit down, then turn in the test without writing anything on it." So in this case: read all of the instructions in the packet, and if the last one is CANCEL, do nothing.


It's basically like that. The "fastest" servers will begin responding before processing the entire packet, but it's likely that's never really needed (and if it is, you'll know and can turn it on). The smarter thing to do is process the whole packet at once.


If you're doing TCP load balancing, sure, but HTTP is terminated at the proxy, so the backend wouldn't be vulnerable. This is why you put $proxy or $webserver in front of your application webserver.
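
Roughly the difference between the two setups, sketched in haproxy.cfg terms (addresses and names are made up):

    # layer 7: HTTP/2 is parsed and terminated by the proxy, so the
    # backend only sees requests the proxy chooses to forward
    frontend web
        mode http
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
        default_backend app
    backend app
        mode http
        server app1 127.0.0.1:8080
    # layer 4: bytes pass straight through, so the backend has to cope
    # with rapid resets itself
    frontend passthrough
        mode tcp
        bind :8443
        default_backend app_tls
    backend app_tls
        mode tcp
        server app1 127.0.0.1:8443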


Wondering if anyone knows the exposure when using an nginx proxy?



This is news because of an exploit found against nginx, I believe.

That's why HAProxy did testing to see if they were vulnerable.


This is news because of a massive DDoS against AWS/Cloudflare/Google, and isn't related to a particular flaw in nginx:

https://cloud.google.com/blog/products/identity-security/how...


[flagged]


If you want lots of details, this specific post on the mailing list is the one to read: https://www.mail-archive.com/haproxy@formilux.org/msg44136.h...


HAProxy mitigated this attack back in 2018, treating it as an implementation bug:

https://news.ycombinator.com/item?id=37833365


That would have been wonderful context for them to include.


Whoa. So they're 5 years ahead of everyone else?


> After rigorous testing, we have been able to confirm that our implementation of the HTTP/2 protocol can handle the Rapid Reset Attack without increasing the resource usage or compromising the parallelism of the protocol.

You are free to conduct your own tests. AFAIK the software in question is free (both libre and free of charge, even for commercial use).


> make anyone else think they just failed to properly test/verify their claim?

Nope, not me.



