Hacker News
HTTP/2 Denial of Service Advisory (github.com)
190 points by rdli 7 days ago | 39 comments

Entirely unsurprising. With all this complexity, HTTP/2 is on par with a full TCP/IP stack. All major operating systems have had decades to optimize and bulletproof those, and still to this day we find issues with them every now and then. What did people expect would happen when we started reinventing the wheel yet again, on top of what we already have?

And this is just the tip of the iceberg. Consider this a warm-up exercise.

Version 2 syndrome on full display aka the "second-system effect" [1].

> The second-system effect (also known as second-system syndrome) is the tendency of small, elegant, and successful systems to be succeeded by over-engineered, bloated systems, due to inflated expectations and overconfidence.

They "simplified" it by adding layers of binary and connection complexity... failing to address many common attack vectors and introducing entirely new ones that should have been expected. Debugging those in HTTP/2 is now harder, buried in a big binary blob.

I miss text-based/MIME HTTP/1.1. Simplicity should always be the goal, not more complexity that solves almost nothing. But HTTP/2 did give some big orgs control of the layer/protocol/standard, which was most of the driving force.

Much of this could have been fixed at a lower level, in SCTP or something similar at the transport layer. Instead we get another abstraction on top of it all, with more holes, and one that is tougher to debug.

Solution: let's do it again in HTTP/3... a new standard to fix all the old standards, more complexity that leads to misunderstandings in implementations, which leads to more attack vectors. Obligatory xkcd [2]

[1] https://en.wikipedia.org/wiki/Second-system_effect

[2] https://xkcd.com/927/

Versions prior to HTTP/2 give the appearance of being simple, but in actuality there are many, many edge cases that are also the source of vulnerabilities.

HTTP/2's binary syntax addresses some of those edge cases mainly through better-formed header fields and a common chunking of body data.

SCTP is undeployable on the Internet. HTTP/2 is deployed on the Internet. It brings multiplexing, and of course that brings some additional complexity and constraints. But these have proven more deployable than HTTP/1.x-style solutions such as pipelining.

HTTP/3 is actually a simpler application mapping than HTTP/2, because the complexity gets pulled into the transport. But it's a zero-sum game: just moving the concerns around doesn't mean they are automatically fixed.

I feel like the technical merits are overshadowed by the elephant in the room: modern sites are slow because, to render a 10 KiB article, they're loaded with 5 MiB of JavaScript libraries, 85 separate resource requests, offsite ad network and affiliate link requests, streaming video popup players, and on and on. In light of that, is it really that important to, e.g., compress HTTP headers?

We should definitely still improve upon HTTP, no question, but I hope that we don't end up having HTTP/1.x force-deprecated through tactics like search page derankings. HTTP/1.x is a workhorse. Even if you really shouldn't, there's something to be said about being able to create a client or server for it in ~100 lines of code that'll work in 99% of non-edge cases. I'd hate for us to lose that as an option for simpler applications.
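To make that concrete, here's a toy Python sketch (illustrative names, nowhere near spec-complete) of the kind of minimal HTTP/1.x handling being described:

```python
# Toy sketch of the "~100 lines" claim: a minimal HTTP/1.1 request
# parser and response builder. It deliberately ignores chunked bodies,
# obsolete header folding, and the other edge cases that make real
# implementations hard -- which is exactly the trade-off in question.

def parse_request(raw: bytes):
    """Return (method, path, headers) from a simple HTTP/1.x request."""
    head, _, _body = raw.partition(b"\r\n\r\n")
    lines = head.decode("latin-1").split("\r\n")
    method, path, _version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, headers

def build_response(status: int, reason: str, body: bytes) -> bytes:
    """Serialize a minimal HTTP/1.1 response with an explicit length."""
    head = (f"HTTP/1.1 {status} {reason}\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"Connection: close\r\n\r\n")
    return head.encode("latin-1") + body
```

Wrap those two functions in a socket accept loop and you have the 99%-of-non-edge-cases server the comment describes; the remaining 1% is where the pain lives.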

Agreed, with HTTP/2 we're treating the symptoms, not the problem. Unfortunately there's no nice way to fix this: users don't blame sites, it's "my internet is slow".

I've never been one to advocate NoScript but it's getting more and more appealing by the day.

HTTP/3 has a better goal in that it aims to push these elements back down to the transport layer using QUIC. In HTTP/2, transport abstractions bubbled up into the protocol layer, making for a mess. HTTP/2 is a leaky abstraction that blends too much of the transport and protocol layers and is unnecessarily complex.

Though I will add that much of the protocol and standards work of the last 5-10 years has largely been companies aiming to take control of the standards, implementing standards that benefit them the most rather than sensibly simplifying, and adding complexity so they can own more of it. That is definitely a factor, and HTTP/2 was probably rushed for this reason.

SCTP would be doable, as it is a transport-layer protocol, despite the difficulty of rolling it out. But Google went after QUIC, which is also a transport layer and is similar to SCTP (essentially a transport-level reliable UDP with ordering/verification), because they also call the shots there. It makes sense for Google to push that, but does it make sense for everyone to just allow it? People have to understand that standards are now pushed at the company level rather than solely from engineering. The needs of HTTP/2 went beyond a better system; it ventured into controlling the standards and the market.

Hopefully HTTP/3 is better and less complex, but judging by who wants it in and how much companies want to control these layers, I have my doubts. We now have three HTTP protocol versions to support; more and more this will box out individual engineers, and alternative browsers and web servers, from competing. I don't know that the pros outweigh the cons in some of these scenarios.

Who really gained from HTTP/2 and HTTP/3? UDP was always available, as was reliable UDP. HTTP/2 and HTTP/3 feel more like a standards grab: minimal benefits for most, major benefits for the pushers. I am not against progress in any way; I am against power grabs, major overhauls that provide little benefit, 'second-system syndrome', and complexity rather than simplicity to those ends.

Did we really benefit from obfuscating the protocol layers of HTTP (the HyperTEXT Transfer Protocol) into binary? What did we gain? We lost plenty: easier debugging, simplicity, control of the standard, competition, etc. Hopefully we gained from it, but I am not seeing it. We already had encryption and compression to stop ad networks and data collection; the binary gains are minimal for lots of complexity. Simplification was destroyed, for what? Resource inlining breaks caching. Multiplexing is nice, but it came at great cost and didn't really improve the end result.

HTTP/2 reminds me of the over-complexity in frameworks, SOAP vs REST, binary vs text, binary JSON, and many other edge cases that now everyone has to deal with. As engineers we must take complexity and simplify it; that is the job, and I don't see a lot of that in recent years of standards and development. Minimalism and simplicity should be the goal. Complexity should be like authority: questioned harshly and allowed only when there is no other way.

Making another version and more complexity is easy, making something simple is extremely difficult.

Multiplexing is beneficial; otherwise we'd never have seen HTTP user agents opening multiple TCP connections.

The gain of moving to binary was the ability to multiplex multiple requests in a single TCP connection. It also fixes a whole class of errors around request and response handling; see the recent request smuggling coverage from Black Hat [1].
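A small illustrative Python sketch (a hypothetical request, not a working exploit) of the HTTP/1.1 framing ambiguity that smuggling exploits and that length-prefixed binary frames remove:

```python
# Illustrative sketch of the HTTP/1.1 ambiguity behind request smuggling:
# a request carrying both Content-Length and Transfer-Encoding is framed
# differently by parsers that prioritize different headers. HTTP/2's
# explicit length-prefixed frames have no such ambiguity.

raw = (b"POST / HTTP/1.1\r\n"
       b"Content-Length: 6\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n"
       b"0\r\n\r\n"
       b"GET /admin HTTP/1.1\r\n\r\n")

head, body = raw.split(b"\r\n\r\n", 1)

# A front-end trusting Content-Length takes exactly 6 body bytes...
cl_body = body[:6]                    # b"0\r\n\r\nG"

# ...while a back-end trusting Transfer-Encoding sees a terminating
# zero-length chunk, an empty body, and a brand-new request after it.
te_body = b""
smuggled = body[len(b"0\r\n\r\n"):]   # b"GET /admin HTTP/1.1\r\n\r\n"

assert cl_body != te_body             # the two endpoints now disagree
print(smuggled.split(b" ")[1])        # the "hidden" path: b'/admin'
```

The two parsers now disagree on where one request ends and the next begins, which is the desync the Black Hat research builds on.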

HTTP/2 isn't perfect, but I don't buy that HTTP/1.1 is the ideal simple protocol. There are a ton of issues with its practical usage and implementation.

We live in complex times and the threat models are constantly advancing. Addressing those requires protocols that by their nature end up more complex. The "simple" protocols of the past weren't designed under the same threat models.

[1] https://portswigger.net/blog/http-desync-attacks-request-smu...

No doubt multiplexing is good; it's better in the transport layer, though, not the protocol.

Protocols should be programmable. Currently HTTP/2 requires libraries, and being binary by nature there are more chances for vulnerable libraries; that is a fact. Because they are more complex, there is more room for errors and holes.

The smarter move would have been to put this in the transport, or in a combination of a protocol surface and a protocol transport layer if putting it all in the transport layer was undoable.

Iterations are better than the "second-system effect" in most cases. Engineers are making too many breaking changes and new standards that benefit companies over the whole of engineering and over internet freedom. Companies largely wanted to control these protocols and layers, and they have done that; you have to see that was a big part of this.

HTTP/2 benefits cloud providers and Google (especially since they drove this with SPDY, then HTTP/2, then QUIC, now HTTP/3) more than it benefits most engineering, and it was done that way on purpose. The average company was not helped by adding this complexity for little gain. The layers underneath could have been smarter and simpler; making the top layers easy is difficult, but that is the job.

In a way HTTP was hijacked into a binary protocol; it should have just been a Binary Hypertext Transfer Protocol (BHTTP) or something similar that HTTP rides on top of. Too much transport bubbled up from the transport layer into the protocol layer with HTTP/2, leaving a bit of a mess: a leaky abstraction, a big ball of binary.

> We live in complex times and the threat models are constantly advancing. Addressing those requires protocols that by their nature end up more complex. The "simple" protocols of the past weren't designed under the same threat models.

The topmost layers can still be simple even if you are evolving. Surface layers and presentation layers can be simplified, and making them more complex does not lower their threat models or vulnerabilities. As we see with the linked advisory, vulnerabilities will always exist, and when a system is more complex they happen more often.

I have had to implement MIME + HTTP protocols and RFCs for other standards like EDIINT AS2. Making a standalone HTTP/HTTPS/TLS-capable web/app server product is now many times more complex, and it will be more so when HTTP/3 comes out. Not fully yet, but as time goes on consolidation happens, and competition melts away when things get more complex for minimal gains. There are lots of embedded and other systems that still use HTTP servers, and those will now be more complex too, again for minimal gains.

Software should evolve to be simpler, and when complexity is needed, it needs to be worth it to reach another level of simplicity. Making things simple is the job of engineers and standards and product people. Proprietary binary blobs are where the internet is headed when you start down this path. I am looking forward to another Great Simplification like the one that happened when the early internet standards were set out, because that spread technology and knowledge. Now there is a move away from that, into complexity for power and control and little benefit. We are just in that part of the wheel, cycle, or wave.

Indeed. Looking at the descriptions of the attacks, all of these are simple. They probably would have occurred to any halfway competent attacker looking for ways to DoS your server within the first several hours of playing around with it.

Consider that one of the attacks is described as a "ping flood". Remember when we first dealt with that? Decades ago. And the "data dribble" looks like a reheated version of the HTTP "Slowloris" attack.
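To illustrate how cheap the ping flood is for the attacker, here's a Python sketch of an HTTP/2 PING frame as defined in RFC 7540 section 6.7 (the helper name is made up):

```python
import struct

# Sketch of an HTTP/2 PING frame per RFC 7540 section 6.7: a 9-byte
# frame header (24-bit length, type 0x6, flags, 31-bit stream id 0)
# followed by 8 opaque payload bytes. Each 17-byte frame obliges the
# peer to queue a PING ACK, which is what the "Ping Flood" attack
# (CVE-2019-9512) abuses.

FRAME_TYPE_PING = 0x6

def ping_frame(opaque: bytes = b"\x00" * 8, ack: bool = False) -> bytes:
    assert len(opaque) == 8
    length = len(opaque)
    flags = 0x1 if ack else 0x0
    header = struct.pack("!BHBBI",
                         length >> 16, length & 0xFFFF,  # 24-bit length
                         FRAME_TYPE_PING, flags,
                         0)                              # stream id 0
    return header + opaque
```

Seventeen bytes on the wire per frame, versus an allocation and a queued response on the server: the asymmetry is the whole attack.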

It's extremely regrettable that the creators of the vulnerable software didn't take a look at any of the plethora of existing attacks and imagine how they might be adapted to attack their implementations.

Also, issues like CVE-2019-9517 ("Internal Data Buffering") provide strong motivation for integrating those layers, as HTTP/3 is doing.

Yeap. Emperor's new clothes / Not Invented Here / reinventing the wheel / new is "better" than old - churn / kitchen-sink design-by-committee (cough OpenSSL cough). This happens more and more. Instead of fixing existing infrastructure that has a well-established/proven history, throwing everything away and starting over like what came before never existed. And let's over-engineer it and add every possible feature that we'll never use! It's so new and broken, isn't it awesome!?!

> Instead of fixing existing infrastructure that has a well-established/proven history, throwing everything away and starting over like what came before never existed.

Isn't HTTP/2 just that, an attempt to fix existing infrastructure? I mean, HTTP/2 was a revision of HTTP/1.x that aimed to fix problems such as latency issues that basic techniques like pipelining couldn't address, by multiplexing requests.

Of course a new protocol might have new bugs, but what's the alternative? The web needed to evolve, and HTTP/2 offers a lot of new benefits.

Maybe if the TCP standards could actually be updated in time frames other than decades, then that's how it would have gone. And if we did get a TCP2, you could say exactly the same thing about it being new and more open to bugs.

The other way to look at it is that the only people who deeply need to care about things like this are cloud providers.

Basically anyone who cares about DoS resistance is using a CDN and/or cloud load balancers, or has enough scale to build out their own CDN.

Otherwise you'll get soaked at layer 3 anyway; L7 DoS resistance is "nice to have" but not enough.

I am very interested in my little VPS not being reduced to a smoking heap just because some script kiddie wanted some cheap lulz. Of course, it's not hard to DoS a tiny VPS, but I think it very important not to court unnecessary trouble.

We can happily look forward to HTTP/3 then.

It actually IS a full stack...

I'm curious about how practical UDP amplification/reflection attacks will be in the wild with QUIC. A cloudflare blog mentions it in passing ( https://blog.cloudflare.com/head-start-with-quic/ ). I haven't dug into the protocol, but since what replaces the TCP SYN/ACK is now also setting up encryption, a lot more data is flying back and forth.
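For rough numbers: the QUIC drafts (and the eventual RFC 9000) require client Initial packets to be padded to at least 1200 bytes, and before the client's address is validated the server may send at most 3x the bytes it has received, which bounds reflection amplification:

```python
# Back-of-the-envelope check, based on the QUIC drafts (later RFC 9000):
# mandatory padding of the client's first flight plus the pre-validation
# anti-amplification limit cap the reflection factor at 3, far below
# classic UDP amplifiers such as open DNS resolvers.

MIN_CLIENT_INITIAL = 1200   # bytes; mandatory padding of client Initials
AMPLIFICATION_LIMIT = 3     # pre-validation send budget multiplier

def max_unvalidated_reply(bytes_received: int) -> int:
    """Largest reply a conforming server may send to an unvalidated address."""
    return AMPLIFICATION_LIMIT * bytes_received

print(max_unvalidated_reply(MIN_CLIENT_INITIAL))  # 3600
```

So a spoofed 1200-byte Initial can draw at most 3600 bytes toward the victim before the handshake stalls on address validation.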

Putting HTTP/3 on a userspace QUIC stack solves a number of these problems.

HTTP/2 already is userspace.

This was the response from Willy for haproxy:

    Yes, just discussed them with Piotr. Not really concerned in practice,
    maybe only the 1st one (1-byte window increments) might have a measurable
    CPU impact, but the rest is irrelevant to haproxy but could possibly harm
    some implementations depending on how they are implemented, of course.
I use haproxy to provide http/2 and ALPN for my site, so if there is a PoC I will test it.
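For context on the 1-byte window increments Willy mentions, here's a Python sketch (hypothetical helper) of the WINDOW_UPDATE frame from RFC 7540 section 6.9; each 13-byte frame can force the server's write path to wake up and emit a single byte:

```python
import struct

# Sketch of the "1-byte window increment" pattern: a WINDOW_UPDATE frame
# (RFC 7540 section 6.9) is a 9-byte header plus a 4-byte increment.
# Repeatedly sending increment=1 costs the attacker 13 bytes per frame
# while the server pays for one write per byte of response data.

FRAME_TYPE_WINDOW_UPDATE = 0x8

def window_update(stream_id: int, increment: int) -> bytes:
    header = struct.pack("!BHBBI",
                         0, 4,                       # 24-bit length == 4
                         FRAME_TYPE_WINDOW_UPDATE,
                         0,                          # no flags defined
                         stream_id & 0x7FFFFFFF)
    return header + struct.pack("!I", increment & 0x7FFFFFFF)
```

The CPU impact Willy is weighing is exactly this asymmetry between a cheap frame to parse and an expensive per-byte write on the sending side.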

At least for the PING flood, I think you can just send HTTP/2 PINGs over and over (that's what the Go test case does) and confirm it uses more and more resources as it queues the control messages.

I'm also wondering about Go's http2 package.

https://github.com/golang/go/issues/33631 (the fix is part of Go 1.12.8 and 1.11.13, which were released today because of this)

Here is a server vulnerability matrix... pretty much, if you are running HTTP/2 you are exposed, and your vendor has a patch waiting for you.


A little bit disappointed they didn't test my implementation (https://github.com/Matthias247/http2dotnet). It's more or less the only feature-complete standalone HTTP/2 implementation for .NET, but somehow that ecosystem seems to be too niche for anybody to get interested in using it.

Actually, I feel like it avoids most if not all of the described issues. async/await, combined with the design principles of always exercising backpressure on the client and having no internal queues, goes a long way here.

Envoy appears to have been updated today to 1.11.1 to mitigate some of these issues. I upgraded and have not experienced any problems yet.

It's a nice write-up, really, but any HTTP/2 implementation should be tested with a nice packet fuzzer. Indeed, server providers should compete in the square miles of the datacenter they use to run the fuzzer. Also, the best servers should come with several defense perimeters, including one with geo-ip-directed tactic missiles. Nothing less will do.

For some of the DoS issues, I'm not sure fuzzers would be that effective. The flooding issues are about quantity and frequency rather than how the packet is crafted.

Is there a better list of fixed versions for, e.g., Apache / Lighttpd (n/a, no HTTP/2 support) / nginx?

For nginx, versions 1.16.1+ and 1.17.3+ from upstream fix at least three of the vulnerabilities (CVE-2019-9511, CVE-2019-9513, CVE-2019-9516). However, if you use a version provided by a distribution's repositories (e.g., the nginx 1.16 included with Ubuntu 18.04 LTS), you'll need to watch for those security advisories and fixes separately, as they may carry different version numbers due to backporting.

Flow control in application protocols over TCP has been tried (in SSHv2), and it has failed. In SSHv2, flow control acts as a handbrake on all channels: not good, though it does fix the starving of non-bulk channels by bulk channels. It's bound to fail in HTTP/2 as well.

I wish application-level flow control would be paired with a database of internet speeds per subnet. They already track everything else, my connection shouldn’t repeatedly slow because the congestion algorithm doesn’t know my maximum download speed.

Not subnet but link. Anyway, that can't quite work, because even if you could make it accurate there's still congestion to worry about. What you want is something like explicit congestion control, with IP options for reporting max path bandwidth or the like, but that too is susceptible to spoofing and DoSing.

No, not explicit congestion control, the amount of entropy in what internet plan a user has (outside of a lossy CDMA-type connection) is fairly low. The sender should set window sizes, not assume ISPs should set an IP-level flag. Probes for speeds should be less aggressive since it isn’t unknown how fast a typical connection is.

Not link, but subnet, since dynamic IPs exist. IP addresses are usually assigned geographically.

Of course this affects DoH servers, too.
