HTTP/2 – How did we get here (kamranahmed.info)
215 points by kamranahmed_se on Aug 13, 2016 | 19 comments



How did we get here? By pushing the previous protocol to the limit, observing where it breaks down, and fixing those things. We could even predict what will replace HTTP/2 by looking at where it starts to break down.

I work with HTTP/2 daily, and there are some pain points when running at high speeds:

- Headers are still head-of-line blocking. You must synchronize sending them to maintain the HPACK table state. At a high number of requests per second, this is a bottleneck.

- Running over TLS is a CPU bottleneck, since encrypted messages are sequential. We get around this by making multiple TCP connections (sketched below).

- Long-lived HTTP/2 connections will often break because of NATs (home internet) or changing IP addresses (mobile). A single dropped packet kills throughput for high-speed links too.

All of these are addressed by the QUIC protocol. I suspect that HTTP/2 will eventually be the last major protocol over TCP, because most of the aforementioned problems come from running over it.
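
To make the "multiple TCP connections" workaround concrete, here's a minimal sketch in Go (my choice; the comment doesn't name a stack, and the URL is a placeholder). Each worker gets its own http.Transport, and therefore its own TLS/TCP connection, instead of letting a single Transport multiplex everything onto one connection:

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        const conns = 4 // hypothetical: one HTTP/2-over-TLS connection per worker

        var wg sync.WaitGroup
        for i := 0; i < conns; i++ {
            // A separate Transport means a separate connection pool, so the TLS
            // record encryption for each stream of requests runs independently.
            client := &http.Client{Transport: &http.Transport{ForceAttemptHTTP2: true}}

            wg.Add(1)
            go func(id int, c *http.Client) {
                defer wg.Done()
                resp, err := c.Get("https://example.com/") // placeholder URL
                if err != nil {
                    fmt.Println("worker", id, "error:", err)
                    return
                }
                resp.Body.Close()
                fmt.Println("worker", id, resp.Proto, resp.Status)
            }(i, client)
        }
        wg.Wait()
    }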


> Running over TLS is a CPU bottleneck, since encrypted messages are sequential.

Does TLS off-loading help in this case?

> Headers are still head-of-line blocking. You must synchronize sending them to maintain the HPACK table state

Is this still an issue if you only use the static table, and send any other name-value pairs as either dynamic non-indexed [1] or, more likely, dynamic never-indexed [2]? (A sketch of that approach follows the links below.)

[1] https://http2.github.io/http2-spec/compression.html#literal....

[2] https://http2.github.io/http2-spec/compression.html#literal....
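
A minimal sketch of that idea using Go's golang.org/x/net/http2/hpack package (my choice of library, not something from the thread): shrinking the dynamic table to zero leaves only the static table, so there's no per-connection encoder state to keep in sync, and marking a field Sensitive emits it as a never-indexed literal.

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2/hpack"
    )

    func main() {
        var buf bytes.Buffer
        enc := hpack.NewEncoder(&buf)
        enc.SetMaxDynamicTableSize(0) // static table only: no shared mutable HPACK state

        // ":method: GET" hits a static-table entry; the cookie is marked
        // Sensitive, so it goes out as a never-indexed literal.
        enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
        enc.WriteField(hpack.HeaderField{Name: "cookie", Value: "session=secret", Sensitive: true})

        dec := hpack.NewDecoder(0, func(f hpack.HeaderField) {})
        fields, err := dec.DecodeFull(buf.Bytes())
        if err != nil {
            panic(err)
        }
        for _, f := range fields {
            fmt.Printf("%s: %s\n", f.Name, f.Value)
        }
        fmt.Println("encoded size:", buf.Len(), "bytes")
    }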


> Running over TLS is a CPU bottleneck, since encrypted messages are sequential.

ChaCha20 takes around 4 cycles/byte on a Xeon. At 3 GHz that's 750 MB/s (about 6 Gbit/s), i.e. most of a 10 Gbit pipe fed with a single core, without any need for parallelization.
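
For anyone who wants to sanity-check that number on their own hardware, here's a rough single-core measurement in Go (an assumption; it uses golang.org/x/crypto/chacha20poly1305, so it measures the full ChaCha20-Poly1305 AEAD as used in TLS, not bare ChaCha20):

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/chacha20poly1305"
    )

    func main() {
        key := make([]byte, chacha20poly1305.KeySize)
        aead, err := chacha20poly1305.New(key)
        if err != nil {
            panic(err)
        }
        // A fixed zero nonce is only acceptable because this is a throughput
        // benchmark, not real encryption.
        nonce := make([]byte, aead.NonceSize())

        plaintext := make([]byte, 16*1024) // roughly one full TLS record
        out := make([]byte, 0, len(plaintext)+aead.Overhead())
        const iters = 50000

        start := time.Now()
        for i := 0; i < iters; i++ {
            // Seal encrypts and authenticates; dst is reused to avoid allocations.
            out = aead.Seal(out[:0], nonce, plaintext, nil)
        }
        elapsed := time.Since(start)

        total := float64(len(plaintext)) * iters
        fmt.Printf("~%.0f MB/s on a single core\n", total/elapsed.Seconds()/1e6)
    }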


Is that running in a loop, or mixed into a larger application context where cache lines might be flushed and replaced by others? That often makes things behave very differently in small benchmarks vs. production.


Small message sizes don't seem to slow it down much, according to DJB's paper [0]. Unlike AES, it doesn't use large S-boxes or much other internal state, so its cache footprint is fairly small.

[0] https://cr.yp.to/snuffle/salsafamily-20071225.pdf#2 (page 2)


Exactly! One of the big pain points in HTTP/1.1 is that you don't know who the client is. You get six or more connections at a time, so you have to correlate them using cookies or other IDs. It's easy to proxy and cache, so you don't know if there's a single person on the other end or many people. And since there are so many connections, they close within a minute or less, so you have to figure out which connections belong to the same user from earlier.

Now with HTTP/2 there's only one connection, and it stays open for a long time, so the connection itself can be used to identify a client session. It's TLS-only, so it's much less likely to be a proxy, since a proxy needs a trusted certificate added to the browser. And connections to a third-party domain are shared across all pages, so if you are doubleclick.net you know the same user is browsing different sites.

The new protocol makes tracking users much easier and fixes a huge privacy problem in HTTP/1.1.


SMTP is way beyond its limit and I see no good replacement in sight.


The TLS one: is that specific to HTTP/2, or to TLS in general? That's an important distinction.

The NAT one is interesting, though.


Really great article. It outlines the history, evolution, and problems solved by each iteration. One thing they didn't get into was the methods (OPTIONS, PATCH, PUT, DELETE, etc.). I always found that one to be weird, as it seemed proprietary. After a bit of research [0], I found that it was because the creator had identified multiple ways they wanted to modify a document. I find that fascinating, because there is no "pay" method, or "mutate 5 objects in a database and send 2 emails but make sure that the POST data is encrypted" method. Perhaps the creator didn't have the foresight to simply specify whether data should be parsed from the URL or from a body. Perhaps the HTTP method still has more room to grow. (A small illustration of the less common methods follows the link below.)

[0] https://www.quora.com/What-is-the-history-of-HTTP-verbs-PUT-...
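
A small illustration (Go, my choice; the endpoint is just a public echo service) of the point that the method is only a verb on the request line, with the actual semantics left to the server, which is partly why there's no "pay" method:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // PATCH, DELETE, OPTIONS, etc. go through exactly the same machinery
        // as GET/POST; the server decides what the verb means for the resource.
        req, err := http.NewRequest("PATCH", "https://httpbin.org/patch",
            strings.NewReader(`{"title":"updated"}`))
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }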


One thing that is slightly misleading about the article is that it talks as if HTTP/1.0 first came into being in 1996, and everyone was using version 0.9 before then. In fact, one need only observe that the original spec for adding cookies to HTTP came out in 1994 to realize that HTTP/1.0 must have been widely deployed by then; it is only the standard that took until 1996 to write up.


Even then, nobody really implemented 1.0 per se, because nobody really changed anything to match what the (ex post facto) standard said.


Wasn't the spec designed so that pretty much everything was already mostly conformant?


Correction: HTTP/1.0 doesn't have Host headers at all, while HTTP/1.1 requires them.

It's one of the easiest ways to distinguish the two versions. Host headers enable virtual hosts (multiple domains on one IP), letting the Internet grow to billions of domains without requiring an IP for each of them.
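
A minimal sketch of the name-based virtual hosting that the Host header enables, in Go (my choice; the hostnames are placeholders). Go's ServeMux can route on the host part of a pattern, so one listener on one IP serves many sites:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()

        // Patterns that start with a hostname only match requests whose Host
        // header names that site.
        mux.HandleFunc("alice.example/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Alice's site")
        })
        mux.HandleFunc("bob.example/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Bob's site")
        })

        // Both names can resolve to the same IP; the Host header picks the
        // handler. An HTTP/1.0 request without a Host header can't be routed
        // this way, which is why HTTP/1.1 made the header mandatory.
        if err := http.ListenAndServe(":8080", mux); err != nil {
            panic(err)
        }
    }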


Host headers were widely deployed before HTTP/1.1. IIRC, MS IE 3 was the first IE version to support Host:, in 1996; Netscape supported it earlier.


Neat! I thought Host was optional; I didn't realize it didn't exist at all.


I've been upset with HTTP/2 / SPDY since the very beginning (even though I've never used or tried it). Now, thanks to you, I understand why.

It's not a replacement or enhancement of HTTP, and it's definitely not a transport layer. It has no obvious gain for web services, and since single-page apps tend to optimize resource delivery anyway (webpack, browserify, SCSS, Stylus & co.), very few resources are left to deliver, so there's no urgent "need" for it there.

So, for an _application layer_, it's coming a little late. I guess SPAs with tightly intertwined client-side and server-side code (Node.js/Express & SPA) might gain from it, but it has to be considered from the very first line of app development; the transition/enhancement cost-to-gain ratio for existing apps is too hard to plan.

Great writing!


I actually disagree with both your conclusions: the techniques you mentioned exist to get around painful deficiencies of HTTP/1.1, and are anti-patterns in HTTP/2. Sprites, for example, exist to reduce the latency of loading many small images, which is now no longer an issue (see httpvshttps.com for a demonstration).

Also, my brief experiments using HTTP/2 in Haskell involved zero changes to my web service handler code, and maybe one or two extra lines to give paths to the TLS cert files. Web services will benefit massively from HTTP/2 in some domains; I do a lot of web geospatial work, and using it for servers which return hundreds of image tiles will be a massive win.
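
Not the Haskell setup described above, but the same observation in Go for comparison (the cert/key paths and the tile handler are placeholders): the handler code stays protocol-agnostic, and serving over TLS is enough for HTTP/2 to be negotiated via ALPN.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Hypothetical tile handler: under HTTP/2, hundreds of these small
        // responses are multiplexed onto a single connection.
        http.HandleFunc("/tiles/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "tile for %s (served over %s)\n", r.URL.Path, r.Proto)
        })

        // net/http enables HTTP/2 automatically for TLS servers, so the only
        // HTTP/2-specific change really is pointing at the cert files.
        if err := http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil); err != nil {
            panic(err)
        }
    }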


Many people think that HTTP was the ultimate invention, and often don't seem to know that it's clearly an evolution of Gopher. https://en.wikipedia.org/wiki/Gopher_(protocol)


Nice post!





