
Hello HTTP/2, Goodbye SPDY - Nimi
http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html
======
drderidder
I got a copy of Poul-Henning Kamp's critique "HTTP/2.0 - The IETF is Phoning
It In" off the ACM website before the link went dead. Here's a bit of what he
said about it:

"Some will expect a major update to the world’s most popular protocol to be a
technical masterpiece and textbook example for future students of protocol
design. Some will expect that a protocol designed during the Snowden
revelations will improve their privacy. Others will more cynically suspect the
opposite. There may be a general assumption of "faster." Many will probably
also assume it is "greener." And some of us are jaded enough to see the "2.0"
and mutter "Uh-oh, Second Systems Syndrome."

The cheat sheet answers are: no, no, probably not, maybe, no and yes.

If that sounds underwhelming, it’s because it is.

HTTP/2.0 is not a technical masterpiece. It has layering violations,
inconsistencies, needless complexity, bad compromises, misses a lot of ripe
opportunities, etc. I would flunk students in my (hypothetical) protocol
design class if they submitted it. HTTP/2.0 also does not improve your
privacy. Wrapping HTTP/2.0 in SSL/TLS may or may not improve your privacy, as
would wrapping HTTP/1.1 or any other protocol in SSL/TLS. But HTTP/2.0 itself
does nothing to improve your privacy. This is almost triply ironic, because
the major drags on HTTP are the cookies, which are such a major privacy
problem, that the EU has legislated a notice requirement for them. HTTP/2.0
could have done away with cookies, replacing them instead with a client
controlled session identifier. That would put users squarely in charge of when
they want to be tracked and when they don't want to—a major improvement in
privacy. It would also save bandwidth and packets. But the proposed protocol
does not do this.

[He goes on to tear a strip off the IETF and the politics behind HTTP/2.0 ...]

~~~
enneff
Whatever PHK wants it to be, HTTP/2 is a great step forward from where we are
today. Check this out:
[https://http2.golang.org/gophertiles](https://http2.golang.org/gophertiles)

This is going to make the web so much faster, particularly on mobile devices.

~~~
jimktrains2
> Whatever PHK wants it to be, HTTP/2 is a great step forward from where we
> are today.

A hugely bloated, binary protocol is better than the simple, text-based one we
have today? I greatly disagree. HTTP/1.1 could use an update, but HTTP/2 was
not the answer.

~~~
MichaelGG
I wonder if anyone complaining about binary formats has ever written a high
performance parser.

In particular, HTTP's text format is a mess. You can continue headers from one
line to the next. You can _embed comments into header values_. Seriously.
Comments. In a protocol's messages. It's moronic and indefensible. Anyone who
prefers that is probably assuming that text equals easy to implement, or
something like that.
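
For anyone curious what "continue headers from one line to the next" looks like in practice, here is a rough Python sketch (not an RFC-complete parser) of the line folding ("obs-fold") and parenthesised comments a permissive HTTP/1.1 parser has to cope with:

    # Toy illustration of HTTP/1.1 header quirks: folded continuation lines
    # (a line starting with SP/HTAB belongs to the previous header) and
    # comments embedded in header values (allowed in e.g. Via, User-Agent).
    raw = (
        b"HTTP/1.1 200 OK\r\n"
        b"Via: 1.1 proxy.example\r\n"
        b" (a comment split\r\n"          # folded line: starts with a space
        b"  across two lines)\r\n"
        b"Content-Length: 0\r\n"
        b"\r\n"
    )

    def parse_headers(data):
        lines = data.split(b"\r\n")
        status, headers = lines[0], []
        for line in lines[1:]:
            if not line:                       # blank line ends the header block
                break
            if line[:1] in (b" ", b"\t"):      # obs-fold: glue onto previous header
                headers[-1] += b" " + line.strip()
            else:
                headers.append(line)
        return status, [h.split(b":", 1) for h in headers]

    print(parse_headers(raw))

A binary framing layer sidesteps this whole class of parsing ambiguity, which is much of the argument for it.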

~~~
jimktrains2
I never said HTTP/1.1 is perfect. There are many optimizations and fixes
possible, and you've alluded to one of them.

------
anderspetersson
Looking forward to when HAProxy support for HTTP/2 lands since they refused to
implement SPDY support.

Here's a list of common servers' support for SPDY/HTTP2:
[https://istlsfastyet.com/#server-performance](https://istlsfastyet.com/#server-performance)

~~~
js4all
Current HAProxy already supports the SPDY/HTTP2 handshake via NPN and ALPN.
You have to route to the proper backends, and you also need to provide an
HTTP/1.1 fallback for clients that can't speak it. Once set up, it works very
well. I am using it for our blog
([https://blog.cloudno.de](https://blog.cloudno.de)).

~~~
Apofis
What about SSL Termination? Would it still work if I terminate?

~~~
js4all
In the described setup, HAProxy is doing SSL termination. See the gist for the
cert and crypto parameters. This gets an A+ from SSL Labs.

~~~
Apofis
Thank you!

------
drawkbox
HTTP/2 might have second-system syndrome.

A better way would have been to keep SPDY separate, since there is usefulness
there, and to get to HTTP/2 incrementally, using an iteration of something
like AS2/EDIINT
([https://tools.ietf.org/html/rfc4130](https://tools.ietf.org/html/rfc4130)),
which does encryption, compression and digital signatures on top of existing
HTTP (HTTPS is usable as it is now but not required, since AS2 uses the best
compression/encryption the server supports). That standard still adheres to
everything HTTP and hypertext-transfer based; it does not become a binary file
format but relies on baked-in MIME.

An iteration of that would have been better for interoperability while staying
secure and fast. I have previously implemented it directly from the RFC for an
EDI product; it is used for sending financial EDI documents by the largest
companies in the world (Wal-mart, Target, the DoD) as well as most small and
medium businesses with inventory. There are even interoperability testing
centers set up for testing and certifying products, so that the standard works
for all vendors and customers. An iteration of this would have fit in just as
easily and been more flexible on the security, compression and encryption
side, and it all works over plain HTTP if you want, since it encrypts the
body.

~~~
UnoriginalGuy
I've used AS2 extensively (in EDI) and to be frank, fuck that. AS2 is a really
bad version of HTTPS: you take HTTPS, you remove the auto-negotiation (email
the certificates!), you disable CA certificate checking (self-signed for all
the things), and then you allow optional HTTPS on top of AS2 (which is a huge
nightmare in its own right).

Imagine this scenario, two people want to interconnect, here's the process:

- They insecurely email their public key (self-signed) and URL (no MitM
protection)

- You insecurely email your public key (self-signed) and URL

- They have an HTTPS URL

- Now, the thing to understand about AS2 is that when you connect to THEM, you
give them a return URL for confirming receipt (MDN) of the transaction.

- HTTPS becomes a giant clusterfuck in AS2 because people try to use standard,
popular HTTPS libraries (ones that do CA checking, domain checking, and other
checks which are fine for typical web-browser-style traffic, but not for
specialised AS2 traffic). In the context of AS2, where certificates are often
local and self-signed (some even use this for HTTPS) and the URL is rarely
correct for the certificate, they fall over all of the time.

- Worse still, some sites want to use either HTTP only or HTTPS only, so when
you connect to an HTTPS URL but give them an HTTP MDN URL, sometimes they will
work, sometimes they will try the HTTPS version of the URL and then fall over
and die, and other times they will error just because of the inconsistency.

Honestly, I used AS2 for over five years. Looking back, it would have saved
everyone hundreds of man-hours to have just used HTTPS in the standard way and
implemented certificate pinning (e.g. "e-mail me the serial number", or heck,
just list it in your documentation).

The only major advantage of AS2 is the MDNs. However, even there, there is
massive inconsistency: some return bad MDNs for bad data, while others only
return bad MDNs for bad transmission of data (i.e. they only check that what
you sent is what was received 1:1, so you could send them a series of 0s and
get a valid MDN, because they check the data later and then email).

To be honest I hate MDN errors. They don't provide human-readable information
in an understandable way. They're designed for automation which rarely exists
in the wider world (between millions of different companies with hundreds of
systems).

Give me an email template for errors any day; that way there can be a brief
generic explanation plus formatted data to better explain things. The only
thing MDNs do well is data consistency checking, which is legitimately nice,
but almost every EDI format I know already has that built in (i.e. segment
counters, end segments, etc.).

If I were to re-invent AS2, I'd build the entire thing on standard HTTPS: no
plain HTTP allowed, no hard-coded certificates (i.e. you receive a public key
the same way your web browser does), certificate pinning as a key part, and
MDNs scrapped in favour of a hash sent as a standard header in the HTTPS
stream. Normal HTTP REST return codes would be used to indicate success (e.g.
200 OK/202 ACCEPTED, 400 Md5Mismatch/InvalidInput/etc.).

That way nobody has to deconstruct an MDN to figure out the error. And
handling a small handful of HTTP codes is much simpler than the information
barrage an MDN contains anyway: it is both easier to automate and easier for
humans.
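
For illustration only, here is roughly what that reinvented flow could look like in Python; the header name and endpoint are made up, and the hash-as-header convention is just the idea sketched above, not any existing standard:

    # Hypothetical "AS2 over plain HTTPS": send the document with a content
    # hash header; the receiver answers with ordinary HTTP status codes
    # (200/202 accepted, 400 hash mismatch or invalid input) instead of an MDN.
    import hashlib
    import urllib.request

    payload = b"ISA*00*...~"                      # some EDI document
    digest = hashlib.sha256(payload).hexdigest()

    req = urllib.request.Request(
        "https://edi.example.com/inbox",          # hypothetical receiver endpoint
        data=payload,
        headers={
            "Content-Type": "application/edi-x12",
            "X-Content-SHA256": digest,           # replaces the MDN round trip
        },
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:     # raises on 4xx/5xx
        print(resp.status)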

~~~
drawkbox
I wasn't saying to use AS2 directly, but rather an iteration of it with the
old pain points solved; it is a decade old now. Some of it wouldn't be needed,
and it would need a revision.

The thing AS2 got right is that it rides on top of the existing MIME/HTTP
infrastructure. The other part is doing encryption/compression of any type
specified by the server/client. And there is some benefit to
encryption/compression/digital signing over plain HTTP.

HTTP/2 might be the first protocol for the web that isn't based on MIME, for
better or for worse. We are headed toward a _binary protocol_ that is still
called the Hypertext Transfer Protocol.

HTTP/2 looks more like TCP/UDP, or the kind of small layer on top of it that
you might build for multiplayer game servers. Take a look at the spec and at
all the binary blocks that look like file formats from '93:
[https://http2.github.io/http2-spec/](https://http2.github.io/http2-spec/). It
is a munging of HTTP/HTTPS/encryption into one big binary ball. It will
definitely be more CPU intensive, but I guess we are going live either way!
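
To make that concrete, every HTTP/2 frame begins with the same 9-octet binary header; a minimal Python sketch of how it unpacks (the example being a SETTINGS frame with an empty payload), per the frame layout in the spec linked above:

    # HTTP/2 frame header: 24-bit payload length, 8-bit type, 8-bit flags,
    # then a reserved bit plus a 31-bit stream identifier.
    import struct

    def parse_frame_header(data):
        length_hi, length_lo, ftype, flags, stream_id = struct.unpack(">BHBBI", data[:9])
        length = (length_hi << 16) | length_lo
        return length, ftype, flags, stream_id & 0x7FFFFFFF  # drop the reserved bit

    header = bytes([0, 0, 0, 0x4, 0, 0, 0, 0, 0])   # SETTINGS (type 0x4), stream 0
    print(parse_frame_header(header))               # (0, 4, 0, 0)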

Plus, AS2 was a huge improvement over nightly faxing of orders; large
companies were doing that as late as 2003. AS1 (email based) and AS3 (FTP
based) were available as well, but HTTP with AS2 is what fulfillment processes
use now. And yes, it has tons of problems, but the core idea of
encryption/compression/signatures/receipts over the existing infrastructure is
nice. Everything else you mention exists and is definitely among the bad
parts, though much of that wouldn't be needed in the core.

------
klapinat0r
SPDY came and went before I had to implement it. Phew.

On a serious note: it's nice to see ALPN being used in HTTP/2

~~~
billyhoffman
ALPN has been used with SPDY for a while now. It's one of the nice
improvements that fell out of testing/iterating SPDY in public. The NPN
approach was a bad idea since the client drove what got picked (with NPN, the
server tells the client what other protocols it supports in the ServerHello
and the client picks whatever it wants; ALPN reverses that).
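
For a concrete picture of the client side, here is a small sketch using Python's standard ssl module; the target host is just the Go demo server mentioned elsewhere in the thread, and whether it still negotiates "h2" is an assumption:

    # ALPN from the client's point of view: the client offers a protocol list
    # during the TLS handshake and the server makes the final choice.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "spdy/3.1", "http/1.1"])    # client's offer

    with socket.create_connection(("http2.golang.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="http2.golang.org") as tls:
            print(tls.selected_alpn_protocol())               # e.g. "h2"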

~~~
klapinat0r
Sorry, I didn't make it clear: that's my sentiment as well. I'm glad it
survived, so to speak.

------
jjcm
Are there any good reverse proxies out there that support HTTP/2? Right now
I'm using varnish, but I'd love to switch over to something supporting this.

------
donatj
Can someone explain to me the actual upside of header compression? I work on a
fairly major educational site, and calculating it now, our request + response
headers come out to 1,399 bytes. Gzipped, they come out to 1,421 bytes: a
small net increase.

Am I missing something? Do some people have so many cookies that this makes a
difference or something?
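
(For what it's worth, a small net increase like that is what you'd expect from gzip on data that is already close to incompressible, e.g. long high-entropy cookie or token values: gzip adds its own header, trailer and per-block bookkeeping. A rough Python stand-in, using random bytes in place of the actual header blob:)

    # gzip on ~1.4 KB of incompressible data: the output is typically a
    # couple of dozen bytes *larger* than the input.
    import gzip
    import os

    blob = os.urandom(1399)                      # stand-in for dense header bytes
    print(len(blob), len(gzip.compress(blob)))   # e.g. 1399 1422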

~~~
hacst
The header compression in HTTP 2.0 isn't based on gzip or anything like that;
the CRIME attack pretty much killed those approaches dead. It's more akin to
differential updates to the headers during the lifetime of the connection. So
if you request a lot of files with fairly similar headers, you'll effectively
only have to transmit the bulk of the headers once, while the other requests
efficiently re-use the previously transmitted fields.

So to answer your question: Header compression as employed in HTTP 2.0 helps
if you do many requests with similar headers on the same connection.
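
A toy sketch of that differential idea (this is not real HPACK, which also has a static table, table size limits and Huffman coding), just to show why repeated headers get cheap after the first request:

    # Headers already sent on this connection live in a shared table; later
    # requests can refer to them by index instead of resending name:value.
    table = []

    def encode(headers):
        out = []
        for h in headers:
            if h in table:
                out.append(("index", table.index(h)))   # a byte or two on the wire
            else:
                table.append(h)
                out.append(("literal", h))              # full name:value, sent once
        return out

    req1 = [("user-agent", "Mozilla/5.0 ..."), ("accept", "text/html"), (":path", "/a.png")]
    req2 = [("user-agent", "Mozilla/5.0 ..."), ("accept", "text/html"), (":path", "/b.png")]

    print(encode(req1))   # everything goes out as literals
    print(encode(req2))   # only :path differs; the rest become table indices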

~~~
dragonwriter
> So to answer your question: Header compression as employed in HTTP 2.0 helps
> if you do many requests with similar headers on the same connection.

In general, HTTP/2.0 seems to be about improving things _if_ you do many
requests over the same connection.

~~~
ademarre
Doing many requests over the same connection is very common.

~~~
frankzinger
And one of the fundamental goals of HTTP/2 was to get clients to create only a
single connection to a server.

------
hannob
Unfortunately, right now Apache doesn't support HTTP/2 at all. There was a
mod_spdy, but it's pretty much dead. Apache took it over from Google some time
ago, but since then nothing has happened.

~~~
josteink
This is what happens when you let Google (or other big corporations) write
internet-standards.

If it isn't community-driven, you can't expect it to be implemented in the
places the big corp doesn't care for.

So in this case Apache, one of the major drivers in propelling the WWW, may
end up not supporting a "crucial" WWW-related standard, because the community
was never invited.

If anyone still has any doubts why letting Google control internet-standards
is bad, this is currently my best example.

Technically speaking, the internet is the result of what we come up with, when
we all work together. Not working together will quickly end up as not working
at all.

~~~
netik
I think the reality here is that this is what happens when you let companies
fight over a standard in private.

What I saw on the HTTP/2 mailing lists was "We have a new standard", "It
demands SSL, but we don't want that", and then "SPDY is everywhere, let's use
that."

Shortly after it was "Omg, we can't call it spdy, because then Microsoft's
interests will be left behind and Google will have won. Let's abandon the
mandatory SSL requirement and rename SPDY to HTTP2..."

I feel like we've all lost here.

We implemented SPDY at Twitter - the savings were fantastic and the browser
performance was amazing. Google and FB did the same. It's nearly like 800M
users said it was great; can we move on now?

------
xpose2000
Does anyone know if Cloudflare has plans to implement HTTP/2? Right now they
support SPDY.

I found the answer from their blog:

"Part of the service CloudFlare provides is being on top of the latest
advances in Internet and web technologies. We've stayed on top of SPDY and
will continue to roll out updates as the protocol evolves (and we'll support
HTTP/2 just as soon as it is practical)."

~~~
derefr
Since CloudFlare is an OpenResty (nginx + lua) shop, they'll likely get it as
soon as it's in nginx.

~~~
MichaelGG
OpenResty does not include SPDY, as there are incompatibilities between it and
Lua. But I'm sure CloudFlare has the engineering resources in-house to decide
what they want to support and when :)

------
fletchowns
Anybody know when nginx will support it?

~~~
listic
They say it already does: "Right now, both the Apache and nginx web servers
support HTTP/2"
[http://moz.com/blog/http2-a-fast-secure-bedrock-for-the-futu...](http://moz.com/blog/http2-a-fast-secure-bedrock-for-the-future-of-seo)

The thinking is, I believe, that the "SPDY/4 revision is based upon HTTP/2
wholesale" [http://http2.github.io/faq/](http://http2.github.io/faq/) and that
nginx already supports SPDY via ngx_http_spdy_module:
[http://nginx.org/en/docs/http/ngx_http_spdy_module.html](http://nginx.org/en/docs/http/ngx_http_spdy_module.html)
Version 3.1 though...

So it's either there or almost there.

~~~
fletchowns
Here's the related thread on the mailing list:
[http://mailman.nginx.org/pipermail/nginx/2015-February/04658...](http://mailman.nginx.org/pipermail/nginx/2015-February/046583.html)

------
mahouse
Is HTTPS mandatory on HTTP/2 like it was on SPDY?

~~~
dragonwriter
> Is HTTPS mandatory on HTTP/2 like it was on SPDY?

Not in terms of the protocol spec, but most major browser vendors have
indicated that they only intend to support HTTP/2 in-browser over TLS
connections, so in practice, for typical browser-targeting use cases, it looks
like it will be, at least initially.

~~~
Touche
That's so lame. It's so easy to set up a new website today; this is going to
be a huge burden in the future. Some of us still make websites for fun, not as
businesses. I guess I'll have to buy a cheap SSL certificate from some sleazy
website every time I feel creative.

~~~
icebraining
I'm puzzled, did you miss the announcement? The EFF, Mozilla, and others are
creating a CA that will give free certs to everyone:
[https://letsencrypt.org/](https://letsencrypt.org/)

~~~
byuu
Will they give out wildcard domain certificates?

~~~
mahouse
No.

------
fdsary
What happens if someone built a service based on it? Should they never trust
browsers to keep alive even the shitty (in comparison to the free and
standardised HTTP/2) features? What's great about the web is that 20-year-old
services still work in the latest runtimes (browsers).

~~~
azakai
It is always risky to build a service based on something that is not yet
standardized. SPDY was in progress to be standardized, but the process ended
up with parts of it in HTTP2, making SPDY unnecessary, as I understand things.

It would be the _right_ thing for Google to remove SPDY at this point;
otherwise it would be running a nonstandard protocol that other browsers do
not run, which can lead to fragmentation - as we saw just recently with an API
that sadly Google has not removed despite it being nonstandard (FileSystem in
the WhatsApp "Web" app).

edit: To clarify, I mean that what Google is doing with SPDY sounds like the
right thing. I don't mean it should remove it right now; I mean it was the
right thing to do, right now, to announce that it will be removed after a
reasonable delay (and 1 year sounds reasonable).

~~~
kozhevnikov
To be fair Google did happily kill Gears when HTML5 became a viable [early
draft] standard.

~~~
azakai
Agreed, Google did the right thing to remove Gears.

My concern is that, overall, Google has a bad track record in this area:
FileSystem is still enabled, WebSQL is still enabled, PNaCl is still enabled
(edit: and H.264 was never removed despite announcing the intent to do so).

~~~
kllrnohj
> PNaCl is still enabled

Eh? What is the spec competitor to PNaCl? asm.js is a cute trick but it still
lacks threads which is easily one of the biggest features of PNaCl. So what
actual viable alternatives are there to PNaCl?

~~~
azakai
We can discuss alternatives to PNaCl, but that isn't really the issue. Even if
you have something you believe has no peer at the moment, that doesn't mean
you can ship it without regard for the standards process. It's still wrong for
all the usual reasons.

Of course, not having a good alternative _might_ mean that the other parties
in the standards process should take another look at it. But again, that's a
totally separate issue from whether it is ok to just ignore the standards
process and ship whatever you want, which is what Google is doing here.

~~~
kllrnohj
> that doesn't mean you can ship it without regard for the standards process.
> It's still wrong for all the usual reasons.

What Google is doing with PNaCl _is_ the standards process. Standards start
life by being not-standards that someone shipped and enough people liked to
make it into a standard.

There is nothing wrong here, nothing whatsoever. This is exactly how the
process should work. Design-by-committee standards suck. Standards that won
through raw competition? Those are all the good ones.

~~~
azakai
While I agree with you that competition is crucial, and without
experimentation we will get nowhere, it is worth remembering that IE6 and all
of its specific behaviors "won" through "raw competition".

Often things win _not_ through fair competition. For example, WebSQL "won" on
mobile because WebKit won on mobile, and WebKit happened to have WebSQL. If
WebKit had had, say, the Audio Data API (which it did not), then the Audio
Data API would have "won". Neither of those APIs won or would have won on its
own merits, but because it was backed by the 800 pound gorilla in the space.
(I chose Audio Data as an example because it is not in the same space as
WebSQL, i.e. not competing with it, and because it was a nice API that
failed.)

And the problem is that PNaCl will fragment the web, and already has. That's a
serious problem - for everyone but Google.

~~~
dragonwriter
> it is worth remembering that IE6 and all of its specific behaviors "won"
> through "raw competition".

It is worth noting that the findings in the antitrust actions in the US over
Microsoft's illegal and anti-competitive behavior in establishing IE's
dominance indicate that that claim is, at best, misleading.

~~~
azakai
I would argue the opposite, in fact - that it shows what happens with pure,
unrestrained competition, which leads to monopolies and other forms of
competition suppression, ironically enough.

Regardless, we don't need to agree on that point. There are plenty of other
examples in tech (and outside) of things winning through "raw competition"
that are just not that good.

------
est
Well how about the fate of the cute little protocol called QUIC?

~~~
wmf
Maybe QUIC is the prototype for HTTP/4.

~~~
briandh
QUIC is not a replacement for HTTP; it works below it. See
[https://docs.google.com/document/d/1lmL9EF6qKrk7gbazY8bIdvq3...](https://docs.google.com/document/d/1lmL9EF6qKrk7gbazY8bIdvq3Pno2Xj_l_YShP40GLQE/edit)

------
amelius
Anybody aware of a good C++ server framework supporting most of HTTP/2,
including websockets?

~~~
fmela
Facebook's proxygen has HTTP/2 support "in progress":
[https://github.com/facebook/proxygen](https://github.com/facebook/proxygen)

~~~
amelius
Yes, but I believe they don't support websockets yet. At least, searching
their github for "websockets" gives only two broken links.

UPDATE: I noticed somebody wrote websocket support [1], but it hasn't been
merged into master yet.

[1]
[https://github.com/kekekeks/proxygen](https://github.com/kekekeks/proxygen)

------
therealmarv
Does somebody have good nginx configurations for HTTP/2? It's good that
browsers are going in this direction, but at the moment I have no clue how to
implement HTTP/2 (is there a SPDY fallback?) on my nginx server :(

------
drawkbox
HTTP/2 is an ugly mess of taking something simple and making it more complex
for minimal benefit. It could have been so much better than a binary mess.

Engineers who take simple concepts and add complexity are not engineers; they
are meddlers.

It could be as long lived as XHTML.

I was hoping for something more like SCTP rather than a bunch of kludge on top
of what is a pretty beautiful protocol in HTTP/1.1. Protocol designers of the
past seemed to have a better long view, mixed with the simplicity and focus on
interoperability that you like to see from engineers.

~~~
smegel
For those slamming HTTP/2.0, how do they rate SPDY?

~~~
drawkbox
SPDY was great for Google and allowed them to change and take hold of HTTP/2.

I am sure it saved them lots of money in improved speed, but at the trade-off
of complexity and minimal adoption of the standard, because it wasn't
beneficial to everyone. HTTP/2 is a continuation of that effort by Google,
which I would probably do too if I were them. But in the end, neither is that
big an improvement for what they take away.

Of course I use both, but I don't think they will last very long until the
next version; this was too fast, and there are large swaths of engineers who
do not like being forced into something that has minimal benefits when it
could have been a truly nice iteration.

HTTP/2 is really closer to SPDY, and I wish they would have just kept it as
SPDY for now and let a little more time go by to see if it is truly useful
enough to merge into HTTP/2. HTTP/2 is essentially SPDY from Google, tweaked
and injected into the standard, which has huge benefits for Google, so I
understand where the momentum is coming from.

Google also controls the browser, so it is much easier for them to take the
lead now on web standards changes. We will have to use it whether we like it
or not. I don't like the heavy hand they are using with their browser share,
just like the Microsoft of older days (i.e. plugins killed off, SPDY, HTTP/2,
PPAPI, NaCl, etc.).

------
jcoffland
Google just loves exerting their power. It will take more than Chrome devs
declaring it a done deal to make this happen. The browser is only half the
issue. Web servers must get on board for this to matter. Obviously Safari,
Firefox and IE have some say in this too.

~~~
enneff
Pretty much everyone in the industry is on board with HTTP/2. It's not just
Google.

------
itsbits
Hardly a surprise.

------
ommunist
@klapinat0r - welcome to the club. I was just about to say the same.

------
striking
I'm not ever supporting HTTP/2. For something "monumental" enough to be
called the whole second revision of HTTP, what have we really gained? A
Google-backed "server push" mechanism and some minor efficiency additions? Add
to that the fact that SPDY was pushed through as HTTP/2 because nothing else
was ready.

Please.

Downvoters: although I don't usually do this, I'd ask you to enter into a
discussion with me instead of just hitting the down arrow. Do you honestly
think what I said deserves to be silenced?

~~~
mrb
If that doesn't convince you to support HTTP/2, then nothing will:
[https://www.httpvshttps.com/](https://www.httpvshttps.com/) HTTP/1.1 is
5x-15x slower in this benchmark! These insane perf gains are possible only
thanks to HTTP/2, specifically thanks to its support for multiplexing. Please
read the spec and understand the technical implications before criticizing.
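
If you want to see the multiplexing part outside the browser, here is a rough sketch using the third-party httpx client (assuming it is installed with its optional HTTP/2 extra, e.g. pip install "httpx[http2]"); whether the benchmark site above negotiates h2 for this client is an assumption:

    # Fire 50 small GETs concurrently; over HTTP/2 they become separate
    # streams multiplexed onto a single TCP+TLS connection instead of queuing
    # behind a browser-style ~6-connections-per-host limit.
    import asyncio
    import time
    import httpx

    URL = "https://www.httpvshttps.com/"

    async def main():
        async with httpx.AsyncClient(http2=True) as client:
            start = time.time()
            responses = await asyncio.gather(*(client.get(URL) for _ in range(50)))
            print(responses[0].http_version, round(time.time() - start, 2))

    asyncio.run(main())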

On an unrelated note: I found this tidbit of humor in the RFC draft
([https://tools.ietf.org/html/draft-ietf-httpbis-http2-16](https://tools.ietf.org/html/draft-ietf-httpbis-http2-16)):

      ENHANCE_YOUR_CALM (0xb):  The endpoint detected that its peer is
      exhibiting a behavior that might be generating excessive load.

~~~
dlubarov
Not really a fair benchmark. It's making tons of requests with tiny payloads,
so that most browsers will hit a connection limit and requests will be queued
up.

Heavily optimized pages like google.com use data URIs or sprite sheets for
small images, and inline their small CSS/JavaScript.

On the bright side, reducing the need to minimize request count will make our
lives as developers a bit easier :-)

~~~
venaoy
Many of the sites I visit frequently are exactly like that: tons of requests
with tiny payloads.

The nytimes.com homepage makes 100+ requests to tiny images.

Same thing for the yahoo.com homepage.

An ebay.com listing page makes many requests to small thumbnails of items on
sale.

And so on... This makes it a perfectly fair benchmark IMHO.

~~~
dlubarov
I don't know how you're assessing those pages, but bear in mind that

- Counting images can be misleading, since well-optimized sites use
spritesheets or data URIs.

- If you're using something like Chrome's dev console to view requests, a lot
of them are non-essential requests which are intentionally made after the page
is functional.

- HTTP connection caps are per host. The benchmark is making hundreds of
requests to one host, whereas a real page might make a dozen requests to the
main server, a dozen to some CDN for static files, and a dozen to
miscellaneous third parties.

- The benchmark is simulating an uncached experience; with a realistic blend
of cached/uncached, HTTP 1 vs 2 performance would be much more comparable.

HTTP/2 is an improvement but if people expect a "5-15X" difference, they're in
for a big disappointment.

