
HTTP/2 Is Done - stephenjudkins
https://www.mnot.net/blog/2015/02/18/http2
======
bgentry
It's time to begin the long process of unwinding all the hacks that we've
built to make HTTP/1.1 fast. No more concatenation of static assets, no more
domain sharding.

The future looks more like this, as the default, with no special effort
required:
[https://http2.golang.org/gophertiles](https://http2.golang.org/gophertiles)

May nobody else have to suffer through writing an interoperable HTTP/1.1
parser!

~~~
byuu
> May nobody else have to suffer through writing an interoperable HTTP/1.1
> parser!

Yes, now it'll be much easier than parsing plain-text. Now they just have to
write a TLS stack (several key exchange algorithms; block ciphers; stream
ciphers; and data integrity algorithms); then implement the new HPACK
compression; then finally a new parser for the HTTP/2 headers themselves.
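(For the curious: the new binary framing, at least, is mechanical to parse. Every HTTP/2 frame begins with a fixed 9-octet header per RFC 7540. A minimal sketch in Python, assuming a complete header has already been read from the wire:)

```python
import struct

def parse_frame_header(data):
    """Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1).

    Layout: 24-bit payload length, 8-bit type, 8-bit flags,
    then 1 reserved bit plus a 31-bit stream identifier.
    """
    if len(data) < 9:
        raise ValueError("need at least 9 octets")
    length = int.from_bytes(data[0:3], "big")        # 24-bit payload length
    frame_type, flags = data[3], data[4]             # one octet each
    stream_id = struct.unpack(">I", data[5:9])[0] & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# Example: a SETTINGS frame (type 0x4) with an empty payload on stream 0,
# which each side must send at the start of a new connection.
header = b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"
print(parse_frame_header(header))  # (0, 4, 0, 0)
```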

Now instead of taking maybe one day to write an HTTP/1.1 server, it'll only
take a single engineer several years to write an HTTP/2 server (and one
mistake will undermine all of its attempts at security).

If you are going to say, "well use someone else's TLS/HPACK/etc library!",
then I'll say the same, "use someone else's HTTP/1.1 header parsing library!"

HTTP/2 may turn out to be great for a lot of things. But making things
easier/simpler to program is certainly not one of them. This is a massive step
back in terms of simplicity.

~~~
stephen_g
Have you tried writing anything more than a very simple HTTP/1.1
parser/server? It's actually not as easy as it seems at first - edge cases
everywhere, different user agents doing subtly different things, etc. etc.

Your argument is invalid in my opinion. HTTP/1.1 is not simple to implement to
any decent level of completeness and correctness, and HTTP/2 does fix a fair
few things.

Anyway, there are already plenty of good tools for debugging HTTP/2 streams
(Wireshark filters, etc.), and there will only be more as time goes by.

~~~
carsonreinke
So won't HTTP/2 have these edge cases also?

~~~
TylerE
Essentially, no.

HTTP/1.1 contains tons of optional features. Practically no two
implementations support the same set.

HTTP/2 is all 100% mandatory. Any compliant HTTP/2 implementation will support
an EXACT set of known features.

~~~
bhouston
> HTTP/2 is all 100% mandatory. Any compliant HTTP/2 implementation will
> support an EXACT set of known features.

But over the next couple of years, won't people come up with new ideas and add
them as optional extensions? How is that handled?

I suspect some of these optional extensions will be really useful in special
cases such as support for LZMA/LZHAM compression in addition to just gzip.

~~~
TylerE
There is support for extensions, but they're, well, extensions. The only thing
the protocol specifies is that a compliant implementation must pass through
unchanged any block it doesn't understand.

Compare with HTTP/1.1 where for instance the entire content negotiation
mechanism is optional and clients need to be able to deal with it not being
available.

~~~
josteink
> There is support for extensions, but they're, well, extensions.

So down the line, it will be pretty much _exactly_ like HTTP 1.0 and 1.1 then.

Good to hear someone thought this thoroughly through before creating a mega-
complex protocol unimplementable by most industry-grade engineers, which will
also need to be debugged and maintained for all internet-eternity.

------
bantic
I read Daniel Stenberg's (he is a maintainer of curl, I think?) "http2
explained" pdf the other day, and it's by far the best comprehensive
explanation of http2 that I have seen. Well worth a read if you're curious
what's coming with http2.
[http://daniel.haxx.se/http2/](http://daniel.haxx.se/http2/)

~~~
pgl
Hacker News thread:
[https://news.ycombinator.com/item?id=9038613](https://news.ycombinator.com/item?id=9038613)

------
thomasfoster96
I know I'm apparently not meant to be, but I'm genuinely keen to start using
HTTP/2. If you've been following some of the things being done in HTML
recently (rel=subresource, rel=dns-prefetch), I think it's starting to become
a little obvious that for most people HTTP is the bottleneck. HTTP/2 seems to
be a good, solid step forwards. If it's not perfect, well, it doesn't have to
be; 2 isn't the last number.
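(For reference, the hints mentioned are plain `<link>` elements in the page head; the URLs below are placeholders, and rel=subresource is an experimental hint rather than a finished standard:)

```html
<!-- Resolve the CDN's hostname before any asset on it is requested -->
<link rel="dns-prefetch" href="//cdn.example.com">
<!-- Hint that this asset will be needed by the current page -->
<link rel="subresource" href="/css/site.css">
```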

~~~
Practicality
Agreed. I can't wait to start using it. Speed is a very important feature.

------
logicallee
this seems nice, pretty conservative.

[https://tools.ietf.org/html/draft-ietf-httpbis-http2-17](https://tools.ietf.org/html/draft-ietf-httpbis-http2-17)

"Abstract

    This specification describes an optimized expression of the semantics
    of the Hypertext Transfer Protocol (HTTP).  HTTP/2 enables a more
    efficient use of network resources and a reduced perception of
    latency by introducing header field compression and allowing multiple
    concurrent exchanges on the same connection.  It also introduces
    unsolicited push of representations from servers to clients.

    This specification is an alternative to, but does not obsolete, the
    HTTP/1.1 message syntax.  HTTP's existing semantics remain unchanged."

all these changes seem good without a large change, just an improved user
experience. (the Introduction section is also good - "HTTP/2 addresses these
issues by defining an optimized mapping of HTTP's semantics to an underlying
connection", I'd quote more but why not click through the link at the top of
this comment. basically just some compression of headers, none of the funky
stuff to keep connections alive for server push, prioritizing important
requests, etc. all without changing semantics much - great.)

------
blueskin_
Did that anti-encryption backdoor get put in in the end or not? News reporting
on it went quiet a while back...

------
theallan
Does anyone have any information about HTTP/2 development in Apache? Searching
the bug list I don't immediately see anything and the only thing I can find
from a Google search is a mailing list entry with someone asking about http/2
development and being told that it isn't really being worked on [1]

[1] [http://mail-archives.apache.org/mod_mbox/httpd-dev/201408.mb...](http://mail-archives.apache.org/mod_mbox/httpd-dev/201408.mbox/%3CCALK=YjONaJzKsYouzrYJrhT=ZXW6oHbZCgtBH9nfi5wuP=qwDw@mail.gmail.com%3E)

------
tropicalmug
I wish instead of a protocol improvement that focused solely on network
resources, the next version will also include improvements for users such as
encryption by default and doing away with cookies.

~~~
numbsafari
You can begin today to do away with cookies on your own sites and services.
Start implementing richer clients and leveraging Open ID Connect and OAuth2.

Cookies solve real use case problems. Unless we all start building and
experiencing and improving the alternatives, progress won't be made.

That said, good luck on getting rid of cookies all together.

~~~
ecthiender
Excuse my ignorance, but how can I do session management without using
cookies?

I tried searching on the net, but it doesn't seem to give any concrete/valid
results.

Can you give me any pointers?

Edit: I do use OAuth2.0 on my services and use Mozilla Persona to manage user
logins, but I am not clear how can I keep sessions between requests if I don't
use cookies.

~~~
hueving
You can carry the session ID in the URL. This also has the benefit of
eliminating XSRF. The downside is that you have a horrendous URL, if that type
of thing bothers you, and you can't have a "remember me" check box in your
login.
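(As a sketch of what that looks like in practice, with hypothetical helpers not taken from any framework: every link the server emits carries the token, and the server reads it back from the query string instead of a Cookie header:)

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

def new_session_id():
    # A random, unguessable token stands in for the cookie value.
    return secrets.token_urlsafe(32)

def add_session(url, sid):
    # Every internal link must carry the token explicitly.
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}{urlencode({'sid': sid})}"

def session_from_url(url):
    # On each request, recover the session from the query string.
    qs = parse_qs(urlparse(url).query)
    return qs.get("sid", [None])[0]

sid = new_session_id()
link = add_session("/account/settings", sid)
assert session_from_url(link) == sid
```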

~~~
amatix
This approach has some massive downsides - the session ID is sent via Referer
to outbound links, URLs are logged all over the place (including browser
histories), it's easy for people to publicly share it without thinking which
then ends up in Google as well...

------
cm2187
Stupid question: would you rather have server push serving static content from
the application server or a CDN for the static assets? If a CDN, how can
server push be leveraged when the assets are not related to each others (and
the server can't tell in which order they will be requested).

~~~
youngtaff
Where sites serve the base page through a CDN, then the CDN has the potential
to start making intelligent decisions on what should be pushed.

At the simplest level this might be just the CSS and JS in the <head>, but as
different UAs behave differently there's scope for much more granular
optimisations.

~~~
cm2187
That's the easy, but relatively rare scenario. Today most content is dynamic.

Naively, it looks to me that server push will mostly be an improvement for
small websites that do not use a CDN, but I can't see how it can coexist with
a CDN.

Or it would require a new syntax, where the html tells the browser to start
connecting to the CDN with this particular URL, which contains a token, and
should be downloaded first which would tell the CDN that a particular list of
assets will be needed for that page, and then the CDN will use server push to
send these static assets.

Alternatively the CDN would become a proxy for the underlying html page, which
would still be generated by the application server. That would probably be
simpler.
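(One way to square this without new HTML syntax, sketched here as an assumption rather than anything the spec mandates: the origin annotates its dynamic response with `Link: rel=preload` hints, and a push-aware CDN edge translates them into HTTP/2 server pushes. A minimal illustration of such an annotation:)

```python
def with_push_hints(headers, assets):
    """Attach Link: rel=preload headers that a hypothetical push-aware
    edge could translate into HTTP/2 server pushes of those assets."""
    links = ", ".join(f"<{path}>; rel=preload" for path in assets)
    return {**headers, "Link": links}

resp = with_push_hints({"Content-Type": "text/html"},
                       ["/css/site.css", "/js/app.js"])
print(resp["Link"])
# </css/site.css>; rel=preload, </js/app.js>; rel=preload
```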

~~~
youngtaff
CDNs aren't limited to just static content; it's quite common for large
dynamic sites to deliver their base pages through CDNs.

They can use features like ESI to assemble the final page on the edge from
static and dynamic parts, or they can just act as a proxy to the origin with
the dynamic page generated there.

Even when the CDN is just acting as a proxy back to the origin there can be
performance advantages, e.g. lower-latency TCP and TLS negotiation between
edge and client, and a persistent connection between edge and origin (i.e. a
single TCP negotiation for all clients), with larger congestion windows
leading to higher throughput.

In short CDNs aren't just for static content!

------
orthecreedence
Question: I originally heard HTTP/2 would force TLS and have it baked into the
protocol. Is this still the case? If so, is this going to be strictly
enforced? I think it's a really terrible idea to meld a protocol and a
transport together. Or am I misunderstanding how it works?

~~~
garraeth
This says yes for Chrome and FF but no such requirement for cURL or IE:
[http://daniel.haxx.se/http2/](http://daniel.haxx.se/http2/)

How accurate it is, I'm not sure.

------
scorpwarp23
I'd really like to know what this means in the context of MeteorJS -
particularly how the HTTP Push feature will affect MeteorJS in the long run.
Does it make MeteorJS redundant?

------
jgrahamc
[placeholder for commentary about how HTTP/2 is a bad protocol because it's
binary and everything could have been fixed in a text protocol, followed by ad
nauseam repetition of all the same old arguments]

~~~
tokyo1000
The fact that there's so much disagreement and discontent surrounding this
should concern everyone involved. Trade-offs are being made that may benefit
some people and organizations, but these trade-offs are also causing
significant problems for others.

While there has always been some degree of disagreement regarding
technological matters, I think we're really seeing a lot more of it these
days, especially when it comes to projects that are open source, or standards
that are supposedly open. HTTP/2 is a good example. But we've also got GNOME
3, systemd, how systemd has been included in various Linux distros, many of
the recent changes to Firefox, and so forth.

Not only is this disagreement more prevalent, it's also much harsher than what
we've seen in the past. Instead of seeing compromise, we're seeing
marginalization. We're repeatedly seeing a small number of people force their
preferences upon increasingly larger masses of unwilling victims. We're seeing
consensus being claimed, but this is only an illusion that barely masks the
resentment that is building.

What we're seeing goes beyond mere competition between factions with differing
situations. We're seeing any sort of competition, or even just dissent, being
highly discouraged, suppressed, or even prevented wherever possible. Those
whose needs aren't being met end up backed into a corner and shunned, rather
than any effort being put into cooperating with them, with helping them, or
even just with considering their views.

This isn't a healthy situation for the community to be in, especially when it
comes to projects that allegedly pride themselves on openness. We've already
seen this kind of polarization severely harm the GNOME 3 project. We're seeing
things get pretty bad within the Debian project. And the HTTP/2 situation
hasn't been very encouraging, either.

~~~
matt_kantor
> While there has always been some degree of disagreement regarding
> technological matters, I think we're really seeing a lot more of it these
> days

I don't have any way to dispute this, but I don't think it's easy to provide
evidence for it either. I feel that there may simply be more individuals
involved in these kinds of discussions these days.

Obviously at some point you have to stop discussing something and start
building it. That's not to say that discussion isn't important or shouldn't be
encouraged (quite the contrary), but I find it very difficult to make
generalizations about where the line should be drawn.

~~~
angersock
_Obviously at some point you have to stop discussing something and start
building it._

No, see, that's where we are having problems: there is a perfectly valid
answer of "It's good enough, or simple enough, that we will leave it as-is
barring a really big leap".

Assuming that you've got to build something to replace the status quo is
_still itself_ an assumption. Saying, in effect, "Hey, we've got to build
_something_ " naturally disallows a reasonable engineering conservatism.

Software gets better not as you add things, but as you remove them--people
keep forgetting this.

------
RunningWild
Another year, another wheel reinvented.

~~~
Intermernet
I know you're being somewhat facetious, but have you considered how much the
wheel _has actually been reinvented_?

The first wheels were probably logs under rocks. Then axles got developed,
then spokes, then tyres etc.

Everything from the gyroscope to the LHC can attribute its beginnings to the
humble wheel.

Reinvention is, if not always good, always admirable.

~~~
RunningWild
Up to a point, I agree. Beyond that point it becomes churn and reinvention for
the sake of itself.

------
lazyloop
HTTP/2 is a bad protocol, that much is clear by now. Luckily most of us won't
have to deal with it, because it will be deployed merely as an optimization,
with a new generation of reverse-proxy servers, like H2O.
[https://github.com/h2o/h2o](https://github.com/h2o/h2o)

~~~
andrewstuart2
Care to at least explain why you think it's a bad protocol?

~~~
megaman821
It is not a bad protocol for what is in there, it is bad for what is not.

It seems like it was built for the big players to eke out 5% more performance.
How about the average website? What is in there to help standardize
authentication? What is in there to help protect privacy?

In the end it looks more like HTTP/1.2, with header compression being the only
new feature. The rest of what makes up HTTP/2 is basically implementing a new
transport-layer protocol at the application level.
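(For what it's worth, the header compression is small but concrete: in HPACK's simplest case an entire header is one octet indexing a static table, per RFC 7541. A minimal sketch covering only that indexed-field case:)

```python
# First few entries of HPACK's static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    1: (":authority", ""),
    2: (":method", "GET"),
    3: (":method", "POST"),
    4: (":path", "/"),
    5: (":path", "/index.html"),
    6: (":scheme", "http"),
    7: (":scheme", "https"),
    8: (":status", "200"),
}

def decode_indexed(octet):
    """Decode a one-octet indexed header field: high bit set, remaining
    7 bits are the table index (indexes needing more than 7 bits use a
    multi-octet integer encoding, omitted here)."""
    if not octet & 0x80:
        raise ValueError("not an indexed header field")
    return STATIC_TABLE[octet & 0x7F]

print(decode_indexed(0x82))  # (':method', 'GET')
print(decode_indexed(0x87))  # (':scheme', 'https')
```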

~~~
MichaelGG
In tests on a rather average SPA site I worked on, adding the letters "spdy"
to the nginx config produced double-digit percentage performance gains.
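(For reference, that change is a single keyword on the listen directive, assuming nginx was built with SPDY support; the hostname and certificate paths below are placeholders:)

```nginx
server {
    listen 443 ssl spdy;   # "spdy" is the only addition
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```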

Keeping it backwards compatible with HTTP 1.1 as far as semantics means it
will actually get real adoption, very easily, as you can seamlessly enable it
via middleware without changing app code anywhere.

I don't know what you mean to "standardize auth", but seeing what a
clusterfuck OAuth2 turned into, it'd probably guarantee HTTP2 wouldn't ship
for a long time, then ship a mess.

To protect privacy, major browsers plan to only support HTTP2 over TLS. That
should be a major incentive for more websites to force TLS. Pretty clever,
sorta, even if we might have technical objections to requiring TLS for no
"real" reason.

~~~
megaman821
Double-digit performance for a bunch of little files, probably. It is less
than 5% when compared against concatenated CSS and JavaScript, image sprites,
and domain sharding. It's great that HTTP/2 saves a build step, but it is
hardly going to make the web a lot faster.

