
Making HTTP realtime with HTTP 2.0 - bpierre
https://docs.google.com/presentation/d/1eqae3OBCxwWswOsaWMAWRpqnmrVVrAfPQclfSqPkXrA/present
======
jimktrains2
I just feel like the basic idea of HTTP 2.0 misses the point of a stateless,
text-based protocol. Also, muxing and flow control? These are not things
application-level protocols should be caring about.

~~~
X-Cubed
It does miss the point of a stateless, text-based protocol, but only because
those properties are core to the problems currently experienced in HTTP/1.1
that are identified near the start of the presentation.

The nice thing about this approach is that it is entirely contained such that
the web application in the browser doesn't know (or even need to know) the
difference, in the same way that current web applications don't need to worry
about whether the request data was compressed or not.

~~~
comex
Certainly, there are plenty of things in HTTP 2.0 that make sense to be there,
such as server push, specialized header compression, etc.

However, some things, such as avoiding multiple connections (a workaround for
congestion control and for slow start being slow), flow control, and arguably
encryption, seem like they would be better addressed in TCP. And in fact,
Google is trying to do that with QUIC.

The problem is that if (when) those things get standardized in HTTP/2.0, they
need to be supported forever, even if an improved transport layer protocol
makes them obsolete in relatively short order.

~~~
FooBarWidget
You can't address things in TCP. There is so much network infrastructure
deployed that trying to push any non-backward-compatible change to TCP is
futile. Google did the right thing by addressing the problems in layer 5.

~~~
comex
Then just tunnel on top of UDP, like QUIC currently does. Doesn't mean it has
to be specific to one application.

------
songgao
Could anybody explain to me why multiplexing is necessary? It definitely won't
speed things up, and I don't see why different resources couldn't be loaded
sequentially (instead of concurrently) over the same TCP/TLS connection.
Adding multiplexing increases complexity and brings overhead by introducing
frame headers.
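For concreteness, that per-frame overhead is small: each HTTP/2 frame (as the
protocol was eventually standardized in RFC 7540) carries a fixed 9-octet
header. A minimal Python sketch of parsing one, just to show what the framing
adds on top of the payload:

```python
def parse_frame_header(buf: bytes):
    """Parse the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1).

    Layout: 24-bit payload length, 8-bit type, 8-bit flags,
    1 reserved bit + 31-bit stream identifier.
    """
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(buf[0:3], "big")
    frame_type, flags = buf[3], buf[4]
    # Mask off the reserved high bit of the stream identifier.
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A DATA frame (type 0x0, no flags) carrying 5 payload bytes on stream 1:
header = (5).to_bytes(3, "big") + bytes([0x0, 0x0]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (5, 0, 0, 1)
```

So the cost of multiplexing is 9 bytes per frame plus the bookkeeping to
reassemble frames into per-stream payloads.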

Flow control seems weird to me too. Maybe it's just that I fail to see a
scenario where flow control at the HTTP layer is really useful.

Binary headers and streaming (a long-lived TCP connection) are definitely
interesting. In fact, I would be really happy if I could open a connection,
keep sending binary-format HTTP requests, and have the server respond with the
resources I requested, in the same order.

EDIT: typos

~~~
simon_vetter
Flow control can definitely be useful when streaming audio/video files. A
browser only needs a few megabytes of buffer to display 1080p content
properly. Everyday example: a user starts watching a YouTube video, gets
interrupted and pauses it, then closes the browser because they got carried
away. Without flow control, the entire video is downloaded and put in the
cache when only a few megabytes were actually needed/watched. That's quite a
waste of bandwidth at 1080p.

Multiplexing allows you to have the same level of concurrency as you currently
get with domain sharding, without requiring multiple TCP connections (and TLS
contexts). Starting a TCP connection and getting it up to speed takes a while,
especially on congested links (packet loss, spurious retransmits, slow starts,
etc.), and TLS just makes the problem worse by requiring multiple round trips
to set up encryption. It also takes quite a bit of memory on servers to
maintain hundreds of thousands of TCP socket and TLS states, and a lot of CPU
to set up TLS contexts (Diffie-Hellman can be quite expensive CPU-wise). Then
there are flow-based routers, load balancers, stateful firewalls and other
stateful network equipment. We'll get greater performance out of them by
using fewer concurrent connections.

I think moving to a binary protocol and reducing the number of TCP/TLS
connections is a very good thing, long overdue IMHO.

EDIT: typos :)

~~~
zobzu
fuck binary.

~~~
zobzu
I planned for this to get downvoted. Damn! Turns out I'm not the only one
thinking so then ;-)

~~~
zobzu
Thanks!

------
bochi
Ilya Grigorik, the presentation author, has also written a book called High
Performance Browser Networking that I highly recommend if you are interested
in the subject.

There is a free online version available at
[http://chimera.labs.oreilly.com/books/1230000000545/index.ht...](http://chimera.labs.oreilly.com/books/1230000000545/index.html)

------
chetanahuja
I'm a bit confused that multiplexing over one TCP connection is somehow seen
as a strength of this new protocol. OK, I see how muxing streams over a TCP
session theoretically allows the TCP session to "fill" the available bandwidth
for a longer period of time. But it also means that every stream in the
session will suffer from a few packet drops (and the resultant "sputtering"
slow starts over lossy physical media, i.e. the mobile use-case).

As for why flow control is being pushed into the app layer, the answer again
comes down to multiplexing of multiple streams over one TCP connection (since
without stream-level flow control, one slow end-point for a stream can
potentially block the progress of every other stream in that session... see
discussion here: [https://groups.google.com/forum/#!topic/spdy-dev/g4PiZBTW-34](https://groups.google.com/forum/#!topic/spdy-dev/g4PiZBTW-34))
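The TCP-level head-of-line blocking worry can be shown with a toy model: TCP
delivers bytes strictly in order, so one lost segment holds back the frames of
every multiplexed stream behind it, even streams whose own bytes already
arrived. Stream names and segments below are made up for illustration:

```python
# Frames queued on one TCP connection, in send order.
frames = [("stream-1", "seg A"), ("stream-2", "seg B"), ("stream-1", "seg C")]
# Segment 0 was lost and awaits retransmission; 1 and 2 arrived fine.
arrived = [False, True, True]

deliverable = []
for frame, ok in zip(frames, arrived):
    if not ok:
        break  # TCP cannot deliver anything past the gap
    deliverable.append(frame)

print(deliverable)  # [] -- stream-2's data is stuck behind stream-1's loss
```

With separate connections per stream (HTTP/1.1 sharding), stream-2's segment
would have been delivered; over one multiplexed connection it waits for the
retransmit.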

I guess in the end, only real measurements with a mature implementation will
answer the question. As it turned out with SPDY (1), the results of all this
work might still not be enough to overcome the basic problems with TCP.

(1) [http://www.guypo.com/technical/not-as-spdy-as-you-thought/](http://www.guypo.com/technical/not-as-spdy-as-you-thought/)

------
dhruvbird
Multiplexing is something that many application protocols could make use of,
and it should be added between the application protocol and TCP, not
engineered into every application protocol. Plus, HTTP is supposed to be a
simple text-based protocol that I can drive by typing into a telnet window.
That doesn't seem possible with HTTP/2.0, though :-/
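The "telnet test" really does hold for HTTP/1.x: a complete request is plain
ASCII lines that you could paste into `telnet example.com 80` by hand (host
and path here are just examples). A quick sketch of building one:

```python
# An HTTP/1.1 request is nothing but text lines separated by CRLF.
request_lines = [
    "GET / HTTP/1.1",
    "Host: example.com",
    "Connection: close",
    "",  # blank line terminates the header block
]
request = "\r\n".join(request_lines) + "\r\n"
print(request.encode("ascii"))
```

An HTTP/2 request, by contrast, starts with a binary frame header and
HPACK-compressed header fields, which no one is typing by hand.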

------
mgwhitfield
[http://mgwis.tumblr.com/post/64899888414/technological-mortality](http://mgwis.tumblr.com/post/64899888414/technological-mortality)

------
cldr
Will this let us do SRP over HTTP?

~~~
wmf
HTTP has now been essentially split into a lower transport layer and an upper
semantics layer. If someone created an SRP extension for HTTP (see
[https://bugzilla.mozilla.org/show_bug.cgi?id=356855](https://bugzilla.mozilla.org/show_bug.cgi?id=356855)
) it would apply equally to HTTP 1.1 and 2.0. Or you could use SRP with TLS (
[http://tools.ietf.org/html/rfc5054](http://tools.ietf.org/html/rfc5054) ),
but this has the same UX problems as client certs.

------
andyl
Looks really complicated. I don't like it.

------
trentmb
Why is non-hypertext being delivered via a protocol for hypertext?

~~~
lttlrck
What is content-type for then?

~~~
trentmb
To specify the format/encoding of the hypertext?

HTTP was around before I was, hence the question.

If we just want to transfer generic files, maybe we should create a file
transfer protocol.

~~~
AnIrishDuck
We could call it FTP!

In all seriousness, HTTP is a good example of scope creep. The type and volume
of content sent in a typical session is far different than what was common a
decade ago.

~~~
parasubvert
Nonsense. HTTP is a good example of an enduring protocol design, under an
enduring architecture (The Web combination of URI+MIME+HTTP). As the payloads
have changed, it has evolved very little. HTTP/2.0 is an optimization to this
architecture. It's unlikely it will ever fully replace HTTP/1.0.

In 2003, the type of content was exactly the same: HTML, images, CSS,
JavaScript, audio and video. The formats weren't all that different. Nowadays
the main difference is that you see a lot more JSON and/or XML (mostly RSS).

In 1995-1996, the formats and codecs were a bit more primitive, and the net
was smaller, but it was the same sort of content: AVIs, MOVs, GIFs, JPGs,
HTML, and early 1.0 JavaScript.

HTTP+URI is also generally a much better protocol for transferring state than
FTP. It's both simpler AND more general.

~~~
AnIrishDuck
I was joking about FTP. I was trying to say that HTTP has stood the test of
time much better, despite the fact that what we have asked of it has changed
quite a bit.

> In 1995-1996, the formats and codecs were a bit more primitive, and the net
> was smaller, but it was the same sort of content: AVIs, MOVs, GIFs, JPGs,
> HTML, and early 1.0 JavaScript.

There is a difference in magnitude between the present day web session and the
typical 1995 one. The number of resources needed to properly render many pages
is much, much larger. HTTP 1.0 and 1.1 simply weren't designed with this
requirement in mind. They still do a pretty good job handling it, but it's not
hard to argue that a protocol designed around improving parallelism will have
better performance characteristics.

~~~
parasubvert
> I was joking about FTP.

Sorry then :)

> There is a difference in magnitude between the present day web session and
> the typical 1995 one.

I agree with that, just misinterpreted your post as a pile on against HTTP.
Certainly HTTP/2.0 and SPDY are necessary to keep up with the complexity of
today's pages.

------
nwh
I feel the presentation would have had a lot more weight and clarity if not
for the memes in the bottom right corner. As it stands, they make it look like
just another fantasy creation, except that the writer has an email address
@google.com.

~~~
rubiquity
Ilya Grigorik knows his stuff incredibly well when it comes to networking. I
encourage you to read his blog sometime. You'll find it very informative.
Personally it is one of my favorite blogs.

[http://igvita.com](http://igvita.com)

~~~
zobzu
Then again, that's why I like to read stuff without having a preconceived idea
of the author. Best way to have a fresh view on things.

People write some good stuff and some bad stuff. Not because they're not
smart, in general. More because of pressure due to various reasons, or because
they lost interest, or whatnot.

Personally, every time I see a "not so much of a win" followed by memes to
make it look like "we rock!", it makes me feel pity for our whole industry.

