
Yesterday's best-practices are today's HTTP/2 anti-patterns - vladaionescu
https://docs.google.com/presentation/d/1r7QXGYOLCh4fcUq0jDdDwKJWNqWK1o4xMtYpKZCJYjM/present
======
olalonde
Can I get some of the HTTP/2 performance benefits by putting an HTTP/2 reverse
proxy in front of a regular HTTP server? Does such a thing exist?

~~~
chadaustin
Yes, you can just configure the SPDY module for nginx and place it in front of
your webserver. It's easy, and we measured significant load-time wins.

~~~
quintin
This might be too much to ask, but can you share the Nginx config file(s)?

~~~
imrehg
Adding SPDY is really nothing much in nginx (and that's one of the things that
makes it awesome).

In practice, if you already have an SSL server, just add "spdy" to the
listen line, like "listen 443 ssl spdy;", restart, and you are done[1].

Then making it proxy requests is just one location definition with
"proxy_pass"[2].

[1]:
[http://nginx.org/en/docs/http/ngx_http_spdy_module.html](http://nginx.org/en/docs/http/ngx_http_spdy_module.html)

[2]: [http://nginx.com/resources/admin-guide/reverse-proxy/](http://nginx.com/resources/admin-guide/reverse-proxy/)
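
Putting [1] and [2] together, a minimal server block looks something like
this (the hostname, certificate paths, and backend port are placeholders, not
from the docs):

    server {
        listen 443 ssl spdy;          # SPDY alongside regular HTTPS
        server_name example.com;      # placeholder hostname

        ssl_certificate     /etc/nginx/ssl/example.crt;  # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            # hand everything to an existing HTTP/1.1 backend
            proxy_pass http://127.0.0.1:8080;
        }
    }

Clients that don't speak SPDY still negotiate plain HTTPS on the same listen
socket, so nothing breaks for older browsers.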

------
frik
Do these HTTP/2 server implementations downgrade to HTTP/0.x/1.x if the client
supports only an older version? Will there be v2-only servers in the near future?

If the answers are in the slides, forgive me - on the iPad, the Google slide
software breaks the back button and is too annoying to read beyond a few
slides. Debugging and implementing older protocols seems easier, as they
were text-based.

~~~
Lukasa
> Do these HTTP/2 server implementations downgrade to HTTP/0.x/1.x if the
> client supports only an older version?

Many do, yes.

> Will there be v2-only servers in near future?

Yes.

> Debugging and implementing older protocols seems easier, as they were
> text-based.

Implementing text protocols _seems_ to be easier, but writing an
implementation that can handle the wide variety of both compliant and slightly
non-compliant traffic is an exercise in frustration.

This is not helped by the fact that people see the text protocol and think
that it's easy to implement, so they go and write their own HTTP/1.1 server
and leave it on the internet. Their server is probably not quite
spec-compliant, so everyone else is left trying to interoperate with it.

Binary protocols are hard to debug by eye, but they aren't hard to write
parsers for.

~~~
frik
> Binary protocols are hard to debug by eye, but they aren't hard to write
> parsers for.

Implementing a text-based protocol (SMTP, POP3, HTTP/0.x/1.x) for a client
application is certainly easier, and less documentation is required. Knowing
the clusterf__k of the binary Office document formats, the newer text-based
ones are far easier to parse (be it XML or plain text doesn't matter). Be it
binary or text-based, one has to write a parser anyway. Only with a text-based
protocol can one also use regexes or string matching during development, which
is quite useful for non-production development/testing.
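
For example, a status line from a text protocol can be checked with a
one-liner during testing (a Python sketch; the response string is a made-up
example):

    import re

    # quick-and-dirty status-line check -- the kind of regex/string
    # matching a text protocol permits during development
    status_line = 'HTTP/1.1 200 OK'   # hypothetical captured response
    m = re.match(r'^HTTP/1\.([01]) (\d{3}) (.*)$', status_line)
    if m:
        minor, code, reason = m.groups()
        print(code, reason)           # -> 200 OK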

I read about "prioritisation" of data as a hint for the server, and less
caching of data on the client. With the recurring "net neutrality" debates,
let's hope this protocol cannot be misused to prioritise certain packets
for parties who pay extra. I am not into these debates, but it would certainly
be a disadvantage for startups against established parties. Given the many
problems with SSL (Heartbleed, broken/outdated certs, hijacked cert vendors), an
HTTP/2 without SSL would be a nice fallback scenario - wildcard certs are
still a bit expensive for new startups, especially if one has to replace
(= costs) the certs every few months due to security concerns.

~~~
viraptor
I'm going to strongly disagree with this. The problem with Office docs was due
to lack of documentation, not because they were binary.

When you're parsing text, all kinds of crazy stuff can happen. You need to
resize buffers as you read data, you need to know the escaping rules of each
field, you need to know about line continuations, you have to know the text
encoding, and many, many other things.

Binary data in the abstract form has three elements: a tag/type (which may be
inferred from position), a data length (which may be inferred from the tag),
and the optional data itself. There are various ways to compose that
information, but that's about it. Binary protocols may be harder for people to
read (though you can just use Wireshark dissectors), but writing a correct,
bug-free encoder/decoder for one is massively simpler than for a text one.
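
To make that concrete, a decoder for that abstract form fits in a few lines
(a sketch assuming a hypothetical framing: 1-byte tag, 2-byte big-endian
length):

    import struct

    # decode a buffer of tag/length/value records
    def decode_tlv(buf):
        records, offset = [], 0
        while offset < len(buf):
            tag, length = struct.unpack_from('!BH', buf, offset)
            offset += 3
            value = buf[offset:offset + length]
            if len(value) != length:
                raise ValueError('truncated record')
            records.append((tag, value))
            offset += length
        return records

    # decode_tlv(b'\x01\x00\x05hello') -> [(1, b'hello')]

No escaping rules, no encodings, no buffer-resizing surprises: the length
field tells you exactly how much to read.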

By design, every text protocol will require more documentation than a binary
one, because you need to include information about data escaping and encoding.

If you want to see this in practice, implement a client for something which
does support both options. I recommend memcache.
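
The text side of memcache is easy to poke at by hand (this sketch assumes a
local memcached on its default port), but a correct client still has to handle
the \r\n framing, the byte counts, and the error strings:

    import socket

    # speak the memcached *text* protocol directly
    s = socket.create_connection(('127.0.0.1', 11211))
    s.sendall(b'get mykey\r\n')
    # reply: b'VALUE mykey <flags> <bytes>\r\n<data>\r\nEND\r\n'
    # (or just b'END\r\n' if the key is missing)
    print(s.recv(4096))
    s.close()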

~~~
barrkel
Parsing Office docs is parsing XML. We have lots of tools for parsing XML, and
escaping, line continuation, text encoding etc. are all well-defined and don't
need to be reimplemented specifically to support Office.

Whether parsing binary is easier or harder than parsing text depends almost
completely on the grammar of the language being parsed; and let's not forget,
text is, of course, a type of binary format.

If I have to do ad-hoc parsing or generation, I prefer a text format, because
I have lots of tools that understand text. If I need to do production-quality
work, I prefer a binary format, because I need to be complete. But if I'm
integrating multiple heterogeneous systems, I want a format that is trivial to
inspect and test; that may mean a well-specified text format, like JSON or
XML.

I'm fairly sanguine about HTTP/2 because it's at a lower level. If I were in
the business of writing HTTP clients or servers on a regular basis (rather
than using existing libraries), I'd be more concerned. I only do a telnet
HTTP/1.0 session every 4 months or so.

~~~
viraptor
Regarding missing docs, I meant the original .doc. That was all undocumented,
proprietary binary.

But again, I have to disagree about parsing text ever being easier than
binary. For the same protocol, carrying the same data and implemented in a
sane way, the text protocol is the same as the binary one + variable-length
metadata + data escaping + value conversion + text encoding of metadata. I'm
happy to challenge anyone on the following: it's not possible to create a
simpler text protocol than a well-designed binary protocol. (looking only at
the encoding/decoding side, not debugging)

Where by simpler I mean: less likely to get exploited, less ambiguous, and
shorter to document (when you concatenate the docs of all the encodings you
depend on, like JSON or XML).

~~~
barrkel
_it's not possible to create a simpler text protocol than a well-designed
binary protocol_

Huh? That's irrelevant, surely? It doesn't speak to your assertion. The best
binary formats don't necessarily need "parsing" at all; it could be a simple
matter of mapping into memory and adjusting offsets, like an OS loader. I
don't think there's any debate that binary formats can be designed so that
they are far easier to load than text. We're not talking about the design of
protocols here (in this subthread). We're talking about writing parsers.

Parsing an obscure binary format is harder than parsing a simple text format.

~~~
viraptor
I find the response confusing. First, I was responding to "Be it binary or
text-based, one has to write a parser anyway. Only with a text-based protocol
can one also use regexes or string matching during development, which is quite
useful for non-production development/testing." - which we both seem to agree
is false. Well-designed binary doesn't even need parsers sometimes. If you add
stream multiplexing, regexes won't help you anyway.

I'm not sure where obscure binary formats come in. HTTP/2 had a choice between
slightly complicated binary and more complicated text. Office had lots of
programmers, even more money, and simply didn't care. It's a completely
different situation from HTTP/2.

So finally: why the obscure binary format? HTTP/2's choice is really between
good text and good binary.

------
BorisMelnik
"This would have a larger effect on the speed of the internet than increasing
a users bandwidth from 3.9Mbps to 10Mbps or even 1Gbps"

I'm sold. What would be the easiest way to make this happen for someone like
me with a pretty vanilla LAMP server?

~~~
MichaelGG
Install nginx as a reverse proxy and add spdy to the listen directive. Enjoy.

~~~
facepalm
So the spdy directive also does http/2? I seem to remember looking for HTTP/2
support in NGINX and finding that they only support it in the commercial
version?

~~~
strommen
SPDY is an experimental protocol that does much of what HTTP/2 does (you can
think of it as the beta version of HTTP/2). It will be phased out starting
next year in favor of the real protocol, but people use it with nginx now.

~~~
BorisMelnik
Thank you, I have been looking for this exact explanation. I've been wondering
why everyone keeps recommending SPDY when this whole thing is about HTTP/2.

------
bonobo
I never took the time to try WebSockets, so please forgive me if this question
doesn't make sense, but does HTTP/2 supersede WebSockets? I'm under the
impression that HTTP/2 covers all of WebSockets' use cases - is this a correct
observation?

~~~
jkarneges
If you use WebSockets mainly because you want to multiplex many
requests/responses over a single TCP channel, then HTTP/2 may be a preferable
substitute for WebSockets.

If you use WebSockets for "realtime push", then HTTP/2's server push feature
could potentially be used as an alternative (though I've not heard of anyone
actually doing this yet).

If you use WebSockets because you actually want a bidirectional message-
oriented transport protocol, well then you'll keep using WebSockets. :)

~~~
shorbaji
> If you use WebSockets for "realtime push", then HTTP/2's server push feature
> could potentially be used as an alternative.

One thing to keep in mind with HTTP/2 server push is that a server can only
send a push in response to a client request. So this isn't a drop-in
"real-time push" mechanism. Implementing the equivalent of real-time push
would likely require the client/server to keep a stream within the connection
in the "open" state, whereby the server can continue to send data frames on it.

~~~
bonobo

        One thing to keep in mind with HTTP/2 server push is that a server can only
        send a push in response to a client request.
    

This was the difference that I was not aware of, thanks. So HTTP/2 server push
is just opportunistic, while WebSockets are real-time push with a persistent
connection.

------
fake-name
Holy shit, fuck that site. It overrides the BACK AND FORWARD BUTTONS for
fucking page changes.

~~~
glitch
I can't actually tell anymore if that's sarcasm or not. The next slide is a
new URL, a new page. Based on initial testing, the behavior seems fine - the
same behavior one would have for something like
[http://example.com/my-presentation/1.htm](http://example.com/my-presentation/1.htm),
2.htm, 3.htm, etc., navigated with hyperlinks on each slide that link to the
neighboring slides.

It's not like the Back/Forward browser buttons are overridden to behave as
previous/next for the slide presentation.

Remember those horrible embedded Flash presentations where you couldn't link
directly to a particular slide within the blob? Yeah, that "breaks the
Web". Back/Forward is supposed to go back to the previous page the user was on
(which is a "slide" in this case).

Using Chrome 43.0.2357.81 (64-bit) / OS X.

~~~
maxlaumeister
I would agree except that they also use scrolling to change slides. It feels a
little weird to scroll down 3 ticks of the mousewheel, then have 3 clicks of
the back button do the reverse action.

~~~
glitch
Scrolling is a separate matter. I didn't even bother to scroll the first time.
I just pressed the next/previous buttons on the slide navigation bar.

In the scrolling case, I still don't see how it's "overriding the browser
buttons" rather than having JavaScript that advances to the next page on
scroll.

In the scrolling scenario, my actual back and forward browser buttons behaved
as expected — just for the pages (slides) I visited. No more, no less.

------
vladaionescu
The chapter in the book goes over more details:
[http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...](http://chimera.labs.oreilly.com/books/1230000000545/ch12.html)

------
Silhouette
Interesting read, but as far as I can tell this article mostly makes a strong
case that using HTTP/2 at all is an "anti-pattern" for most projects[1] today,
for at least three reasons.

Firstly, the presentation seems to start by arguing that round-trip latency
has much more of an impact on perceived performance than bandwidth, but then
argues for several techniques whose principal advantage is saving small
amounts of bandwidth. So how much improvement will these new techniques really
offer over current best practices for "an average web site"?

Secondly, the presentation seems to argue that simplifying front-end
development processes by avoiding things like resource concatenation is a big
advantage of HTTP/2, yet despite repeatedly emphasizing the need for the
server to provide just the right responses to make HTTP/2 work well, it almost
completely ignores the inevitable challenges of actually configuring and
maintaining a server to take advantage of all of these new techniques in a
real, production environment, with rapidly evolving site structure and
content, numerous contributors, etc.

Essentially, this seems to be advocacy for dumping tried, tested, universal
"workarounds" for the limitations of HTTP/1.1 in favour of new techniques that
work well with HTTP/2 and only HTTP/2, but as an industry we have relatively
little experience in what actually works well or doesn't with HTTP/2 and we
have relatively few tools and relatively little infrastructure available that
support it right now. And crucially, making the shift is not by any means a
neutral activity; it is actively and severely harmful to several of the most
important tried-and-tested techniques we've used up to now.

Finally, there is the simple matter of trust or, if you want to be kinder,
future-proofing. The presentation notes that Google are deprecating SPDY from
early 2016. That is the supposed HTTP replacement that was the New Shiny...
yesterday, I think, or maybe it was the day before. When arguing for
fundamental and irreversible changes in the basic development process and
infrastructure set-up, you lose all credibility when your so-called standards
fall out of favour faster than a GUI or DB library from Microsoft, and when
your own browser frequently breaks due to questionable caching and related
behaviour.

It's certainly true that HTTP/1.1 isn't perfect and there are practical ways
it could be improved, but I don't think this presentation makes a strong case
for adopting HTTP/2 as the way forward.

[1] YMMV if you actually do work for Google/Facebook/Amazon, and you really do
have practically unlimited resources available to maintain both your sites and
your servers, and you really are making/losing significant amounts of money
with every byte/millisecond difference.

~~~
ianlevesque
I almost didn't reply because it sounds like you have a bit of an axe to grind
against HTTP/2, but the concerns you state are overblown. All I really
gathered from the presentation was that spriting and concatenation negate the
caching advantages HTTP/2 could provide and are unnecessary. It doesn't make
HTTP/2 worse than HTTP/1.1. As for protocol turnover, arguably adopting SPDY
was premature, but everyone knew it was when they did it. It'd be the same for
someone who chooses to adopt QUIC now. One point of standardizing HTTP/2 was
to let the more conservative among us start to use it now.

~~~
nickpsecurity
Definitely. I actually gave them props for their SPDY work, including the
name: matching a common word makes it easier for lay people to remember. Many
organizations and academics have made alternatives to common protocols without
the user base or influence to make them real. That Google
was in a position to make things better, built a practical improvement,
deployed it, and inspired the update to HTTP is a great credit to them. I wish
more companies in similar positions would follow suit.

And, hopefully, Google's experimentations in other protocols and domains will
challenge those controlling the status quo to adapt to the times as well. Not
holding my breath but it would be nice.

------
philjr
This is a really interesting read (linked from that presentation)

[http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...](http://chimera.labs.oreilly.com/books/1230000000545/ch12.html)

* one TCP connection per origin, with many requests multiplexed over it

* Server push functionality / server-initiated streams

* current implementations mean HTTP/2 works over TLS _only_ - neither Firefox nor Chrome currently supports unencrypted connections. I presume this also keeps things simpler with things like proxies in the middle.

* TLS implementations must now support SNI too, so HTTP/2 is basically a forcing function for SNI support, which is awesome.
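
On the SNI point, it's easy to see the mechanism from Python (the hostname is
a placeholder): the client names the site it wants inside the TLS handshake,
so one IP can serve many certificates.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # server_hostname is the SNI value -- it selects the right certificate
    # on hosts that serve multiple sites from one address
    with ctx.wrap_socket(socket.create_connection(('example.com', 443)),
                         server_hostname='example.com') as s:
        print(s.version())   # e.g. 'TLSv1.2'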

------
0x006A
what are the current http/2 server options?

~~~
tootie
Undertow and Jetty are both Java options that support HTTP/2. I've been
playing with both and it's still very rough around the edges (you have to mess
with the boot classpath to get ALPN to work), but it does work. F5 are
supporting it now too.

~~~
pimlottc
From the presentation, Jetty's smart push feature sounds really cool, do you
know if there's more information available on that feature?

~~~
tootie
Basically it records requests per referrer, so the next time a page is
requested it can guess which subsequent requests will be made and push those
resources. Documentation is still sparse, and a lot of it is carryover from
their SPDY features.
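
The idea itself is simple enough to sketch (this is the concept as described
above, not Jetty's actual API; the names are made up):

    from collections import defaultdict

    # page path -> resources that were requested with that page as Referer
    push_map = defaultdict(set)

    def on_request(path, referer=None):
        if referer:
            # learn: e.g. /style.css tends to follow /index.html
            push_map[referer].add(path)
        # resources worth pushing alongside this response
        return sorted(push_map[path])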

