
Enabling HTTP/2 for Dropbox web services: experiences and observations - 0xmohit
https://blogs.dropbox.com/tech/2016/05/enabling-http2-for-dropbox-web-services-experiences-and-observations/
======
bsdetector
> _Be careful with enabling HTTP/2 for everything, especially when you do not
> control the clients. As HTTP/2 is still relatively new, from our experience,
> some clients/libraries and server implementations are not fully compatible
> yet._

One of the main excuses given for not enabling HTTP/1.1 pipelining was the
small amount of software that didn't handle it correctly. According to Google,
we needed a new protocol so we could start out fresh, with no
incorrectly-implemented software to work around.

But according to this report, the browser with the largest market share, made
by the creators of HTTP/2, had an implementation bad enough that a workaround
had to be found for it.

For history's sake we should recognize now that this buggy-software argument
for not enabling pipelining was never valid. Not enabling pipelining was used
as a political tool to prevent HTTP/1.1 from being on par performance-wise
with the new protocol people wanted for other reasons, and that's also why
Google never once tested SPDY against pipelining.

~~~
davej
> Not enabling pipelining was used as a political tool to prevent HTTP 1.1
> from being on par performance-wise with the new protocol people wanted for
> other reasons

I'm curious, what are the politics behind this? What are the 'other reasons'?

~~~
josteink
> I'm curious, what are the politics behind this? What are the 'other
> reasons'?

Google is very good at seeing their own needs and deciding that what is good
for them, at their Gigascale, must be good for everyone else on the internet
too.

And then they push it in their own products and in their own browser, without
any proper concern for internet standards or the cooperative processes by
which they are established.

Like they did with SPDY, which is now the overly complex and buggy HTTP/2.0.

If they didn't have their own browser, they would not be in a position to do
things like this. I sometimes wish for an antitrust case against Google: a
ruling that they cannot operate major websites and ship their own browser at
the same time, much like the ruling against Microsoft over bundling Internet
Explorer.

~~~
phamilton
> Google is very good at seeing their own needs and deciding that what is good
> for them

Even within Google the Chrome team is known to make moves that piss off others
at the company. The Chrome/Flash debacle last summer was a good example. The
AdX teams, like the rest of the ad industry, were not thrilled with the change
being forced on the industry.

~~~
ascagnel_
They'll be even less happy today [0].

[0] http://arstechnica.com/information-technology/2016/05/html5-by-default-googles-plan-to-make-chromes-flash-click-to-play/

------
binaryanomaly
Great to see Dropbox pushing forward with HTTP/2.

I find it especially remarkable that Dropbox co-sponsored the nginx HTTP/2
module. This really brings benefits to all of us instead of treating such
improvements as company-internal secrets. Thanks!

------
Mojah
While they don't specifically mention it in the post, they probably had to put
some effort into making sure ALPN is supported [1]. It is in their case,
which means they either run a custom-compiled version of nginx (built against
either OpenSSL 1.0.2 or LibreSSL) or have a system-wide up-to-date version of
OpenSSL.
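
For example, here is a minimal sketch of such a check with Python's ssl
module (3.5+; the hostname is just a placeholder). On a Python built against
an OpenSSL without ALPN, set_alpn_protocols() raises NotImplementedError:

```python
import socket
import ssl

# Probe which application protocol a server negotiates via ALPN.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

host = "www.dropbox.com"  # placeholder target
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Prints "h2" if the server speaks HTTP/2 over ALPN.
        print(tls.selected_alpn_protocol())
```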

Either way, nice move Dropbox!

[1] https://ma.ttias.be/day-google-chrome-disabled-http2-nearly-everyone-may-15th-2016/

------
msoad
This is great. I'd love to see more front-end load numbers like these.

As we move to HTTP/2 and JavaScript-heavy applications become the norm, we
should stop bundling all of our JavaScript into a single big file. Splitting
it up will help a lot with caching, because right now we invalidate the whole
bundle if a single line in the code base changes. A lot of front-end build
systems are still optimized for HTTP/1.1.

~~~
jwr
It depends what you mean by "bundling". For ClojureScript, the output is
passed through the Google Closure compiler with advanced optimizations. The
result is neither a concatenation nor just a minification -- it is a rewritten
program, containing just the parts that are needed, significantly optimized.

So the big resulting file is not a "bundle" and can't easily be split into
parts.

~~~
mikegedelman
For a large application, I wonder if you could configure Closure to output
multiple files. Maybe libraries and such in one file and client code in
another? I would imagine you could at least split the code there.

~~~
dragonne
Yes, Closure Compiler can be made to generate multiple files in this way.
IIRC you can give it multiple entry points and it will generate files with
the code unique to each entry point, plus one with the shared code.

------
melle
The thing I find odd is the bugs in Chromium. Why would Google drop support
for SPDY if their HTTP/2 implementation contains such (noticeable?) bugs?

~~~
tracker1
Because it's difficult to support both, and HTTP/2 is the future. There's a
reason they have regular, automated updates enabled by default.

------
dedalus
It would be nice to see the reduction in response time for an average GET of
a file. I see that the author does say response times were similar to those
seen on the canary, but I was expecting some benefit over HTTP/1.1. The
reduction in bandwidth might be great, but the trade-off there is spending
client CPU, and I wanted to see a breakdown of the perf improvement.

------
wildpeaks
If you want to test an HTTP/2 config, here are some good tools you can use:
https://blog.cloudflare.com/tools-for-debugging-testing-and-using-http-2/

I suspect we'll see more HTTP/2 servers now that Ubuntu 16.04 has been
released.

------
aarkX
I just tried HTTP/2 on nginx 1.10.0 and got constant "client sent stream with
data before settings were acknowledged while processing HTTP/2 connection"
errors from many different client addresses. This was on a standard website
with regular browser traffic.

~~~
Lukasa
This is because nginx has some interesting assumptions about how the HTTP/2
preamble works. See this mailing list thread[0] for discussion about it. See
also the nginx bug[1].

The TL;DR is that nginx wants to do some weird stuff with flow control to
avoid the need to do internal buffering. nginx is definitely outside the RFC
here: what the client is doing is entirely acceptable.

[0]: https://lists.w3.org/Archives/Public/ietf-http-wg/2016AprJun/0174.html

[1]: https://trac.nginx.org/nginx/ticket/959
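
To make that concrete, here is a rough client-side sketch, using the Python
h2 library, of the perfectly legal behaviour nginx trips over (host and path
are placeholders, error handling omitted): the client opens a stream and
sends body data right after the preface, spending the default 65,535-byte
flow-control window without waiting for the server's SETTINGS to be
acknowledged.

```python
import socket
import ssl

import h2.connection

host = "example.com"  # placeholder

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2"])
sock = ctx.wrap_socket(socket.create_connection((host, 443)),
                       server_hostname=host)

conn = h2.connection.H2Connection()
conn.initiate_connection()  # queues the preface plus our SETTINGS

# Open a stream and send body data straight away. The RFC allows
# this: until told otherwise, the client may spend the default
# 65,535-byte flow-control window.
conn.send_headers(1, [
    (":method", "POST"),
    (":path", "/upload"),
    (":scheme", "https"),
    (":authority", host),
])
conn.send_data(1, b"x" * 16384, end_stream=True)

# Everything goes out in one flight: preface, SETTINGS, HEADERS, DATA.
sock.sendall(conn.data_to_send())
```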

~~~
Matthias247
I just read the mailing list post that was referenced in the article
(http://mailman.nginx.org/pipermail/nginx-devel/2016-May/008211.html) and
found nginx's flow-control handling quite peculiar there too. Setting the
initial flow-control window to 0 and then increasing it based on
content-length will increase latency a lot and opens the window for lots of
interoperability bugs, e.g. because the client has already started a stream
and sent data (based on the default flow-control window), which then gets
rejected and might not be retried (depending on the client library's
implementation).
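
A rough server-side sketch of that trick, using the Python h2 library (an
illustration of the idea, not nginx's actual code):

```python
import h2.config
import h2.connection
import h2.settings

# Server side: advertise a zero initial flow-control window so that
# new streams cannot carry body data until we explicitly allow it.
config = h2.config.H2Configuration(client_side=False)
conn = h2.connection.H2Connection(config=config)
conn.initiate_connection()
conn.update_settings(
    {h2.settings.SettingCodes.INITIAL_WINDOW_SIZE: 0})

# Later, once request headers with a content-length of n bytes have
# arrived on stream_id, open the window just wide enough:
#     conn.increment_flow_control_window(n, stream_id=stream_id)
```

The race exists because the client may already have sent data against the
default 65,535-byte window before this SETTINGS frame reaches it.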

Would be interesting to know what nginx does when it receives no content-
length header, which is also valid.

From reading and implementing the HTTP/2 spec, the settings negotiation is
the biggest weak spot in my opinion, because it represents a big race
condition where the expectations of client and server probably won't match. I
would have preferred it if they had either put in a mandatory wait for the
SETTINGS ACK before streams can be opened, or made the default settings
(window size, HPACK table size) very low and only increasable during
negotiation. With the given HTTP/2 spec I would most likely try to be
conservative as a library or application author and just announce/use the
default settings or bigger ones in order to avoid compatibility problems. For
most servers this should be possible. However, for constrained devices lower
default settings would have been preferable.
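
As a sketch of that conservative approach (again with the Python h2 library;
`sock` is assumed to be an already-connected TLS socket with "h2" negotiated
via ALPN), a client can simply hold back DATA frames until the server has
acknowledged its SETTINGS, trading an extra round trip for immunity to the
race:

```python
import h2.connection
import h2.events

def wait_for_settings_ack(conn, sock):
    # Block until the peer ACKs our SETTINGS; after that it is safe
    # to send DATA without risking a mid-negotiation mismatch.
    while True:
        data = sock.recv(65535)
        if not data:
            raise ConnectionError("closed before SETTINGS ACK")
        events = conn.receive_data(data)
        sock.sendall(conn.data_to_send())  # e.g. ACK their SETTINGS
        for event in events:
            if isinstance(event, h2.events.SettingsAcknowledged):
                return
```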

~~~
Lukasa
Yeah, I agree that for constrained devices this is somewhat problematic,
though I'd say the only big concern there is header table size. Even that can
be coped with, worst case by just enforcing the lower limit from the get-go
and GOAWAYing if the client violates it.

But yes, nginx's HTTP/2 implementation is really quite strange to me.

------
geden
My own experience and observation: HTTP/2 is going to be very awkward to
enable on CentOS until OpenSSL is updated to 1.0.2 (which supports ALPN, as
required by Chrome).

I'm not prepared to custom-compile something like OpenSSL, especially when it
requires frequent updates.

------
therein
Seems like a very uneventful transition. Nice to see the impact of header
compression. And I've just noticed I've worked with some of the guys behind
this post, which made me smile.

------
diegorbaquero
The overall load time for clients would be a great statistic to have.

------
homero
Block those clients or we'll never move on

------
josteink
So basically HTTP/2.0 performs on par with HTTP/1.1 with pipelining, but has
a complexity which is a million times higher, and everyone working on it runs
into bugs and incompatibilities.

How about we just ditch HTTP/2.0 (aka the "Google forced it upon us"
protocol) and get back to something which is proven, IETF/W3C-based, not
binary, simple, and most importantly: actually works?

That'd be real nice.

Also let's not let Google create more internet protocols please. That'd also
be nice.

~~~
madeofpalk
> is not binary

Why is this a downside (apart from 'I can't telnet now')?

~~~
josteink
All major internet protocols to date have been plain text, which for almost
anyone makes them easy to learn, easy to inspect and diagnose without
specialised tools, and easy to extend without creating compatibility
concerns. We've already seen SPDY be incompatible between its few versions.

Binary ruins that fine tradition for the promise of 0.1% better performance.
And I think that's a terrible trade.

~~~
X-Cubed
TLS is binary

~~~
zimpenfish
But you can access the inner protocol by using `openssl s_client -connect
host:port` as a TLS/SSL wrapper, can't you?

(Edit: I realise you can likely also access/debug HTTP/2.0 streams with a CLI
tool - I am currently ambivalent about text vs binary streams.)

~~~
Matthias247
Yes, you can also access/debug HTTP/2 content with curl or nghttp2. And the
content of the request/response body streams is binary (in the sense that
there is no predefined encoding) -- just like in HTTP/1.1. The fact that
there is also a binary framing layer underneath shouldn't concern you as a
CLI user or application builder, just like you don't care how many IP packets
your request was split across.
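
For instance, a sketch with the Python hyper client library (nghttp2.org is
used here only as an example of an HTTP/2-enabled host): the request/response
API looks exactly like an HTTP/1.1 client, and the framing never surfaces.

```python
from hyper import HTTPConnection  # pip install hyper

# The binary framing stays below this API: you make a request and
# read a body, the same as with any HTTP/1.1 client library.
conn = HTTPConnection('nghttp2.org:443')
conn.request('GET', '/')
resp = conn.get_response()
print(resp.status)
print(resp.read()[:200])
```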

