

How speedy is SPDY? [pdf] - ldng
https://www.usenix.org/sites/default/files/conference/protected-files/nsdi14_slides_wang.pdf

======
bsdetector
> Most performance impact of SPDY over HTTP comes from its single TCP
> connection

This is not a surprise at all, because Google _never tested SPDY against HTTP
pipelining_. And at least in their mobile test they included the TCP
connection time for HTTP but not for SPDY; I suppose their test software just
reused the same SPDY connection, since the mirrored web pages were all served
from the same IP.

They compressed sensitive headers together with attacker-controlled data,
leading to the CRIME attack.
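
For illustration, here's a minimal sketch in Python of the length-oracle idea
behind CRIME (the secret value and candidate alphabet here are invented):

    # When a secret (e.g. a session cookie) is compressed together with
    # attacker-controlled data, a correct guess compresses better, so the
    # length of the encrypted request leaks the secret byte by byte;
    # encryption hides content, not size.
    import zlib

    SECRET = "sessionid=7f3a9c"  # hypothetical cookie value

    def compressed_len(attacker_data):
        return len(zlib.compress((attacker_data + SECRET).encode()))

    # Guess the character that follows the known prefix "sessionid=".
    # (A real attack pads and repeats guesses to make the one-byte
    # difference in compressed size reliable.)
    candidates = "0123456789abcdef"
    best = min(candidates, key=lambda c: compressed_len("sessionid=" + c))
    print("best-compressing guess:", best)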

They had a priority inversion causing Google Maps to load much slower through
SPDY than HTTP.

This new protocol is a complete mess, from beginning to end.

~~~
Lukasa
Leaving aside the technical statements about SPDY, the reality of HTTP
pipelining is that no-one uses it. According to Wikipedia, Opera is the only
major browser that ships with pipelining enabled. Most intermediaries don't
support pipelining either.

Pipelining was a well-intentioned feature which didn't solve the core problem:
namely, that a big or slow request can block you from doing anything else for
a really long time unless you open another TCP connection.
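
A toy illustration of that in Python (the service times are invented):

    # With pipelining, responses must come back in request order, so every
    # response queued behind a slow one inherits its delay.
    service_ms = [900, 30, 30, 30]  # one slow response, three fast ones

    # Pipelined on a single connection: response i finishes only after all
    # earlier responses have been fully sent.
    finish, t = [], 0
    for s in service_ms:
        t += s
        finish.append(t)
    print(finish)              # [900, 930, 960, 990]

    # Idealized multiplexing (SPDY-style) lower bound: interleaved frames
    # mean a fast response need not wait for the slow one to finish
    # (this ignores bandwidth sharing between streams).
    print(sorted(service_ms))  # [30, 30, 30, 900]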

~~~
bsdetector
Except that Microsoft tested SPDY against pipelining and found that pipelining
was essentially just as good. So we're left with a situation where Google
could have used HTTP pipelining over SSL (so no buggy proxies interfere, just
like with SPDY) and gotten pretty much all the benefit
with no extra complications at all, but instead there's old HTTP and a new,
much more complicated protocol.

And this "head of line blocking" problem... who said it was a problem, Google?
In reality you have 4 or more connections that automatically work like spread
spectrum, so most resources aren't stuck behind a big or slow request. But
even if this were an actual problem, a simple hint in the HTML that a resource
might take a while, telling the browser to put other ones on a separate
connection, would solve this problem with almost no extra complexity.

~~~
jgrahamc
_Except that Microsoft tested SPDY against pipelining and found that
pipelining was essentially just as good._

Can you point to that?

~~~
youngtaff
I think parent is referring to
[http://research.microsoft.com/pubs/170059/A%20comparison%20o...](http://research.microsoft.com/pubs/170059/A%20comparison%20of%20SPDY%20and%20HTTP%20performance.pdf)

It's a bit sketchy on the details and data - reading it, I certainly end up
with more questions than answers.

------
Mojah
Very nice research, kudos to everyone involved.

I agree with the conclusions, especially the very last one.

> To improve further, we need to restructure the page load process

To fully utilise the potential of HTTP/2, we will have to rethink the way we
create and manage websites. I've posted more thoughts on this on my blog:
[https://ma.ttias.be/architecting-websites-http2-era/](https://ma.ttias.be/architecting-websites-http2-era/)

------
YZF
It's not surprising that introducing high loss through a network emulator
results in reduced performance of a single TCP connection vs. multiple
connections. That's because there's a relationship between the maximum
bandwidth a single TCP connection can carry and the packet loss % due to TCP's
congestion avoidance. Introducing "fixed" packet loss through an emulator
isn't necessarily a good representation of a real network where packets would
be lost due to real congestion (an overflowing queue).

Throwing many TCP connections into a congested network can let you get a
higher share of that limited pipe though...
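
As a back-of-the-envelope check, the Mathis et al. approximation for the
steady-state throughput of a single long-lived TCP connection, BW <= (MSS /
RTT) * (C / sqrt(p)), makes the effect concrete (the numbers below are just
plausible defaults):

    # Rough model of TCP throughput under random loss, per the Mathis
    # et al. approximation; all constants are illustrative.
    from math import sqrt

    MSS = 1460   # bytes per segment
    RTT = 0.05   # 50 ms round-trip time
    C   = 1.22   # model constant

    def max_bw_mbps(loss_rate, n_connections=1):
        per_conn = (MSS / RTT) * (C / sqrt(loss_rate))  # bytes/sec
        return n_connections * per_conn * 8 / 1e6

    print(max_bw_mbps(0.01, 1))  # one connection at 1% loss: ~2.9 Mbps
    print(max_bw_mbps(0.01, 6))  # six connections: roughly six times that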

~~~
eggnet
Wireless networks often have a certain rate of packet loss unrelated to
congestion, due to a poor signal or interference.

It's somewhat ironic, because better performance on wireless networks is one
of the main purported benefits of SPDY.

------
moyix
Kudos to them for releasing their data and tools [1]! This is how science
should work.

[1]
[http://wprof.cs.washington.edu/spdy/data/](http://wprof.cs.washington.edu/spdy/data/)

------
Animats
That's a nice study.

The main result is that most of the benefit comes from putting everything
through one TCP pipe. This, of course, only works if almost everything on a
site comes from one host. This is a good assumption for Google sites, which
communicate only with the mothership. It's not for most non-Google sites.

~~~
josteink
And for those sites, you can always use HTTP pipelining and avoid the whole
SPDY can of worms.

Looking at the facts, it seems pretty obvious that whatever theoretical gains
SPDY offers in select scenarios, this super-minor gain is not worth the
associated complexity cost.

Not to mention I don't like the idea of Google not only running the world's
tracking networks, the world's most popular browser, and the most popular
websites, but now also dictating internet protocols without taking input from
other parties.

~~~
youngtaff
What's the complexity cost in SPDY or HTTP/2's case?

For most optimised HTTP/1.x sites there's already a complexity cost of merging
JS files, merging CSS files, and building sprites - including the tradeoff of
getting the bundles right, which of course reduces cacheability.
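
As a rough illustration of that cacheability tradeoff (file counts and sizes
are invented):

    # Bundling n files into one asset means a change to any single file
    # invalidates the whole bundle in every client's cache.
    n_files, avg_kb = 20, 15
    bundle_kb = n_files * avg_kb  # 300 KB bundle

    # After one file changes, unbundled clients re-fetch only that file;
    # bundled clients re-fetch everything.
    print("unbundled re-download:", avg_kb, "KB")   # 15 KB
    print("bundled re-download:", bundle_kb, "KB")  # 300 KB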

~~~
josteink
> For most optimised HTTP/1.x sites there's already a complexity cost of
> merging JS files, merging CSS files, and building sprites - including the
> tradeoff of getting the bundles right, which of course reduces cacheability.

And all of this is a build-time problem.

If we're going to engineer the HTTP protocol to solve build-tooling and
development-related problems, we might as well add JS linting and minifying to
HTTP itself as well.

Seriously: This problem is best solved elsewhere.

~~~
youngtaff
You've got it the wrong way around: these aren't build-tooling and
development-related problems; they're problems with HTTP/1.x that we chose to
solve using the build process.

------
Hengjie
Trivia: the PDF is in a folder called "protected-files" - LOL

