
The Effect of Network and Infrastructural Variables on SPDY’s Performance (2014) - chetanahuja
http://arxiv.org/abs/1401.6508
======
ck2
https://docs.google.com/gview?url=http://arxiv.org/pdf/1401.6508v1.pdf

So would the same apply to HTTP/2.0, since the two are so similar?

    
    
         In summary, we deduce that SPDY loses its performance gains as a
         website is sharded more. However, these negative results are not ubiquitous
         and vary remarkably depending on the number of page resources. This
         raises a few questions about SPDY deployment. Are the benefits enough for
         designers and admins to restructure their websites to reduce sharding? What
         about third party resources that cannot be consolidated, e.g. ads and social
         media widgets? Can SPDY be redesigned to multiplex across domains? Is
         proxy deployment [29] rewarding and feasible as a temporary solution? The
         success of SPDY (and thereupon HTTP/2.0) is likely to be dependent on
         the answers to precisely these questions.
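
To put a number on the amortization argument, here's a toy model (all
constants are assumed round numbers, not figures from the paper): a
connection's setup cost (DNS + TCP + TLS handshakes) is shared across every
resource it carries, so sharding a page across more hosts raises the
per-resource overhead.

    # Toy model: assumed 4 RTTs of setup (DNS + TCP + TLS) per
    # connection, and a page of 40 resources split evenly over shards.
    SETUP_RTTS = 4
    RESOURCES = 40

    def setup_overhead_per_resource(shards):
        # Each shard carries RESOURCES/shards resources, so its setup
        # cost is amortized over fewer and fewer requests.
        return SETUP_RTTS / (RESOURCES / shards)

    for shards in (1, 2, 4, 8, 16):
        print(f"{shards:2d} shards -> "
              f"{setup_overhead_per_resource(shards):.2f} RTTs of setup per resource")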

~~~
acdha
That would apply, but it's somewhat odd to see the tone of surprise about
something that has been widely described as an optimization-turned-antipattern
for SPDY and HTTP/2 since at least 2012 or so. I believe Chrome, at least, has
also been optimized so that hostnames sharing the same IP (or SSL cert?) are
collapsed onto the same existing connection rather than opening new ones.
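
A minimal sketch of that coalescing rule (illustrative only, not Chrome's
actual implementation): reuse an open connection when the new hostname
resolves to the IP the connection already uses and the connection's
certificate also covers that hostname.

    from collections import namedtuple

    Conn = namedtuple("Conn", ["ip", "cert_sans"])

    def san_matches(san, host):
        # Exact match, or a single-level wildcard like "*.example.com".
        # (Real certificate validation has more rules; this is a sketch.)
        if san == host:
            return True
        return san.startswith("*.") and host.endswith(san[1:])

    def can_coalesce(conn, host, resolved_ip):
        return (conn.ip == resolved_ip and
                any(san_matches(s, host) for s in conn.cert_sans))

    conn = Conn(ip="203.0.113.7", cert_sans=["example.com", "*.example.com"])
    print(can_coalesce(conn, "img.example.com", "203.0.113.7"))  # True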

Opening all of those connections is also something of an anti-pattern even for
HTTPS, depending on how much data you're exchanging relative to the SSL
handshake cost – see e.g.:

https://insouciant.org/tech/network-congestion-and-web-browsing/#Google_Kitten_Search
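
As rough arithmetic (assumed round numbers): a TCP handshake costs 1 RTT and a
classic full TLS handshake roughly 2 more, so for small exchanges the setup
dominates the total time.

    RTT_MS = 100        # assumed round-trip time
    SETUP_RTTS = 1 + 2  # TCP handshake + classic full TLS handshake

    for payload_rtts in (1, 5, 20):
        total = SETUP_RTTS + payload_rtts
        print(f"payload {payload_rtts:2d} RTTs: setup is "
              f"{SETUP_RTTS / total:.0%} of {total * RTT_MS} ms total")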

~~~
chetanahuja
The tone of surprise (in the paper as well as in this thread) might have
something to do with how the whole HTTP/2 stack has been sold as an
unmitigated "good" for all and sundry. The paper also mentions that the SPDY
whitepaper (http://www.chromium.org/spdy/spdy-whitepaper) presents exactly the
opposite results in the presence of packet loss. The language in the paper
stops short of an outright accusation, but there's an implication that the
promises of the new protocols have been oversold by their backers (mostly
Google).

------
yelkhatib
Hello, I am Yehia, the lead author of this work. The published paper:
http://dx.doi.org/10.1109/IFIPNetworking.2014.6857089
A PDF is available from my webpage:
http://www.comp.lancs.ac.uk/~elkhatib/
Feel free to ping me with any queries.

------
aavegmittal
hmm… this caught my eye… "Immediately, we see that SPDY is far more adversely
affected by packet loss than HTTPS is. This has been anticipated in other work
[29] but never before tested. It is also contrary to what has been reported in
the SPDY white paper [2], which states that SPDY is better able to deal with
loss."

~~~
KaiserPro
Yup. Each dropped packet pauses the _entire_ connection until it's
retransmitted.
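
A toy simulation of that head-of-line blocking (the arrival times and segment
numbers are made up for illustration): three SPDY streams share one TCP
connection, segment 2 is lost, and TCP's in-order delivery stalls all three
streams until the retransmission lands.

    # Packets as (arrival_time_in_RTTs, tcp_sequence_number, spdy_stream).
    # Segment 2 (stream B) is lost and retransmitted at t=2.
    arrivals = [(0, 1, "A"), (0, 3, "C"), (0, 4, "A"),
                (1, 5, "B"), (1, 6, "C"),
                (2, 2, "B")]

    received, next_seq = {}, 1
    for t, seq, stream in arrivals:
        received[seq] = stream
        # TCP releases data to the application strictly in sequence
        # order, so a gap at seq 2 buffers everything behind it.
        while next_seq in received:
            print(f"seg {next_seq} (stream {received[next_seq]}) delivered at t={t}")
            next_seq += 1

Segments 3-6 arrive on time but sit in the receive buffer until t=2, so
streams A and C stall even though none of _their_ packets were lost.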

Fast-forward 5 to 10 years to a time when the average webpage is 10-100 MB in
size [1], and SPDY will be the bottleneck, not the network or the serving
infrastructure.

Of course, five to ten years is also about the time that HTTP/2 will start to
see widespread adoption...

Multiplexed TCP is just not a good idea for high-bandwidth, low-latency file
delivery. (HTTP is basically a very wordy file system interface.)

If you look at any of the systems built for moving files around, they all
either use a custom UDP protocol or many streams of TCP (or rely on being on a
LAN); there's a sketch of the parallel-TCP approach below.

[1] http://www.websiteoptimization.com/speed/tweak/average-web-page/
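
For illustration, the "many streams of TCP" approach in miniature (the URL
and file size here are hypothetical): split the file into byte ranges and
fetch them over parallel connections with HTTP Range requests.

    import concurrent.futures
    import urllib.request

    URL = "http://example.com/big.file"   # hypothetical URL
    SIZE = 100 * 1024 * 1024              # assumed 100 MB file
    STREAMS = 8                           # parallel TCP connections

    def fetch_range(start, end):
        # One Range request per connection; each gets its own TCP
        # stream, so a loss on one doesn't stall the others.
        req = urllib.request.Request(
            URL, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return start, resp.read()

    chunk = SIZE // STREAMS
    ranges = [(i * chunk, min(SIZE, (i + 1) * chunk) - 1)
              for i in range(STREAMS)]
    with concurrent.futures.ThreadPoolExecutor(STREAMS) as pool:
        parts = pool.map(lambda r: fetch_range(*r), ranges)
    data = b"".join(p for _, p in sorted(parts))  # reassemble in order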

~~~
xyzzyz
_Yup. Each dropped packet pauses the entire connection until it's
retransmitted._

Yeah, that's why the next step after SPDY/HTTP2 adoption is QUIC, which moves
the web to UDP and solves the head-of-line blocking problem.

[https://en.wikipedia.org/wiki/QUIC](https://en.wikipedia.org/wiki/QUIC)
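
Re-running the toy simulation from upthread with QUIC-style per-stream
ordering (same made-up arrival pattern): the loss of B's first packet now
stalls only stream B.

    # Packets as (arrival_time_in_RTTs, stream, offset_within_stream).
    # Stream B's first packet is lost and retransmitted at t=2.
    arrivals = [(0, "A", 1), (0, "C", 1), (0, "A", 2),
                (1, "B", 2), (1, "C", 2),
                (2, "B", 1)]

    received, next_off = {}, {}
    for t, stream, off in arrivals:
        received.setdefault(stream, set()).add(off)
        next_off.setdefault(stream, 1)
        # Ordering is per stream, so a gap in B never buffers A or C.
        while next_off[stream] in received[stream]:
            print(f"stream {stream} pkt {next_off[stream]} delivered at t={t}")
            next_off[stream] += 1

Streams A and C now deliver everything at t=0 and t=1; only B waits for the
retransmission at t=2.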

~~~
userbinator
_which moves the web to UDP_

That sounds like it'd just cause more congestion and dropped packets if not
used carefully - and they'll eventually end up reinventing TCP on top of
UDP...
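
To make the "reinventing TCP" point concrete, here's a minimal stop-and-wait
reliability layer over UDP (sender side only; the framing, retry count, and
receiver are assumptions for the sketch). Sequence numbers, acks, and a
retransmission timer are exactly TCP's core machinery.

    import socket

    def send_reliable(sock, addr, chunks, timeout=0.2, retries=5):
        # sock is an already-created UDP socket (socket.SOCK_DGRAM).
        sock.settimeout(timeout)
        for seq, payload in enumerate(chunks):
            pkt = seq.to_bytes(4, "big") + payload  # 4-byte sequence header
            for _ in range(retries):
                sock.sendto(pkt, addr)              # (re)transmit
                try:
                    ack, _ = sock.recvfrom(4)       # wait for 4-byte ack
                    if int.from_bytes(ack, "big") == seq:
                        break                       # acked; next chunk
                except socket.timeout:
                    continue                        # timer fired; resend
            else:
                raise IOError(f"chunk {seq} never acknowledged")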

~~~
KaiserPro
or just create one virtual stream across multiple sockets

------
rp248
"Ironically the biggest sufferer is Google with a 20.2% increase in ToW"

------
bexp
Indeed, I'm tired of reading articles about how awesome SPDY and HTTP/2 are.
Why does nobody publish fair benchmarks on various networks with packet loss?

