

Analysis of SPDY and TCP Initcwnd (2012) - dedalus
http://tools.ietf.org/html/draft-white-httpbis-spdy-analysis-00

======
bsdetector
> SPDY worked well for Page A (many smaller images) ... For Page B (many
> images more typical size), the results were a bit more hit-or-miss, with
> SPDY not always resulting in improved performance. ... Page C (small number
> of large images), on the other hand, resulted in almost universally worse
> performance with SPDY than with HTTPS.

> [SPDY] showed worse average performance than HTTPS when packet loss was 1%
> ... With a typical HTTPS download of a web page, the browser will open a
> large number of simultaneous TCP connections. In this case, a random packet
> loss will cause the one affected connection to temporarily reduce its
> congestion window (and hence the effective data rate) ...

In other words, it's a new, much more complicated protocol, with layer
violations, that squeezes everything into one pipe, and it's going to end up
being used over several connections anyway because of how TCP reacts to loss.
And the main performance benefit is for pages with many small resources.

> Google ... report a 15.4% improvement in page load time

Vs _non-pipelined_ HTTP. Pipelined HTTP can also send many small resources
efficiently, which is where SPDY (now HTTP/2) gets most of its performance
benefit.
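For the avoidance of doubt about what pipelining means here: the client writes several requests back-to-back on one connection before reading any response, then parses the responses in order. A minimal sketch of the byte stream such a client would send (host and paths are made-up examples):

```python
def pipelined_requests(host, paths):
    """Build the byte stream a pipelining HTTP/1.1 client writes in one go."""
    reqs = []
    for path in paths:
        reqs.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        )
    return "".join(reqs).encode("ascii")

payload = pipelined_requests("example.com", ["/a.png", "/b.png", "/c.png"])
# A client would issue a single sock.sendall(payload) and then read the
# three responses sequentially from the same socket, in request order.
```

This gets many small resources onto the wire in one round trip, same as SPDY's multiplexing, minus out-of-order responses.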

Google could have just made a pipelining implementation that worked only over
SSL (so no proxies/antivirus in the middle) and only when the server signaled
support, and achieved almost all of the performance benefit of SPDY and HTTP/2
without needing a new protocol at all. With the server signaling support, the
client wouldn't have to guess and test for broken pipelining; it could spam
requests from the start. It would have been slightly slower and slightly less
robust, but still captured the lion's share of the benefit.

Google certainly understands fractional performance improvements. I just wish
they hadn't made it so complicated.

