

Google SPDY Protocol Would Require Mass Change in Infrastructure - lmacvittie
http://devcentral.f5.com/weblogs/macvittie/archive/2009/11/17/google-spdy-protocol-would-require-mass-change-in-infrastructure.aspx

======
radu_floricica
I may not be very knowledgeable in this area, but my first instinct is that I
don't much like the kind of infrastructure that needs to peek into my HTTP
headers.

Anyway, Google is in a unique position to push this kind of change. It has a
>1% browser, and a hefty slice of web servers (Google and YouTube alone add up
to something impressive). So this kind of protocol is worth pushing just to
give Chrome users on Google sites an edge. The rest of the sites and browsers
have a good chance of following, but they aren't essential for declaring the
protocol a success.

~~~
easp
I think the strategy Google is pursuing with Chrome, Chrome OS, SPDY, and even
Google Docs is to make web-centric computing more widespread, both among the
established base of existing desktop and laptop users and among non-users like
people in developing countries and people who aren't doing mobile computing
yet. Google benefits twice over: they are well positioned to monetize this
type of computing, through both advertising and fees for services like Google
Apps, and this kind of computing devalues the cash cows of their largest
rival.

The tactics they are using are to drive down costs (e.g. give Google Apps away
to small businesses, and make it a lot cheaper than running Exchange,
deploying Office, etc., for larger orgs) and to enhance the overall experience
(e.g. make web apps faster via SPDY, fast JavaScript, etc.).

Against this background, projects like Chrome don't have to achieve market
dominance to be successful, they just have to achieve broad influence. I don't
think just delivering better performance when Chrome users visit Google's
properties counts as success, though; I think they need to get other browsers
and services to change too.

I think SPDY is most likely to take off on mobile devices. First, latency is a
much bigger issue for cellular data than it is for WiFi, etc. Second, it looks
like Google is going to get decent browser share on mobile devices. If Google
implements SPDY in the Android browser, then makers of mobile web apps will
likely embrace it. If that happens, then Apple et al. will feel the need to
support it as well, lest the iPhone end up at a big disadvantage.

~~~
radu_floricica
> Against this background, projects like Chrome don't have to achieve market
> dominance to be successful, they just have to achieve broad influence.

Good point. After all, GMail's most immediate and important result was that
every other mail provider dramatically increased its storage space. At the
time Yahoo offered only 4 MB for free; within a matter of weeks that jumped
almost a thousand-fold. Same with Chrome and JavaScript speed in Firefox.

------
pilif
While I agree, and I really don't see any chance of such a disruptive change
happening to the web, it also says a lot that this article is posted on the
website of a company that sells HTTP load balancers.

You see? Either SPDY is quick enough to make them unnecessary (unlikely), or
(more likely) they are not interested in the huge development effort ahead of
them should SPDY take off (which it won't).

~~~
TFrancis
Right. It's almost as if F5 forgot that, for application owners, the
infrastructure isn't an end in itself.

------
jacquesm
SPDY has a few good ideas. One that could be implemented fairly easily is HTTP
header compression; it wouldn't be very hard to make that change in a
backwards-compatible fashion.
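As a rough sketch of what's on the table (the header block below is made up,
and the numbers are only illustrative), simply deflating a typical request's
headers already shrinks them substantially:

```python
import zlib

# A hypothetical HTTP request -- verbose, repetitive ASCII, which is
# exactly what DEFLATE is good at.
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:1.9) Gecko/20091117 Firefox/3.5\r\n"
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Cookie: session=abc123; prefs=compact\r\n"
    "\r\n"
)

raw = headers.encode("ascii")
compressed = zlib.compress(raw, 9)
print(len(raw), len(compressed))  # the deflated block is noticeably smaller
```

And since most requests in a session repeat nearly all of these headers, a
compressor that keeps its window across requests (as SPDY proposes) would do
even better than this one-shot number.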

Most of the rest of the changes remind me of IPv4 vs. IPv6: if it is that much
better, how come we're still stuck in IPv4 land? Installed base is a fantastic
way to limit your freedom to make changes. If SPDY is going to require both
server _and_ client (and proxy) modifications in order to function, and is not
backwards compatible, then I don't give it much chance of being implemented.
It's nice to see people thinking about these issues, though.

~~~
briansmith
HTTP header compression is already available in HTTPS with TLS. You can
compress the headers and the data as one GZIP stream. You can even use the
content of a previous response to further compress a subsequent response. (In
other words, HTTP-level compression is like a ZIP file, where each document is
compressed individually, whereas TLS-level compression can be made to work
like a tar.gz, where all documents are compressed as a single stream,
maximizing sharing.) For maximum efficiency, you need HTTP keepalive (but that
is true even without using TLS compression).
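To make the ZIP-vs-tar.gz distinction concrete, here's a small zlib sketch
(the two "responses" are made up; real savings depend on how much the
documents actually share):

```python
import zlib

# Two hypothetical HTTP responses that share a lot of boilerplate.
resp1 = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html><body>page one</body></html>"
resp2 = b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html><body>page two</body></html>"

# "ZIP-style": each response is compressed on its own, so nothing is shared.
separate = len(zlib.compress(resp1, 9)) + len(zlib.compress(resp2, 9))

# "tar.gz-style": one compressor object spans both responses, so the second
# one can be encoded mostly as back-references into the first.
c = zlib.compressobj(9)
shared = len(c.compress(resp1)) + len(c.compress(resp2)) + len(c.flush())

print(separate, shared)  # the shared stream comes out smaller
```

TLS-level compression behaves like the second case because the compression
context lives for the whole connection, not for a single message.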

Firefox just recently added support for this. Apache has had support for it
for a while, depending on how your OpenSSL is configured.

If browsers and web servers implemented the combination of the NULL encryption
cipher + NULL message authentication code, then you could use TLS just for
compression, without having to pay the cost of cryptographic operations.
However, browser makers don't want to do this, because they are afraid users
will think "https:// means secure," when https:// just means TLS+HTTP, which
may or may not have useful security properties.

Also, server admins usually don't want to enable long-lived connections
because Apache's default way of handling them is stupid. They don't know that
load balancers, servers like nginx, and other configurations of Apache do not
have the same problems. This is made worse by the fact that all the advice on
the internet states that keepalive is bad.
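For what it's worth, the Apache knobs involved are just a few directives;
something like this (the values are illustrative, not a recommendation) keeps
connections alive long enough for connection-level compression to pay off
without tying up workers forever:

```apache
# Illustrative Apache httpd settings -- tune for your own workload.
KeepAlive On
# Cap the number of requests per connection (0 = unlimited).
MaxKeepAliveRequests 100
# Close idle connections quickly so workers aren't held hostage.
KeepAliveTimeout 5
```

The "keepalive is bad" advice mostly dates from prefork setups where every
idle connection pinned a whole process; with an event-driven front end that
cost largely disappears.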

------
AndrewDucker
I have to agree - it does seem likely that the changes from SPDY will find
themselves rolled into later versions of HTTP rather than an entirely new
protocol being used.

~~~
wmf
This is just a matter of terminology. If "HTTP 2.0" is radically different on
the wire from 1.1 then it is both a later version of HTTP and an entirely new
protocol.

