
A 2x Faster Web - johns
http://blog.chromium.org/2009/11/2x-faster-web.html
======
nixme
The whitepaper has more information:
<http://dev.chromium.org/spdy/spdy-whitepaper>

Interesting parts:

\- "...make SSL the underlying transport protocol, for better security and
compatibility with existing network infrastructure." _Won't this break many
caching models?_

\- "...provides an advanced feature, server-initiated streams. Server-
initiated streams can be used to deliver content to the client without the
client needing to ask for it." _Nice, push without comet-style hacks._

 _EDIT: more items that stand out_

\- "SPDY implements request priorities: the client can request as many items
as it wants from the server, and assign a priority to each request"

From the protocol document: <http://dev.chromium.org/spdy/spdy-protocol>

\- "Content-length is not a valid header"

\- "Clients are assumed to support Accept-Encoding: gzip. Clients that do not
specify any body encodings receive gzip-encoded data from the server."

\- "The 'host' header is ignored. The host:port portion of the HTTP URL is the
definitive host." _I guess this supplants the need for SNI support. Edit: No,
it doesn't, since SPDY sits above SSL; the host isn't known until the secure
link has already been established._
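
A toy sketch of the multiplexing-with-priorities idea, in Python. This is
purely illustrative, not the actual SPDY wire format (see the protocol doc
above for that); the frame layout and URLs are made up:

    import struct

    # Toy frame layout: stream id (4 bytes), priority (1 byte),
    # payload length (2 bytes), then the payload. The real SPDY framing
    # is richer (control vs. data frames, header compression, etc.);
    # this only illustrates multiplexing with per-request priorities.
    def request_frame(stream_id: int, priority: int, url: str) -> bytes:
        payload = url.encode("utf-8")
        return struct.pack("!IBH", stream_id, priority, len(payload)) + payload

    # Three requests sent back-to-back on one connection, without waiting
    # for any response; the server can answer the higher priorities first.
    frames = [
        request_frame(1, 0, "http://example.com/index.html"),    # critical
        request_frame(3, 2, "http://example.com/style.css"),
        request_frame(5, 7, "http://example.com/tracking.gif"),  # deferrable
    ]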

~~~
sp332
The future of the internet is ubiquitous encryption. Better to break caching
models ASAP so that fixing them can be done incrementally.

~~~
timdorr
Provided we're using IPv6 exclusively. SSL removes the ability to do
name-based virtual hosting, which is the reason we didn't run out of IPs 5-10
years ago.

~~~
notauser
Given that IPv6 is a good thing(1), any technology change that brings forward
the timeline for IPv4 address exhaustion is also a good thing.

It is human nature that people don't do things that sound like hard work until
they have to, and the continuing use of IPv4 with NAT and other hacks falls
firmly into that camp.

(1) Every device becomes addressable again - I remember when it was normal to
assume that devices could be reached directly, and would be firewalled if
required. That led to a much greater number of people running services from
their machines. From the perspective of a startup, the idea that a client can
run data services without some horrible <nat-related> hack is really
interesting.

------
dkubb
I love that Google is focusing on how to speed up the web, but they are
missing the most obvious thing they could do that would have a dramatic impact
on the speed of the web:

Update their PageRank algorithm to take into account website speed, and then
publish this as a fact.

The idea would be to have GoogleBot measure the latency and overall page load
time, rank sites that respond faster higher, and rank slow sites lower.

One of the problems with PageRank is that most of how they measure site
quality is a mystery, but making it plain that speed is used in ranking would
bring website performance to the front of everyone's minds. They could use
metrics similar to those YSlow and PageSpeed use, and tell everyone to
optimize their sites using those tools as a starting point.
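
As a rough sketch of the kind of signal GoogleBot could record, here is a
hypothetical wall-clock fetch timer in Python (it ignores rendering,
subresources, and most of what YSlow and PageSpeed actually measure):

    import time
    import urllib.request

    def fetch_seconds(url: str) -> float:
        """Wall-clock time to download a single page: a crude speed signal."""
        start = time.monotonic()
        with urllib.request.urlopen(url) as response:
            response.read()
        return time.monotonic() - start

    # A crawler could log this per page and feed it into ranking.
    print(f"{fetch_seconds('http://example.com/'):.3f} s")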

~~~
ryanwaggoner
This is a horrible idea. Yes, it would result in increased focus by some
people on speed, but probably the same people who focus on "SEO". So now
content relevancy has yet another adversary in the battle for SERPs. There's a
LOT of really incredible, original content out there written by amazing people
who do it in their spare time and don't give two shits about their SEO or the
speed of their server, and now that content is going to slip further down the
rankings because it takes 2 seconds to load instead of 100 ms? No thanks.

~~~
dkubb
You are correct that SEO people would latch onto the idea and start producing
faster sites. But so would many legitimate sites that want to provide a good
user experience. Is that really so bad?

I think that site speed does correlate with a better user experience. A site
that loads quickly suggests that the person who made it cares more about my
experience than whoever made a bloated site that takes forever to load.

I will admit that it's certainly not as strong an indicator as the content,
inbound links, and other factors that are rumored to be part of PageRank, but
speed does matter. All other things being equal, I would much rather visit a
site that loads in 2 seconds than one that loads in 20 seconds.

~~~
smcq
You wait 2 seconds for a page load? You're amazingly generous.

------
swombat
It's impressive how Google are taking on challenges that pretty much no other
company could possibly address... replacing email... enhancing HTTP, etc...

They're the only company with the means to take on such massive, unprofitable
projects, and at the same time enough street cred that everyone won't
immediately assume they're trying to make the web proprietary. The fact that
they tend to open up these things from the get-go helps, too.

~~~
neilc
_They're the only company with the means to take on such massive, unprofitable
projects, and at the same time enough street cred that everyone won't
immediately assume they're trying to make the web proprietary._

They are hardly the only company doing this. Microsoft Research do a ton of
pretty visionary / far-reaching work, much of which has nothing to do with MS-
proprietary platforms. For example, the VL2 architecture for data center
networks: <http://sns.cs.princeton.edu/2009/10/new-datacenter-networks/>

~~~
JulianMorrison
Microsoft are long past the day when they were evil. It's just that, unfairly
or not, if they tried to do this people would be going through the license
with a fine-tooth comb in the certain expectation of traps.

I wonder if Microsoft will one day redeem themselves to the public like IBM
did.

~~~
rbanffy
Long past?!

I don't regard the shameful approval of OOXML as an ISO standard, and all the
questionable things Microsoft did in order to accomplish it, as a particularly
nice thing to do. A lot of NBs (national bodies) were formed specifically for
that purpose and never heard from again. As it is now, the damage to ISO seems
irreversible.

Also, there are the trolling "Linux infringes 4 bazillion of our patents"
empty threats (if they weren't empty, they would have acted on them) and the
failed attempt to sell some of those patents to trolls that would go after
Linux users. The fact that it failed (because a non-invited group won the
auction and donated the patents) makes it no less evil.

No. Being evil is in their soul.

They may be less evil now - I can agree on that. But that is much more
because they are far less relevant now than they were in the 90s than because
of any particular change of heart.

They deserve no redemption.

------
bensummers
Regarding the use of SSL by default, they had better require the use of the
SSL mode which sends the hostname before the crypto negotiation. Otherwise
that's a lot of IP addresses required for virtual hosting!
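
That mode is the TLS Server Name Indication (SNI) extension. A minimal
client-side sketch using Python's ssl module, with example.com standing in
for any virtual host:

    import socket
    import ssl

    # SNI puts the target hostname in the ClientHello, before a certificate
    # is selected, so one IP address can serve many SSL virtual hosts.
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        # server_hostname is what enables SNI (and hostname verification)
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())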

~~~
agl
We will.

~~~
drusenko
Even so, the current certificate issuing process is lengthy and expensive.
Requiring SSL would be a major hit to the mom-and-pop type websites that just
want a simple webpage on their own domain for their business.

They'd now have to pony up for an SSL cert, which at least doubles the website
ownership cost, as well as potentially go through a very complex process to
get the certificate issued.

It also completely eliminates the possibility of them having a free website on
their own domain, unless an easy and free method to obtain an SSL cert becomes
available.

The gain to them in these circumstances is negligible if they aren't doing
any e-commerce.

~~~
ams6110
Mom-and-pop brochure sites are not the sort who would be using SPDY anyway;
they'd just use HTTP.

~~~
cameldrv
Why? If this is successful, everyone should use it, and the whole web would
be faster with the default server configuration.

------
jeremyw
One imagines djb envisioning a single roundtrip for encrypted HTTP, built on
top of DNSCurve.

You have the public key via DNS, you send an encrypted request and receive a
verifiable encrypted response that includes content and begins the multiplexed
channel.

------
neilc
I'm impressed that they're able to get such good performance (and low
latency) while still running the traffic over SSL. For example, average page
load time over SSL is 1899 ms ("SPDY basic single-domain / SSL"), compared to
3112 ms for plain HTTP -- they don't show HTTPS, but presumably that would be
even slower.

------
shimon
Am I missing their explanation of how a SPDY-capable client and server
discover and make use of SPDY instead of HTTP?

For this to take hold in the wild, it has to be possible for me to run a
combined HTTP+SPDY server, and for a client connecting to it to automatically
make use of SPDY if it is capable. This must happen without user intervention
(i.e. we don't expect people to type spdy:// instead of <http://>).

It seems like this should be possible. Perhaps there could be a special server
response that instantly "upgrades" a TCP session from HTTP into SPDY. But I'm
not seeing anything about that in the docs; is this part of the SPDY plan
(yet)?
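
One plausible mechanism -- an assumption on my part, not something the SPDY
docs describe -- is HTTP/1.1's standard Upgrade header: the client asks over
plain HTTP, and a capable server switches the same TCP session. A Python
sketch, with the token "spdy/1" made up for illustration:

    import socket

    # Hypothetical discovery via the HTTP/1.1 Upgrade mechanism.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: Upgrade\r\n"
        "Upgrade: spdy/1\r\n"
        "\r\n"
    )
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        status = sock.recv(4096).decode("ascii", "replace").splitlines()[0]

    # "HTTP/1.1 101 Switching Protocols" would mean: stay on this TCP
    # session and start speaking SPDY; anything else falls back to HTTP.
    print(status)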

~~~
teej
This is merely the spec doc for the DVD. The combo VHS-DVD players will come
next.

------
antirez
I have a bad feeling about this potentially making the web layer more
complex. Also, the encryption bit may not be a good idea if you are not
Google...

In general I'm happier with web _standards_. Google doing research on this is
great, but it is not good if they unilaterally push solutions that will hit
the mass market anyway simply because they are Google.

The whole web, and the idea of HTTP APIs and so on, is based on the fact that
HTTP may not be perfect but is _trivial_ to use, implement, and so forth.
Please don't break this.

~~~
drusenko
I wouldn't be so afraid of Google being able to unilaterally effect change
without consensus. Even getting adoption for their Chrome browser seems to be
an extreme uphill battle.

------
dedalus
<http://www.w3.org/Protocols/HTTP-NG/http-ng-scp.html> had some similar
ideas, which, when combined with <http://www.faqs.org/rfcs/rfc998.html>, give
you something like SPDY.

But how to gain adoption of such a tech across clients and servers (which was
the problem with pipelining) will be an interesting question.

~~~
dedalus
<http://tools.ietf.org/html/draft-natarajan-http-over-sctp-00> also had
similar ideas, but SPDY is better because of its resource prioritization.

------
9oliYQjP
Am I missing something? How does "55% faster" translate into a 2x speed-up?

~~~
timdorr
2x = Goal

55% = Reality so far
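
If "55% faster" means 55% less load time (the post doesn't say precisely),
the two numbers are closer than they look:

    # A 55% reduction in load time, expressed as a speedup factor.
    speedup = 1 / (1 - 0.55)
    print(f"{speedup:.2f}x")  # ~2.22x, roughly the "2x" headline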

------
cakeface
Is this basically an RFC for HTTP version 2?

~~~
timf
It's mostly a session framework on top of SSL but under HTTP.

From <http://dev.chromium.org/spdy/spdy-whitepaper>

Q: Is SPDY a replacement for HTTP?

A: No. SPDY replaces some parts of HTTP, but mostly augments it. At the
highest level of the application layer, the request-response protocol remains
the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY
overrides other parts of the protocol, such as connection management and data
transfer formats.
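
In other words, the request-response semantics stay HTTP and only the wire
framing changes. A toy illustration in Python (the header names are
simplified, not the exact SPDY set):

    # What HTTP/1.1 puts on the wire as text...
    http_text = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

    # ...SPDY carries as name/value pairs inside binary, multiplexed,
    # compressed frames (toy representation):
    spdy_request = {
        "method": "GET",
        "url": "http://example.com/index.html",
        "version": "HTTP/1.1",
    }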

------
EugeneG
Wouldn't faster internet connections make this whole effort moot?

~~~
stevejohnson
No, because this would make the web faster over those faster connections as
well: much of page load time is round-trip latency rather than bandwidth, and
SPDY cuts round trips.

------
Towle
I can't help but read "SPDY" as "Spidey"... :(

