

HTTP Intermediary Layer From Google Could Dramatically Speed Up the Web - spahl
http://tech.slashdot.org/story/09/11/12/1943254/HTTP-Intermediary-Layer-From-Google-Could-Dramatically-Speed-Up-the-Web

======
jacquesm
The quickest way for a 'dramatic' speedup of the web is to install an ad
blocker.

Also, 'dramatic' speedups will only lead to pages that get loaded up with more
junk.

If that had not happened we'd already have an extremely fast browsing
experience. It's like memory and disk space: if the budget increases, there
will be some way to spend that budget.

If web pages load in under one second they'll be 'improved' until they load in
3 to 4 seconds again.

spdy:// ? I don't think so.

Let's drop some of that flash and turn on gzip compression (if you haven't
done that already) and make sure your cache headers are set properly.

That alone will probably give you a 50% boost.
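
Both of those are cheap to demonstrate. Here's a minimal sketch in Python of
a handler that gzips responses when the client supports it and sets a cache
header (a toy, not any particular production config):

    import gzip
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body>" + b"lorem ipsum " * 1000 + b"</body></html>"

    class GzipCachingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = PAGE
            # Only compress when the client advertised gzip support.
            gzipped = "gzip" in self.headers.get("Accept-Encoding", "")
            if gzipped:
                body = gzip.compress(body)  # ~12 KB of repetitive markup shrinks a lot
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            if gzipped:
                self.send_header("Content-Encoding", "gzip")
            # Tell clients and proxies they may cache this for a day.
            self.send_header("Cache-Control", "public, max-age=86400")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), GzipCachingHandler).serve_forever()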

~~~
dazzawazza
While your points are valid, for webapps to continue their march into
desktop app territory they need to decrease latency. I think that's what SPDY
is there for.

I'm sure a lot of fat will be added to apps, but if my AJAX requests are
snappier I am more than happy to support this.

~~~
jacquesm
from:

http://sites.google.com/a/chromium.org/dev/spdy/spdy-whitepaper

"To target a 50% reduction in page load time. Our preliminary results have
come close to this target (see below)."

The bigger chunk of that reduction seems to come from header compression.
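
You can get a feel for the potential with nothing fancier than zlib. The
header block below is made up but typical; SPDY's actual scheme additionally
primes the compressor with a shared dictionary of common header names, so
even the first request on a connection shrinks:

    import zlib

    # Typical browser request headers: lots of repeated names and values,
    # which is exactly what a compressor eats for breakfast.
    headers = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/532.5\r\n"
        "Accept: text/html,application/xml;q=0.9,*/*;q=0.8\r\n"
        "Accept-Encoding: gzip,deflate\r\n"
        "Accept-Language: en-US,en;q=0.8\r\n"
        "Cookie: session=abc123; prefs=dark\r\n"
        "\r\n"
    ).encode()

    compressed = zlib.compress(headers, 9)
    print(len(headers), "->", len(compressed), "bytes")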

SPDY is targeted to be placed between HTTP and TCP.

It 'only' requires changes to the client and the web servers.

That makes me a little skeptical about the value of their tests; I can't
imagine they got the top 500 websites to change their server architecture to
allow this test to take place.

They are very much focused on latency issues, but the only references they
give are for things related to page load time, which is only peripherally
affected by latency; latency has a _much_ bigger effect on single connections.

And the only gain from that would be to compress the headers.

If header compression is that big of a deal then I would suggest adding
optional header compression to the HTTP standard, not placing layers in
between HTTP and TCP.

And that layer would have to add zero overhead all by itself on single
connections, or the results would be negative instead of positive.

You can't really support this unless the client supports it as well. Think of
it as a specialized gateway function inside the browser that knows how to
tunnel HTTP over SPDY to hosts that support it; the key factor is interleaving
frames of content from different sources. Essentially it's a packet
multiplexer/demultiplexer, if I read it all right: it packs all the requests
on the sending side and unpacks them at the receiving end to go to their
respective handlers.

By the way, the 'server pushes resources before the client has asked for it'
is what has been making my webcam software functional for the last 15 years or
so.

I clued in to that in '95 or so using the server-push technology, and figured
that since I already knew the next request would be for another frame of the
same cam, why not send it right away instead of waiting for the browser to
ask for it...

------
RiderOfGiraffes
Out of interest I downloaded the page with curl, then stripped out the JS and
other stuff, leaving mostly just formatting and text.

    
    
    Before: 178845 bytes
    After :  33604 bytes

That's just the page, and doesn't include images or ads, but it does include
the text of the comments.
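
For anyone who wants to repeat the experiment, here's a rough Python
equivalent (the URL is a placeholder; point it at whatever page you want to
measure):

    import re
    import urllib.request

    url = "http://news.ycombinator.com/item?id=0000000"  # placeholder
    html = urllib.request.urlopen(url).read()

    # Rough version of the strip: drop <script> and <style> blocks wholesale,
    # leaving the markup and text behind.
    stripped = re.sub(rb"<(script|style)[^>]*>.*?</\1>", b"", html,
                      flags=re.S | re.I)

    print("Before:", len(html), "bytes")
    print("After :", len(stripped), "bytes")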

Just an observation.

------
seshagiric
Compressing HTTP content also means extra processing on both the client and
the server side. While this might not be as noticeable on a PC, for mobile
the server side. While this might not be as noticeable on a PC, for mobile
phones it sure can be a concern. It will be interesting to see some data on
it.
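
One way to get a first data point on whatever hardware is at hand (note that
the client usually only decompresses, which is the cheap direction):

    import time
    import zlib

    # Something shaped like typical page markup, repeated to ~100 KB.
    payload = b"<div class='comment'>lorem ipsum dolor sit amet</div>" * 2000

    t0 = time.perf_counter()
    compressed = zlib.compress(payload, 6)
    t1 = time.perf_counter()
    zlib.decompress(compressed)
    t2 = time.perf_counter()

    print("%d -> %d bytes" % (len(payload), len(compressed)))
    print("compress: %.2f ms, decompress: %.2f ms"
          % ((t1 - t0) * 1e3, (t2 - t1) * 1e3))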

------
nalbyuites
Wow. A Slashdot link on HN after many months. Telling.

