

Making HTTP Pipelining Usable on the Open Web - dedalus
http://tools.ietf.org/html/draft-nottingham-http-pipeline-00

======
jws
Summary:

People who wrote broken proxies and the administrators who deploy them have
poisoned the use of pipelining for everyone and the internet cries a salty
tear on every page load as a result. There appears to be no way to hold these
entities accountable, so the best we can do is work around them from the end
points…

1) Browsers should come with built-in pipelining functional tests and run
them on any network change, to determine whether a faulty proxy sits in their
path to the internet.
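A functional test of this kind could look roughly like the following sketch: pipeline two GET requests on one connection and check that the expected number of well-formed responses comes back. The function and class names are mine, not the draft's, and for simplicity it assumes header-only responses (e.g. `Content-Length: 0`).

```python
def probe_pipelining(sock, host, paths):
    """Send pipelined GETs for `paths` over `sock` and return True if one
    in-order 200 response came back per request.

    Assumes header-only responses (no bodies), so each response ends at
    the first blank line -- a simplification for this sketch.
    """
    request = b"".join(
        "GET {} HTTP/1.1\r\nHost: {}\r\n\r\n".format(p, host).encode("ascii")
        for p in paths
    )
    sock.sendall(request)

    data = b""
    # Read until we have seen one blank line (end of headers) per request.
    while data.count(b"\r\n\r\n") < len(paths):
        chunk = sock.recv(4096)
        if not chunk:  # connection closed early: a classic broken-proxy symptom
            break
        data += chunk

    # A proxy that drops, merges, or mangles pipelined responses will not
    # produce one status line per request.
    return data.count(b"HTTP/1.1 200") == len(paths)
```

A browser would run this against a known-good origin; a False result means pipelining should be disabled on this network.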

2) Servers should add an _Assoc-Req:_ header to responses that identifies the
request, e.g. "Assoc-Req: GET http://www.example.com/somereq?bar", so clients
can detect when proxies or servers have misordered the responses coming back
up the pipeline.
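The client-side check this header enables is simple: compare the header on each pipelined response against the request the client believes it issued. A sketch (the function name is mine, not the draft's):

```python
def response_matches_request(method, url, response_headers):
    """Return True if the response's Assoc-Req header identifies the
    given request.

    An absent header means the server doesn't implement the draft, so
    there is nothing to verify and we must assume the response is ours.
    """
    assoc = response_headers.get("Assoc-Req")
    if assoc is None:
        return True
    # The header value is the request method and the effective request URL.
    return assoc == "{} {}".format(method, url)
```

On a mismatch the client knows the pipeline was misordered and can close the connection and retry the outstanding requests without pipelining.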

3) Servers should use the _Content-MD5:_ header to detect corruption from
defective proxies and network admins. (There is a trailer version for large
dynamic content.)
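For reference, the Content-MD5 value is the base64 encoding of the raw 128-bit MD5 digest of the entity body (per RFC 1864, which HTTP/1.1 adopts), not the hex digest:

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    # base64 of the raw digest bytes -- not hexdigest() -- per RFC 1864.
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
```

A client recomputes this over the body it actually received; a mismatch means an intermediary corrupted the response in transit.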

4) There should be some HTML META and REL additions to suggest to the user
agents which resources are low latency and are good candidates for pipelining.
(You don't want to get your big dynamic query stuck in front of your flurry of
IMG and SCRIPT loads.) Edit: This one would be nice even in the absence of
broken proxies.

------
fredliu
I'm wondering what Google would say about this... Apparently their SPDY is
intended to do more, and do it "better", than what HTTP pipelining can offer.
But if these modifications to pipelining do make it into the standard and see
wide adoption (though that may take forever...), then SPDY would have fewer
advantages over pipelining, and thus be much less convincing.

