

Working Group assembling to discuss HTTP/2.0 - metabrew
http://lists.w3.org/Archives/Public/ietf-http-wg/2012JanMar/0098.html

======
kibwen
While it would be exciting to see something as progressive as SPDY become
codified as a standard, keep in mind the deliberately conservative nature of
standards development, as seen in the responses to that message:

    
    
      On 24/01/2012, at 3:50 PM, James Snell wrote:
      > +1... would love to see that work move forward within this group. I do
      > have concerns about keeping the scope focused, however. Even tho the
      > door would be opened to the possibility of new features being
      > introduced, there is obvious danger in opening those doors too wide. I
      > strongly feel that things such as the introduction of new request
      > methods and new header fields, unless there is clear and irrefutable
      > evidence of their general utility within the core of the spec, should
      > continue to be pursued as they are today -- within separate I-D's as
      > extensions to the core protocol. The charter should make it absolutely
      > clear that the goal is an incremental evolution of HTTP/1.1 rather
      > than an opportunity for radical changes.
    
      Thanks, James.
      Pretty much everyone I've spoken to about this has raised the same concern.
      I agree it's going to be a tightrope walk, but I think there's a healthy 
      amount of concern about this in the community, which will help guide development.
      Cheers,
      Mark Nottingham

------
jgrahamc
Whilst SPDY is nice, the compression possible using SDCH is a game changer.
Given the amount of disk space on today's machines, it should be possible to
send quite large dictionaries to browsers.

For example, I've been experimenting with the Bentley/McIlroy compression
algorithm and I can easily (no effort at all) compress the BBC News web site
by 90% (i.e. to 10% of its original size).
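
SDCH itself negotiates VCDIFF deltas against a dictionary the browser has
already downloaded, but the core idea -- both ends holding bytes that never
travel over the wire -- is easy to try with zlib's preset-dictionary support.
A minimal sketch in Python (the dictionary and page snippets are made up for
illustration; a real SDCH dictionary would be far larger):

    import zlib

    # A toy "shared dictionary": boilerplate both client and server already
    # hold, so it never needs to be sent over the wire.
    shared_dict = b'<div class="story"><a href="http://www.bbc.co.uk/news/'

    page = (b'<div class="story"><a href="http://www.bbc.co.uk/news/world-1234">'
            b'Headline</a></div>')

    # Plain deflate vs. deflate primed with the shared dictionary.
    plain = zlib.compress(page)
    co = zlib.compressobj(zdict=shared_dict)
    primed = co.compress(page) + co.flush()
    print(len(page), len(plain), len(primed))  # primed comes out smallest

    # Decompression only works if the receiver holds the same dictionary.
    dec = zlib.decompressobj(zdict=shared_dict)
    assert dec.decompress(primed) == page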

~~~
wmf
For some reason SDCH gets no respect. Somebody could become a hero by writing
an auto-SDCHing reverse proxy.
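
For a sense of the shape of that proxy, here is a rough sketch -- not SDCH
itself (a real one would emit VCDIFF and negotiate via the Avail-Dictionary
and Get-Dictionary headers), just the data path, with zlib standing in for
the encoder and the upstream address, port, and encoding token all assumed
for illustration:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request
    import zlib

    UPSTREAM = "http://127.0.0.1:8000"    # assumed origin server
    SHARED_DICT = b"<html><head><title>"  # toy dictionary both ends hold

    class DictCompressingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # Fetch the response from the origin...
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()
            # ...and re-encode it against the shared dictionary. A real
            # SDCH proxy would also check that the client advertised the
            # dictionary; this only demonstrates the re-encoding step.
            co = zlib.compressobj(zdict=SHARED_DICT)
            encoded = co.compress(body) + co.flush()
            self.send_response(200)
            self.send_header("Content-Encoding", "x-shared-deflate")  # made-up token
            self.send_header("Content-Length", str(len(encoded)))
            self.end_headers()
            self.wfile.write(encoded)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), DictCompressingProxy).serve_forever()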

~~~
gojomo
The stuff that's big (rich media) already has its own native compression. The
stuff that's relatively small (text) also already compresses pretty well
without shared dictionaries.

I think that's the reason: it's not that big a win over the total traffic of a
modern browsing session. For example, I doubt jgrahamc's experiment yields a
90% savings over an entire browsing session with BBC News once images are
included and gzip is already enabled on the HTML/CSS/JS. I'd guess closer to
10%.

_Updated after taking a quick look at the BBC News homepage in Firebug's net
panel:_

The BBC News homepage has about 150KB of HTML/CSS… but they haven't even
bothered to turn on gzip, which might bring that down to 30KB. It has about
230KB of images, which wouldn't be helped by a shared dictionary.

If they turned on gzip, text would be about 30/(30+230) ≈ 12% of the
bandwidth. Even 99.99% compression (a dictionary of exactly today's content
already on the client!) could only shrink the homepage download by about 12%.
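
A back-of-the-envelope version of that calculation, using the rough Firebug
numbers above:

    # Rough sizes from the Firebug net panel, in KB.
    text_gzipped = 30   # ~150KB of HTML/CSS, assuming gzip gets roughly 5:1
    images = 230        # natively compressed; a shared dictionary won't help

    total = text_gzipped + images
    print(f"text share of page weight: {text_gzipped / total:.0%}")  # ~12%

    # Even a perfect dictionary (text -> 0 bytes) leaves all the images:
    print(f"best case: {images / total:.0%} of the gzipped total remains")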

------
metabrew
I'm looking forward to the inevitable discussions about cross-origin requests,
same-origin policy and all that jazz.

Having implemented SPDY/2 myself, I think they could do a lot worse than just
ratifying SPDY (v3 by now..) and perhaps clarifying some of the CORS stuff.

~~~
alexchamberlain
I love the idea of SPDY, but I would really like to see an open source
(transparent) proxy; then I could route all my internet traffic through the
cloud.

~~~
mburns
Not quite what you want, but here's a good DefCon talk, with code, about
making a 'real' session layer:

<http://www.youtube.com/watch?v=aakzkrl-34g>

------
freehunter
Why is it that every mailing list I read about updates to a legacy design
(even classic ones that have been successfully implemented and no one would
change back) always has that guy (or guys) who says "no, this is stupid; make
your own widget if you want new stuff, floppy disks are a good enough
standard"? Poul-Henning Kamp is that guy here.

The guy who coined the term "bikeshed color" is now arguing over the name of
the next iteration of HTTP, and claiming that since it can't be done in a year
it shouldn't be done at all. I understand the need for conservatism, but
outright pooh-poohing is never effective without _major_ concerns.

------
cultureulterior
I'd like HTTP over UDP for javascript applications.

~~~
keeperofdakeys
Frankly, your sentence doesn't make much sense. HTTP is a way of sending text
and data, and that data needs to arrive, arrive in order, and arrive
uncorrupted (exactly the guarantees TCP provides), so UDP would never be used.
Also, the fact that a new HTTP protocol is in use has no effect on javascript,
since the protocol is essentially abstracted away.

As for your real question, when javascript will get UDP socket support, that
is an interesting one. The feature would definitely be handy for multiplayer
games and video/audio streaming, so I'm sure something will come down the pipe
eventually. There could be security worries, though; for example, it could be
used to build a webpage that performs a DDoS (not that such pages can't be
built already).
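
For concreteness, this is the kind of raw datagram socket that browsers don't
expose to javascript today; a quick loopback sketch in Python, with everything
about it fire-and-forget:

    import socket

    # Bind a UDP receiver on the loopback interface (port 0 = pick any).
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))
    addr = recv.getsockname()

    # Send it a datagram: no handshake, no ordering, no retransmission.
    # If this packet were dropped or reordered in transit, nothing at this
    # layer would notice or recover; that bookkeeping is exactly what TCP
    # adds, and why HTTP assumes a TCP-like transport underneath.
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.sendto(b"hello over UDP", addr)

    data, sender = recv.recvfrom(2048)
    print(data, sender)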

