I mean, SPDY is here and almost every server can already handle it (where "can" != "is configured to by default"). If Google pulls features from HTTP/2.0 into SPDY 1.x in the next couple of months, what would be the benefit of anyone adopting HTTP/2.0?
A clear case where a vendor-specific implementation that works, and appears to be perfectly backwards compatible, performed so well that it might actually become an official standard.
Now if only we could have SSL with name-based virtual hosts, or much wider use of IPv6, so that SPDY would actually be useful to a wide range of server administrators.
We can: it's called SNI (http://en.wikipedia.org/wiki/Server_Name_Indication). Unfortunately browser support isn't quite there; recent versions of the major desktop browsers do support it, but e.g. the stock Android browser does not (because Apache HttpClient does not).
(Regardless, as I mentioned above, name-based vhosts work just fine with HTTPS, using SNI. Much easier to fill in the gaps in browser support for SNI than to get everyone to adopt SRV records.)
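For the curious, here's a minimal sketch of what SNI looks like from the client side, using Python's standard `ssl` module. The hostname is a placeholder, and this is just an illustration of the mechanism, not any browser's actual implementation:

```python
import socket
import ssl


def fetch_cert_subject(hostname: str) -> tuple:
    """Connect over TLS, sending `hostname` as SNI so a name-based
    virtual host can present the matching certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as raw:
        # server_hostname= is what puts the SNI extension in the
        # ClientHello, before any certificate is sent back.
        with context.wrap_socket(raw, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]


# The stdlib reports whether its OpenSSL build supports SNI at all:
print(ssl.HAS_SNI)
```

Calling e.g. `fetch_cert_subject("example.com")` would return the subject of whichever certificate the server selected for that name, which is the whole point: the server learns the desired hostname before it has to commit to a certificate.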
I sure as hell could appreciate a new HTTP standard brought to the field by professionals (maybe even the IETF!) working together and reasoning about things publicly, not by Google hijacking the process, dumping shit out, saying "that's what we have implemented; if you want to compete with us, you will have to take this package", and then after the fact releasing some docs and claiming it's open.
You know, SPDY just smells bad wherever you look. I'd like the new HTTP protocol to be something you can trust.
SPDY: not stateless, not plaintext, and it includes complecting factors like prioritization and multiplexing. I say that is several orders of magnitude of increased complexity and uncertainty baked into a protocol.
Result: much less than an order of magnitude in improved response time.
I say SPDY comes at such a cost that it cannot be considered worth it. Especially when you consider the added cost of handing control of one of the internet's main protocols to a single company, to be developed behind closed doors, as Google is already doing with SPDY.
I'm flabbergasted that people aren't reacting to this. Had Microsoft been doing anything like this, people would be calling their senators to start investigations. But with Google it is evidently all cool.
You'd think people would remember the cost of handing the internet over to one single company, when the results of the last time we did that are still plaguing us today (IE).
As for source, this one should do: http://www.theverge.com/2012/5/3/2995881/google-spdy-speed-t...
This is a good point. For instance, the developers of SPDY don't actually compare it to HTTP pipelining. They don't measure the effect of head-of-line blocking; they just assume it is a major problem. They don't consider that the protocol performs worse than HTTP over satellite and similar links. Neither the design choices nor the specific details have been vetted or backed up by real deliberation.
There are tons of ways a committee of experts could improve SPDY, but it looks like Google is just going to show up with their draft RFC and demand a rubber stamp.
This should really be enough of a reason not to trust them with the task of developing this protocol.
- doesn't include DNS lookup times
- doesn't include packet loss
- doesn't compare to pipelining
The first two mean the claimed speedup isn't measuring the whole real time. The third is just amazing... the stock Android browser even uses pipelining. Basically, pipelining gives the same benefits as SPDY, and they really, really don't want to admit this.
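To make the comparison concrete, here is a rough sketch in Python of what HTTP/1.1 pipelining does (the host is a placeholder): both requests go out in a single write before any response is read, which is the same round-trip saving SPDY's multiplexing is usually credited with. The difference is that pipelined responses must come back in request order, which is where head-of-line blocking comes from.

```python
import socket

HOST = "example.com"  # placeholder host for illustration

# Two requests serialized back-to-back. A pipelining client writes both
# before reading the first response, saving a round trip; the server is
# required to answer them in order (HTTP/1.1's head-of-line blocking).
PIPELINED = (
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /about HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode("ascii")


def fetch_pipelined(host: str = HOST, port: int = 80) -> bytes:
    """Send both requests in one write, then drain the socket."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(PIPELINED)  # one write, two queued requests
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)


print(PIPELINED.count(b"HTTP/1.1"))  # → 2 request lines queued
```

SPDY's multiplexing differs only in that responses can be interleaved and returned out of order over the same connection, which is exactly the delta a fair benchmark against pipelining would have to isolate.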
This is what I'm talking about... every one of Microsoft's changes he identifies as b-h is a positive change. That he can't see it is, I guess, a testament to his being young and inexperienced.
Why is it "preemptive"? It seems more like a nonpreemptive push, right?
Does this mean keeping a background tab open uses a remote server's resources indefinitely? How can I, as the server dev, prevent an unintentional DDoS?
So in the case you quoted, the server would also be able to explicitly tell the browser to start a new connection later. (It's not just a browser-to-server signal.)
Generally, most HTTP 1.1 (keepalive-aware) servers have a default timeout for those "persistent" connections so this isn't actually a new problem specific to SPDY.
(Aside: simply leaving an idle TCP connection open for later re-use doesn't necessarily mean that idle users will "DDoS" a server. Depending on the server software and OS, the cost per socket is low enough that a pile of idle connections isn't actually a problem until you hit port and file-descriptor limits, and that is a problem plenty of other HTTP/TCP applications already handle well with timeouts.)