HTTP 2.0 Draft – SPDY Protocol (ietf.org)
68 points by ejpastorino 1838 days ago | 27 comments



FWIW, Opera 12.1 and Firefox 15+ support SPDY, so it's not a Google-only protocol. The only major browsers left are Safari and IE, which are both slow at adopting new tech.

Source:

http://caniuse.com/#search=spdy


I'm surprised that the security considerations section has no reference to CRIME [1], which led Google and Mozilla to turn off SPDY's header compression in Chrome and Firefox.

[1] https://docs.google.com/presentation/d/11eBmGiHbYcHR9gL5nDyZ...


I think this version is basically SPDY 3. SPDY 4 (draft spec: https://github.com/grmocg/SPDY-Specification/blob/gh-pages/d...) uses a completely new header compression algorithm which is not susceptible to the attack used in CRIME.
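
For anyone curious how CRIME works in principle, here's a toy Python sketch of the underlying leak (not the actual attack, which targets TLS/SPDY DEFLATE contexts and needs padding and repetition tricks to beat byte alignment; the cookie value here is invented):

    import string
    import zlib

    SECRET = "Cookie: session=s3cr3t"  # value the attacker wants to recover

    def oracle(attacker_part: str) -> int:
        # The attacker controls the URL; the victim's browser appends the
        # secret cookie, and both are compressed in one DEFLATE context.
        request = ("GET /?sid=" + attacker_part + " HTTP/1.1\r\n"
                   + SECRET + "\r\n").encode()
        return len(zlib.compress(request, 9))

    known = "session="  # prefix recovered so far
    # A guess that extends the match against the secret produces a longer
    # back-reference, hence (usually) a smaller compressed size. Ties are
    # possible at byte granularity; the real attack amplifies the signal.
    best = min(string.ascii_lowercase + string.digits,
               key=lambda c: oracle(known + c))
    print("next byte of the secret is probably:", best)  # likely 's'

Replacing the shared DEFLATE context, as the SPDY 4 draft does, removes exactly this attacker-chosen-plaintext-next-to-secret property.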


http://www.guypo.com/technical/not-as-spdy-as-you-thought/

The results show SPDY, on average, is only about 4.5% faster than plain HTTPS, and is in fact about 3.4% slower than unencrypted HTTP. This means SPDY doesn’t make a material difference for page load times, and more specifically does not offset the price of switching to SSL.


That test was done by proxying through a third party. I don't see how that's any more credible than the other results from Google he cites and dismisses at the top. The restriction of having SPDY disabled for third-party domains also taints the results. "It won't make the web faster because not everyone will use it" is a silly argument.

He also dismisses the benefits of having encryption by default on every connection; that alone is worth a 3% slowdown.


He didn't just proxy through a random "third party," he sent everything via Cotendo. If anything, that sped the whole thing up unfairly. Cotendo had (before Akamai killed them) a killer DSA (dynamic site acceleration) product that basically sucked a webpage in via the closest Cotendo datacenter, then transported it over their optimized and compressed backbone to the Cotendo datacenter closest to the user.

SPDY hits you a bit harder in setup costs in order to sling lots of requests back and forth faster. This is great for someone like Google, who serves 100% of the on-page content themselves. Anyone serving advertising or third-party content, or using a CDN for delivery, might want to do some extensive real-world testing before bothering to implement.


His benchmark indicates that SPDY doesn't magically make bad websites fast, which is to be expected. SPDY only helps sites that are already optimized enough that SSL's added latency becomes the limiting factor on performance. It also lets you skip some painful optimizations, like domain sharding, because you can multiplex more requests over one connection.


That absolutely horrible test is somehow posted to every story about SPDY. It uses a nonsensical methodology and adds nothing to the discussion.


This guy is the chief architect at Akamai and probably knows a thing or two more about HTTP in general than most HN regulars. But please feel free to improve upon his work by proposing a better methodology and publishing your results.


He uses a SPDY-enabled intermediary proxy with zero caching, calling out, per request, to origin HTTP/1.1 sites. There is no world where that setup makes any sense at all, and his position at Akamai doesn't change that basic reality. This is like saying a performance car is no faster than an economy sedan after forcing the former to drive behind the latter.

At an absolute minimum he should have enabled caching and then measured performance on the second run, both with and without SPDY. As someone who has set up a rig exactly like this, using a SPDY-enabled reverse proxy, I can say the benefits are enormous.
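
For reference, a rig like that is only a few lines of nginx. A hedged sketch (the hostnames, paths, and cache zone name are made up; it assumes nginx 1.3.15+ built with the SPDY module):

    # SPDY toward clients, cached HTTP/1.1 toward the origin.
    proxy_cache_path /var/cache/nginx keys_zone=edge:10m;

    server {
        listen 443 ssl spdy;
        server_name example.com;
        ssl_certificate     /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;

        location / {
            proxy_cache edge;                       # repeat hits come from cache
            proxy_pass  http://origin.example.com;  # only misses reach the origin
        }
    }

On the second run, most assets come out of the local cache over one multiplexed SPDY connection, which is exactly the case the benchmark never measured.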

Many, many absolutely terrible ideas have persisted on HN because of the appeal to authority (like listening to Digg's opinion on databases). It is not useful.


I mentioned this in another post, but calling Cotendo simply an "intermediary proxy" is disingenuous. It gave SPDY an unfair advantage, in my opinion.

But again, feel free to suggest better testing methodologies. I look forward to your results.


Link to an independent study that shows otherwise?


I don't know if this qualifies, but it's definitely not done by Google:

http://dev.opera.com/articles/view/opera-spdy-build/

It's not a "study", per se, but it does corroborate Google's claims.


Notice it says "on average". SPDY isn't a switch you flip to make things faster. Very often there is low-hanging fruit for performance improvement, especially in SSL: https://insouciant.org/tech/ssl-performance-case-study


If you're using Chrome, type this into your address bar:

chrome://net-internals/#spdy


Oh man, this one and its "clear host cache" button will come in handy in the future:

chrome://net-internals/#dns


While I appreciate SPDY, what's the point of a draft if adoption is so far not widespread?

Google's documentation seemed quite sufficient to implement from, and until we have a proper reference implementation, this seems a little half-cocked.

Maybe I misunderstand the point of the IETF draft?


Firefox, Chrome, and Opera support SPDY. Apache, nginx, and Node.js support it. Large Google sites (Gmail) support SPDY, as does Twitter, and Facebook is working on it (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/...).

I don't know what qualifies as widespread to you, but this is at least a great start. And there are plenty of good implementations to reference.



I guess this is my naïveté showing:

1) What is the advantage of SPDY over HTTP/1.1, besides a possible (and debated?) speed-up from sending assets without a request for them?

2) Is it as easy to debug with tools like curl, wget, and telnet?


1) A few things, notably server push.

2) curl and wget will probably eventually add support. telnet not so much, but then again you can't really debug HTTPS or even HTTP/1.1 (with chunked encoding) with telnet.


telnet is useful for some things, but it does have its limits. Often I'll compose my message and then paste it in (if I want to do something specific, or I have a header dump from Chrome or something and don't want to convert it to a curl command).
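
That compose-it-first workflow is also easy to script; a minimal sketch in Python (example.com and the request are placeholders):

    import socket

    # Compose the full request up front, then fire it at the server and
    # dump whatever comes back -- the scripted version of driving telnet
    # by hand.
    request = ("GET / HTTP/1.1\r\n"
               "Host: example.com\r\n"
               "Connection: close\r\n"
               "\r\n")

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(request.encode())
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            print(chunk.decode(errors="replace"), end="")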

openssl s_client is useful for debugging SSL issues. It shows just about all the info you could want about the certs and then leaves you an open pipe to the server, which I treat like telnet above.
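
And if you'd rather script that open pipe than drive s_client interactively, Python's ssl module gives you roughly the same thing (a sketch; example.com is a stand-in):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            # Handshake details, roughly what s_client prints up front.
            print(tls.version(), tls.cipher())
            # ...then the open pipe you treat like telnet.
            tls.sendall(b"GET / HTTP/1.1\r\n"
                        b"Host: example.com\r\n"
                        b"Connection: close\r\n\r\n")
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                print(chunk.decode(errors="replace"), end="")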

(Not really trying to argue. As things grow up more tools will support them and engineers like me will learn to adapt and debug them as they do the current generation of protocols.)


What's the difference between server push in SPDY and a multipart message in HTTP?


There should be a disclaimer that you will only see performance gains if your site makes a lot of HTTP connections. People expect some magic to happen once they install SPDY, which is just not the case. But if your site is heavily on SSL, then you have nothing to lose by installing SPDY, though not much to gain either.
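
To make "nothing to lose" concrete: on an nginx vhost that already terminates SSL, turning SPDY on was essentially a one-word change (a sketch; assumes nginx 1.3.15+ built with the SPDY module, and the cert paths are placeholders):

    server {
        listen 443 ssl spdy;   # was: listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/nginx/example.crt;
        ssl_certificate_key /etc/nginx/example.key;
        # ...rest of the vhost unchanged
    }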


Is the SPDY protocol done? Have they submitted it for the HTTP 2.0 standard, or will it remain separate?


The point of this document is that SPDY will be the basis for HTTP 2.0. Until HTTP 2.0 is finished, I would expect Google and others to keep shipping SPDY implementations.


Bring it. We're eagerly waiting for Heroku to add SPDY support.




