The results show SPDY, on average, is only about 4.5% faster than plain HTTPS, and is in fact about 3.4% slower than unencrypted HTTP. This means SPDY doesn’t make a material difference for page load times, and more specifically does not offset the price of switching to SSL.
That test was done by proxying through a third party. I don't see how that's any more credible than the other results from Google that he cites and dismisses at the top. The restriction of having SPDY disabled for third-party domains also taints the results. "It won't make the web faster because not everyone will use it" is a silly argument.
He also dismisses the benefit of having encryption by default on every connection; that alone is worth a 3% slowdown.
He didn't just proxy through a random "third party"; he sent everything via Cotendo. If anything, that sped the whole thing up unfairly. Before Akamai killed them, Cotendo had a killer dynamic site acceleration (DSA) product that basically pulled a webpage in via the closest Cotendo datacenter and then transported it over their optimized, compressed backbone to the Cotendo datacenter closest to the user.
SPDY hits you a bit harder in setup costs in exchange for being able to sling lots of requests back and forth faster. That's great for someone like Google, who serves 100% of the on-page content themselves. Anyone with advertising, third-party content, or a CDN for delivery might want to do some extensive real-world testing before bothering to implement it.
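The setup-cost-versus-multiplexing tradeoff can be sketched with a back-of-the-envelope latency model. All the numbers here (RTT, request count, connection counts, handshake costs) are illustrative assumptions, not measurements from the benchmark:

```python
# Toy latency model: HTTP/1.1 vs SPDY for a page of N small assets.
# Every constant below is an illustrative assumption.
import math

RTT = 0.05          # assumed 50 ms round trip
N_REQUESTS = 40     # assumed number of small on-page assets

def http11_time(parallel_conns=6):
    # Each of the browser's parallel connections pays TCP setup (~1 RTT),
    # then serves its share of requests serially at ~1 RTT each
    # (no pipelining).
    per_conn = math.ceil(N_REQUESTS / parallel_conns)
    return RTT + per_conn * RTT

def spdy_time():
    # One connection pays TCP setup (1 RTT) plus a TLS handshake (~2 RTTs),
    # then multiplexes all requests in roughly one more round trip.
    return 3 * RTT + RTT

print(f"HTTP/1.1: {http11_time() * 1000:.0f} ms")
print(f"SPDY:     {spdy_time() * 1000:.0f} ms")
```

Under these assumptions the heavier handshake is quickly amortized, but only when the page's assets all come from the one multiplexed connection, which is exactly the condition that third-party content and CDNs break.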
His benchmark indicates that SPDY doesn't magically make bad websites fast, which is to be expected. SPDY only speeds up sites that are already optimized enough that per-request latency, rather than page weight, is the limiting factor on performance. It also lets you skip some painful optimizations, like domain sharding and asset concatenation, because you can multiplex more requests over a single connection.
This guy is the chief architect at Akamai and probably knows a thing or two more about HTTP than most HN regulars. But please feel free to improve upon his work by proposing a better methodology and publishing your results.
He uses a SPDY-enabled intermediary proxy with zero caching, calling out per request to origin HTTP/1.1 sites. There is no world in which that setup makes any sense, and his position at Akamai doesn't change that basic reality. This is like claiming a performance car is no faster than an economy sedan by forcing the former to drive behind the latter.
At an absolute minimum he should have enabled caching and then measured performance on the second run, both with and without SPDY. As someone who has set up a rig exactly like this, using a SPDY-enabled reverse proxy, I can say the benefits are enormous.
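A rig like that can be sketched in nginx, assuming a build with the SPDY module (`--with-http_spdy_module`); the paths, cache sizes, and the `origin-backend` upstream name are placeholders. The point of the caching directives is exactly the criticism above: without them, every request still pays a full trip to the HTTP/1.1 origin.

```nginx
# Sketch: caching, SPDY-terminating reverse proxy (placeholder values).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:64m
                 max_size=1g inactive=60m;

server {
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Serve repeat requests from the edge cache instead of the origin.
        proxy_cache edge;
        proxy_cache_valid 200 301 302 10m;

        # The origin still speaks plain HTTP/1.1; SPDY terminates here.
        proxy_pass http://origin-backend;
        proxy_set_header Host $host;
    }
}
```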
Many, many absolutely terrible ideas have persisted on HN because of appeals to authority (like listening to Digg's opinion on databases). It is not useful.