I've never been attracted to SPDY, as I'd been doing HTTP 1.1 pipelining for years before SPDY appeared. It's great for getting lots of data from the same source, though I don't know about the way typical webpages are these days, with so many connections to pull in offsite resources, many of which offer the user zero value (they benefit the site owner only). That's a problem with how people are designing websites, not something that's solved with a transmission protocol.
SPDY did offer a couple of new things over 1.1.
SPDY wants to compress headers, but I asked "Why?"
What exactly are they planning to put in the headers to make them so big they need to be compressed? Normal headers are not large. And headers can actually be quite useful after pipelining a large amount of data, when you want to carve it into pieces later. They are like record separators.
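The "record separator" idea can be sketched in a few lines. The helper name and sample byte stream below are mine, made up for illustration (real code would also have to handle chunked encoding), but they show how each response's own headers, via Content-Length, tell you where one record ends and the next begins:

```python
# Sketch: carving a pipelined HTTP/1.1 byte stream into pieces, using
# each response's headers as the "record separator". Hypothetical helper;
# a real parser would also handle chunked transfer encoding.

def split_pipelined(stream: bytes):
    """Split back-to-back HTTP/1.1 responses into (header, body) pairs."""
    responses = []
    while stream:
        head, _, rest = stream.partition(b"\r\n\r\n")
        fields = dict(
            line.split(b":", 1)
            for line in head.split(b"\r\n")[1:]  # skip the status line
        )
        length = int(fields[b"Content-Length"])
        responses.append((head, rest[:length]))
        stream = rest[length:]
    return responses

# Two responses exactly as they arrive, in order, on one keep-alive socket.
stream = (
    b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
    b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nbye"
)
print([body for _, body in split_pipelined(stream)])  # [b'hello', b'bye']
```

Because responses come back in the order they were requested, this is all the framing a pipelining client needs.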
Another new addition was de-serialization. But I see nothing wrong with receiving the data in the order I requested it. If the transmission is interrupted, at least then I can restart where I left off.
I'm just not convinced compressing headers or de-serializing adds such a speed boost as to justify a new protocol.
And now, with this exploit, I don't need to even think about SPDY anymore. Except as a potential security hole.
Sorry SPDY fans, but this is just my opinion as a dumb end user. Pipelining worked just fine before SPDY. Alas, no one used it. Why? Maybe because it didn't have a catchy acronym and a corporate brand behind it. I honestly don't know. Because it's efficient and makes perfect sense. I used it. And I still do.
I will never use SPDY, especially not now. (It's on by default in Firefox but you can disable it.)
> What exactly are they planning to put in the headers to make them so big they need to be compressed? Normal headers are not large.
Compressing headers is not for handling large headers; it is for handling repeated headers. For example, with every request-response pair, you waste a few bytes saying "Accept-Ranges: bytes", a few bytes saying "Server: Apache/2.4.1 (Unix) OpenSSL/1.0.0g", a few bytes saying "Connection: Keep-Alive", etc. Further, when you compress headers, you save a few bytes for repeatedly using the strings "Cache-Control:", "Content-Length:", "Last-Modified:", etc.
If you take a look at the work done here: http://www.eecis.udel.edu/~amer/PEL/poc/pdf/SPDY-Fan.pdf (fittingly, the pdf is entitled SPDY-Fan, after one of the authors), you will find that the average HTTP request they studied is (reasonably) 629.8 bytes, whereas SPDY (after some warm-up) transmits 35 bytes on average. Similarly, the average HTTP response they studied is (reasonably) 437.7 bytes, whereas SPDY (after some warm-up) transmits 63.8 bytes on average.
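The repeated-header savings are easy to reproduce with a single shared zlib stream, which is roughly how SPDY's header compression worked (SPDY also seeded the stream with a preset dictionary of common header names, omitted here). The header block below is made up; the point is that the second transmission of the same headers shrinks to a handful of bytes:

```python
import zlib

# Made-up header block, repeated on every request of a connection.
headers = (
    b"Host: news.ycombinator.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
    b"Accept: text/html\r\n"
    b"Accept-Encoding: gzip\r\n"
    b"Connection: keep-alive\r\n"
)

# One compression stream for the whole connection: the second request's
# headers are encoded almost entirely as back-references to the first.
comp = zlib.compressobj()
first = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)
second = comp.compress(headers) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(headers), len(first), len(second))
```

The first request pays close to full price; every later request carrying the same headers costs a fraction of it, which is where the warm-up effect in the numbers above comes from.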
> Another new addition was de-serialization. But I see nothing wrong with receiving the data in the order I requested it. If the transmission is interrupted, at least then I can restart where I left off. [...] Alas, no one used it. Why? Maybe because it didn't have a catchy acronym and a corporate brand behind it.
Historically, browser support for pipelining has pretty much been limited to Opera. Chrome has an implementation now, and I believe Firefox does too. I don't think either is enabled by default, although I could be mistaken.
Realistically speaking, pipelining is essentially a hack to make a synchronous protocol more efficient. If the creators of HTTP were to repeat their efforts today, knowing how the protocol they are creating is going to be used, I'd bet my left arm that they would create a multiplexed protocol.
> especially not now.
Are you also going to avoid all the sites that use an unpatched OpenSSL? CRIME seems to affect those just as it affects SPDY.
Yes, I know about repeated headers, but as I explained, I've actually found a use for them. I do pipelining every day. I don't think "Gee, these repeated headers are really eating up my bandwidth." It's just not that big a deal. The headers are like 10 lines or less. The headers for HN are like what, two lines? Headers are nothing compared to all the useless cruft in HTML nowadays.* That is a situation where I am definitely thinking "This is unbelievable. 90% cruft and 10% content." Wanna "fix" something? Talk some sense into web developers.
* Unless you start jamming them with larger and larger cookies.
Multiplexing is also my preference. I think you would be able to keep your left arm. But I'd rather do it over UDP. HTTP is OK, we all have to use it, but it's not the bee's knees. Uploading stuff via non-idempotent HTTP requests needs to die. The web browser centricity school of thought is not my thing. SPDY is probably excellent for Google's purposes, it just isn't for mine. I guess I am one of the few.
I think Firefox has actually had pipelining support for many years; they just never turned it on by default. As I say, no one seemed to use pipelining, neither servers nor clients. But they sure took notice of SPDY.
I'll bet the SPDY fan tested things by downloading stuff from multiple domains, simulating the typical hodgepodge webpage. But that's not how I do my "browsing". I block everything but the stuff I want; it all comes from one source, one domain. I retrieve only the content and leave the cruft behind. Efficiency.
As I said, compression is not the only condition that needs to be met for this attack to work. The browser needs to make requests to other sites, automatically, e.g. via an img src tag.
As for HTTPS, nothing requires me to seek out sites that use HTTPS, nor to trust the HTTPS sites I'm forced to use. And nothing requires me to send headers asking for compression support when using HTTP. But with SPDY, everything is compressed, all the time. It's all on by default. This is by design.
CRIME relies on lots of things, one of which is a browser that makes requests for offsite resources automatically. Sure, Chrome does that, but not all browsers do, and certainly not simple HTTP clients unless you're spidering.
It's pretty easy to not be vulnerable to CRIME, but not when you use SPDY, since SPDY compresses everything. The protocol takes away the control that the HTTP 1.1 user had through specifying desired options in headers, such as whether or not to use compression. And compression was never the default in HTTP/HTTPS.
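A toy sketch of the CRIME idea with zlib (the cookie value, path, and helper below are all made up): when attacker-controlled text is compressed in the same stream as a secret cookie, the length of the compressed output alone reveals whether a guess matched, because a correct guess gets replaced by a cheap back-reference instead of fresh literals:

```python
import zlib

# Hypothetical secret the browser attaches to every request.
SECRET_HEADER = b"Cookie: session=s3cr3t-t0ken\r\n"

def wire_length(attacker_path: bytes) -> int:
    # The attacker controls the request path (e.g. via an img src tag);
    # the browser appends the secret cookie header automatically, and
    # the whole request is compressed before hitting the wire.
    request = b"GET /" + attacker_path + b" HTTP/1.1\r\n" + SECRET_HEADER
    return len(zlib.compress(request))

right = wire_length(b"session=s3cr3t-t0ken")  # guess matches the cookie
wrong = wire_length(b"session=qwzjvpqmxydh")  # guess does not match
print(right, wrong)
```

The matching guess produces the shorter ciphertext, so an attacker who can observe lengths on the wire can recover the cookie a few bytes at a time. This is exactly why compressing attacker-influenced data together with secrets is dangerous, whatever the protocol.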
I wish that I liked SPDY. It's easier to go along with the crowd. But I just don't. IMO, it's something that should stay internal to Google and not be pushed on the rest of the web. Exactly for reasons like this exploit. It's way too easy to screw this stuff up.
It seems like the solution is to special case sensitive headers like "Cookie", so that the compressed version doesn't leak information. But isn't every byte equally worthy of protection? What if there's sensitive information in the body?
(I get that the format of the Cookie header is probably much more structured/predictable than usernames or account details that might appear in the body, but surely a good cryptosystem would be 100% resistant to these sorts of side-channel attacks, without any need to guess which headers need to be handled more carefully.)
This attack depends on being able to control the requests made; cookies are automatically added to the request, which makes them vulnerable. There aren't many times when you'll have enough control over a portion of the body to make non-cookie attacks viable.
SPDY header compression only compresses the headers. Bytes in the body might be vulnerable to a similar attack if the server is doing gzip compression, and that would be a fun extension to the attack, but headers and body are never compressed together.
Interesting -- looks as though the speculation as to the nature of CRIME was pretty much dead-on. Took years for anyone to realize that compression introduced a vulnerability into HTTPS, but as soon as everyone knew that there was something there, the nature of the attack was immediately guessed.
(Although I'd be much more worried if the community had discovered a different vulnerability!)