

CRIME - stalled
http://www.imperialviolet.org/2012/09/21/crime.html

======
honest0pinion
I've never been attracted to SPDY, as I have been doing HTTP 1.1 pipelining
for years, since before SPDY appeared. It's great for getting lots of data
from the same source, though I don't know about the way typical webpages are
these days, with so many connections to pull in offsite resources, many of
which offer the user zero value (they benefit the site owner only). That's a
problem with how people are designing websites, not something that's solved
with a transmission protocol.

SPDY did offer a couple of new things over 1.1.

SPDY wants to compress headers, but I asked "Why?"

What exactly are they planning to put in the headers to make them so big they
need to be compressed? Normal headers are not large. And headers can actually
be quite useful after pipelining a large amount of data, when you want to
carve it into pieces later. They are like record separators.

Another new addition was de-serialization. But I see nothing wrong with
receiving the data in the order I requested it. If the transmission is
interrupted, at least then I can restart where I left off.

I'm just not convinced compressing headers or de-serializing adds such a speed
boost as to justify a new protocol.

And now, with this exploit, I don't need to even think about SPDY anymore.
Except as a potential security hole.

Sorry, SPDY fans, but this is just my opinion as a dumb end user. Pipelining
worked just fine before SPDY. Alas, no one used it. Why? Maybe because it
didn't have a catchy acronym and a corporate brand behind it. I honestly
don't know, because it's efficient and makes perfect sense. I used it. And I
still do.

I will never use SPDY, especially not now. (It's on by default in Firefox but
you can disable it.)

~~~
agentS
> What exactly are they planning to put in the headers to make them so big
> they need to be compressed? Normal headers are not large.

Compressing headers is not for handling large headers; it's for handling
repeated headers. For example, with every request-response pair you waste a
few bytes saying "Accept-Ranges: bytes", a few bytes saying "Server:
Apache/2.4.1 (Unix) OpenSSL/1.0.0g", a few bytes saying "Connection:
Keep-Alive", etc. Further, when you compress headers, you save a few bytes
for repeatedly using the strings "Cache-Control:", "Content-Length:",
"Last-Modified:", etc.

If you take a look at the work done here:
<http://www.eecis.udel.edu/~amer/PEL/poc/pdf/SPDY-Fan.pdf> (the PDF is,
ironically, entitled SPDY-Fan after one of the authors), you will find that
the average HTTP request they studied is (reasonably) 629.8 bytes, whereas
SPDY (after some warm-up) transmits 35 bytes on average. Similarly, the
average HTTP response they studied is (reasonably) 437.7 bytes, whereas SPDY
(after some warm-up) transmits 63.8 bytes on average.
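
To make the warm-up effect concrete, here's a rough Python sketch. SPDY ran a
connection's headers through one shared zlib context, so strings repeated
across requests become cheap back-references; the header values below are
illustrative, not taken from the paper:

    import zlib

    # Shared compression context, one per connection as in SPDY: the
    # compressor's window persists across requests, so repeated header
    # strings become cheap back-references after the first occurrence.
    headers = (
        b"Accept-Ranges: bytes\r\n"
        b"Server: Apache/2.4.1 (Unix) OpenSSL/1.0.0g\r\n"
        b"Connection: Keep-Alive\r\n"
    )

    compressor = zlib.compressobj()
    for i in range(3):
        out = compressor.compress(headers)
        out += compressor.flush(zlib.Z_SYNC_FLUSH)  # emit without resetting state
        print(f"request {i + 1}: {len(headers)} raw -> {len(out)} compressed bytes")
    # The first block compresses modestly; later ones shrink sharply
    # because the window already contains the strings -- the "warm-up"
    # behind the numbers above.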

> Another new addition was de-serialization. But I see nothing wrong with
> receiving the data in the order I requested it. If the transmission is
> interrupted, at least then I can restart where I left off. [...] Alas, no
> one used it. Why? Maybe because it didn't have a catchy acronym and a
> corporate brand behind it.

Historically, browser support for pipelining has pretty much been limited to
Opera. Chrome has an implementation now and, I believe, so does Firefox. I
don't think either is enabled by default, although I could be mistaken.

For issues with pipelining, this document has a few hints:
<http://tools.ietf.org/html/draft-nottingham-http-pipeline-01#section-3>. As
well, pipelining cannot be used for non-idempotent requests, i.e.
non-(HEAD|GET) requests.

Realistically speaking, pipelining is essentially a hack to make a
synchronous protocol more efficient. If the creators of HTTP were to repeat
their efforts today, knowing how the protocol they were creating would be
used, I'd bet my left arm that they would create a multiplexed protocol.
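
For a sense of what "synchronous" means here, a minimal pipelining sketch
over a raw socket (host and paths are just placeholders): both requests go
out immediately, but the responses must still come back strictly in request
order, so one slow response stalls everything behind it.

    import socket

    # HTTP/1.1 pipelining sketch: write two GETs before reading anything.
    # Responses arrive strictly in request order (head-of-line blocking),
    # which is exactly what a multiplexed protocol avoids.
    host = "example.com"  # placeholder host
    requests = b"".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()
        for path in ("/", "/robots.txt")
    )

    with socket.create_connection((host, 80), timeout=5) as sock:
        sock.sendall(requests)  # both requests are on the wire already
        sock.settimeout(2)
        data = b""
        try:
            while chunk := sock.recv(4096):
                data += chunk
        except socket.timeout:
            pass  # server kept the connection open; we have what we need
    print(data[:200])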

> especially not now.

Are you also going to avoid all the sites that use an unpatched OpenSSL? CRIME
seems to affect those just as it affects SPDY.

~~~
honest0pinion
I agree with your comments.

Yes, I know about repeated headers, but as I explained, I've actually found a
use for them. I do pipelining every day. I don't think "Gee, these repeated
headers are really eating up my bandwidth." It's just not that big a deal.
The headers are like 10 lines or less. The headers for HN are like what, two
lines? Headers are nothing compared to all the useless cruft in HTML
nowadays.* That is a situation where I am definitely thinking "This is
unbelievable. 90% cruft and 10% content." Wanna "fix" something? Talk some
sense into web developers.

* Unless you start jamming them with larger and larger cookies.

Multiplexing is also my preference; I think you would be able to keep your
left arm. But I'd rather do it over UDP. HTTP is OK, we all have to use it,
but it's not the bee's knees. Uploading stuff via non-idempotent HTTP
requests needs to die. The web-browser-centric school of thought is not my
thing. SPDY is probably excellent for Google's purposes; it just isn't for
mine. I guess I am one of the few.

I think Firefox has actually had pipelining support for many years; they just
never turned it on by default. As I say, no one seemed to use pipelining,
neither servers nor clients. But they sure took notice of SPDY.

I'll bet the SPDY fan tested things by downloading stuff from multiple
domains, simulating the typical hodgepodge webpage. But that's not how I do
my "browsing". I block everything but the stuff I want; it all comes from one
source, one domain. I retrieve only the content and leave the cruft behind.
Efficiency.

I've never been thrilled with SSL (and guess what, I'm not a JavaScript fan
either... Netscape was the world's slowest and most unstable browser, but
they did what they had to do to make ecommerce possible; now we have JS, SSL
and cookies, what lovely things), but what choice do I have? And what am I
supposed to do if my bank has an unpatched SSL? Email them and tell them they
are idiots? I just assume there's always something wrong with SSL; if I just
wait a while, somehow I'm always right. Time to upgrade. Again. I've become a
NaCl fan, so maybe there is hope. There is life after SSL.

------
mjs
It seems like the solution is to special-case sensitive headers like
"Cookie", so that the compressed version doesn't leak information. But isn't
every byte equally worthy of protection? What if there's sensitive
information in the body?

(I get that the format of the Cookie header is probably much more
structured/predictable than usernames or account details that might appear in
the body, but surely a good cryptosystem would be 100% resistant to these
sorts of side-channel attacks, without any need to guess which headers need to
be handled more carefully.)

~~~
daeken
This attack depends on being able to control the requests made; cookies are
automatically added to the request, which makes them vulnerable. There aren't
many times when you'll have enough control over a portion of the body to make
non-cookie attacks viable.
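
The length-oracle mechanism is easy to demo in miniature. A toy sketch, with
an invented cookie value and plain zlib standing in for TLS-level
compression:

    import zlib

    # Toy CRIME-style oracle: attacker-controlled bytes are compressed
    # together with a secret cookie the browser attaches automatically.
    # A guess that extends the known prefix matches one more byte of the
    # secret, so it tends to compress shorter.
    SECRET = b"Cookie: session=7f3a9c"  # invented secret for illustration

    def oracle(attacker_bytes: bytes) -> int:
        """Compressed length the attacker can observe on the wire."""
        return len(zlib.compress(attacker_bytes + b"\r\n" + SECRET))

    known = b"Cookie: session="
    for ch in "0123456789abcdef":
        print(ch, oracle(known + ch.encode()))
    # '7' should be among the shortest. In practice the one-byte signal
    # is noisy (Huffman bit packing), so the real attack pads and retries.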

------
duskwuff
Interesting -- looks as though the speculation as to the nature of CRIME was
pretty much dead-on. It took years for anyone to realize that compression
introduced a vulnerability into HTTPS, but as soon as everyone knew there was
something there, the nature of the attack was immediately guessed.

(Although I'd be much more worried if the community had discovered a
_different_ vulnerability!)

~~~
__alexs
People have been thinking about this sort of attack since at least 2002:
<http://www.iacr.org/cryptodb/data/paper.php?pubkey=3091>

------
stalled
(Previous discussion: <http://news.ycombinator.com/item?id=4510829>)

