Google disables compression for OpenSSL in Chrome - SSL exploit coming? (chromiumcodereview.appspot.com)
250 points by EwanToo on Sept 12, 2012 | hide | past | web | favorite | 132 comments




Is my understanding correct that the attack described there requires all of 1. cookies associated with the target, 2. the ability to monitor the length of requests to the target, and 3. the ability to send requests to the target (e.g. JavaScript injection, malicious website)?

I guess this is actually an instance where those Hollywood "guess the password one character at a time" animations would make sense.


You need three things:

1. To know the format of cookies used for the web site you are targeting. Specifically, whatever cookie contains authentication.

2. The ability to inject a request to the target web server while someone is connected (such as with JavaScript, an XSS, or a plug-in).

3. The ability to monitor the SSL connection as it is transported across TCP.
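A toy model of the resulting length oracle (plain zlib, not real TLS records; the cookie name and value here are made up) shows why requirement 3 gives the attacker a signal:

```python
import zlib

# Hypothetical secret the browser attaches to every request.
SECRET_HEADER = b"Cookie: session=7a91f3c2d4"

def observed_length(attacker_data: bytes) -> int:
    # The eavesdropper sees only the length of the compressed record,
    # which mixes the secret header with attacker-controlled content.
    record = SECRET_HEADER + b"\r\n" + attacker_data
    return len(zlib.compress(record, 9))

# A guess that matches a longer prefix of the cookie compresses better,
# because deflate back-references the repeated substring.
good = observed_length(b"Cookie: session=7a91")
bad = observed_length(b"Cookie: session=ZQXW")
assert good < bad
```

Repeating this one candidate byte at a time recovers the cookie, which is why requirements 2 and 3 together are enough.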

Would have been fun if they called this the CPE1704TKS attack instead of CRIME: http://www.youtube.com/watch?v=NHWjlCaIrQo


You don't even need (3) if you can do enough trials and have an accurate clock. Compression, by its very nature, leaks timing side channels.


Because even compressed SSL segments are padded, having only timing data will significantly complicate things. You'll be trying to measure the variance of deflate compressing different blocks with single-byte differences. You better have an extremely low jitter connection!


Low jitter is not a requirement for timing side-channel attacks. As long as the jitter is uncorrelated with the timing difference that you're after, you can filter the signal from the noise.
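A quick simulation (with made-up numbers) of why averaging defeats uncorrelated jitter: a 0.5 µs timing difference buried under 50 µs of Gaussian noise becomes visible given enough samples.

```python
import random
import statistics

random.seed(0)
TRUE_DELTA = 0.5   # extra microseconds taken by the "slow" code path
JITTER = 50.0      # network jitter, two orders of magnitude larger

def measure(slow: bool) -> float:
    # One noisy round-trip measurement.
    return (TRUE_DELTA if slow else 0.0) + random.gauss(0.0, JITTER)

N = 1_000_000
fast_mean = statistics.fmean(measure(False) for _ in range(N))
slow_mean = statistics.fmean(measure(True) for _ in range(N))

# The jitter averages out (std of each mean is ~ 50/sqrt(N) = 0.05 us),
# leaving the 0.5 us signal clearly visible in the difference of means.
assert slow_mean - fast_mean > 0.2
```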


> You better have an extremely low jitter connection!

Like AWS. (Cite: http://dl.acm.org/citation.cfm?id=1653687.)


Okay, but people don't run web browsers on AWS.



Doesn't do HTTPS; also, I am skeptical of cross-site JavaScript shenanigans happening on the server end of Silk.


People do run HTTPS clients on AWS, though. Web APIs tend to [1] require exactly that.

[1] Lala people never send sensitive data over plain HTTP I'M NOT HEARING YOU.


Sure, they might use HTTPS, but how often do they load arbitrary pages with uncontrolled javascript?


Pretty much never. How often do they make requests that are at least partially attacker-controlled, though? (Note that variants on https://www.someapi.com/...?username=foo contain attacker-controlled data...)


I doubt you'd even have to break SSL using that attack. Lots of people do SSL termination at the load balancer.


To expand on your second point, if you have 3, then in many cases you also have the ability to intercept and modify non-encrypted traffic. Meaning, if the victim is using a secure web site and a non-secure site at the same time, the MITM can infiltrate the non-secure site and use it to generate requests to the secure site.


That's why sites need to enable the Secure flag on their cookies, and to set the Strict-Transport-Security header.


Won't help in this case.


I don't know what you're saying. If you're saying that the "Secure" flag won't defend against CRIME, nobody is saying that. I'm saying that the "Secure" flag and HSTS mitigates SSL-stripping.


I think both ivanr and thatwonthelp were suggesting that any non-secure site (same origin or not) can be hijacked in order to make the requests necessary for this attack.

I understand and agree with your comment about the secure flag, but if I understood ivanr's comment it doesn't apply.


Oddly, GP's account was made for the purpose of writing that comment. I'm unsure what that means.


I think you're correct here but how would you manipulate the cookies being sent when sending XSS requests from a different domain?


You don't need to manipulate the cookies being sent, only the body of the message. The browser will send the correct cookie in the header; you control the body, and you use the length of the message to determine how close the cookie value in the body is getting to the one in the header.


For CRIME, I don't think that it's necessary to control the cookies. Having control of other parts of the request should be sufficient (e.g., using request headers, request body, etc).

As for manipulating cookies from the MITM perspective, here's one clunky way to do it: redirect the victim's browser to the plain-text version of the target web site, intercept that request, and set a new cookie (pretending to be the plain-text version of the target web site). The next request to the secure version of the target web site should contain the injected cookie. As tptacek mentioned in another comment, this approach would not work with a site that uses HTTP Strict Transport Security.


The attack as Pornin outlines it does not involve controlling the cookie header, but rather relies on being able to get content into the client's TX stream that happens to match the cookie header; think in terms of things like query args and post data.

I think all three comments in this little subthread might be saying the same thing. Go nerddom!


Would have been fun if they called this the CPE1704TKS attack

That one was not cracked one character at a time in series, portions in the middle were discovered sooner. But, yeah, would have been a more appropriate, cooler name.


Are there credible situations where you would be able to do (2) but not able to simply read the cookie from the DOM and send it to an attacker controlled website?


An attacker can build a hidden form that submits to a URL at the victim site, and submit it repeatedly using javascript. Alternatively an attacker could put the test content in the URL (we don't care if it 404s), which would work just as well and allow them to use a series of script-free pages full of images.

edit: tptacek is absolutely right; there isn't a way to defend against this in application code. All you can do is turn off TLS compression. This is NOT by any means the only approach, it's just the most obvious one.


Careful: you're outlining the attack well, but you wouldn't want to give someone the impression that defending against CSRF also defends against attacks like BEAST.


You can't read the cookie for the victim website from a malicious site. What you can do is send POST/GET requests to which the browser will attach the appropriate cookies.


Oops, yes of course, now I feel stupid for asking. Thanks.


Juliano and Thai actually coined the name "Hollywood attack" for exactly this concept back when they presented the chosen-boundary attack on TLS 1.0.


> Hollywood "guess the password one character at a time" animations

Did you see the BEAST attack? Exactly one byte at a time: http://www.youtube.com/watch?v=BTqAIDVUvrU


Serge Vaudenay's CBC padding oracle attack plays out the same way, as does the simplest attack on ECB mode crypto.


As I understand it, only 2 and 3 are needed. The (secret) cookies are the part that is determined by the SSL compression exploit, by trying subsequent parts.

You're right that the Hollywood "guess the password one character at a time" animation makes sense here. It also makes sense for timing-based attacks. I.e., if checking "AB" against the password takes longer than "A", you know the second character is B...


What I meant was that the browser must have some cookies stored to send to the target, cookies that the attacker wants to discover.


The attack as described by Thomas Pornin is indeed specific to HTTPS. Generalize it past HTTPS and its elements are:

1. Attacker controls some parts of plaintext

2. Attacker's content is mixed with content attacker wants to discover

3. Attacker can make repeated trials against the same plaintext (not against same ciphertext stream or key)

4. Defender compresses content on the fly

There are probably other cryptosystems that have this flaw, but it's less inherent to TLS than it is to the HTTPS security model.


That "Guess the password one char at a time" thing also occurs in real life when people forget to use constant time comparison functions.
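For reference, Python's standard library already ships a constant-time comparison; a naive loop leaks how many leading bytes matched, which is exactly what enables the one-character-at-a-time attack.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns at the first mismatching byte, so a guess
    # with more correct leading bytes takes measurably longer to reject.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, so the comparison time is data-independent.
    return hmac.compare_digest(a, b)
```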


A better comparison function:

Variable time until some threshold number of repeat failures occurs (e.g. 3). After this, manipulate the comparison time in such a manner that someone attempting to do this sort of attack would come up with a dummy password. If they come up with said dummy password, alert the user of an attempted attack.


Cool idea, but remember the botnets. I also am not sure what we gain by verifying their class of attempt if we're already defending against it.


You also need the client and server to both support TLS compression. But there aren't any major sites (that I know of) which do support TLS compression.


In my quick test, about 42% of the sites in the SSL Pulse data set (~180k SSL sites in Alexa's top 1m) support compression. For example, mail.yahoo.com does.


Google, Yahoo, and Twitter are vulnerable, according to the SSL Labs tool.


Very interesting, using differential analysis on the compression output is pretty clever.


Note that it is effectively the same attack as ECB byte-by-byte decryption, except using record length instead of repeated blocks as the signal.
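The analogy can be made concrete. Below is a sketch of ECB byte-at-a-time decryption against a toy oracle (sha256 of key||block stands in for a real block cipher so the example needs no crypto library; the secret and key values are made up):

```python
import hashlib

SECRET = b"sessionid=7a91f3"   # the unknown bytes we want to recover
KEY = b"0123456789abcdef"

def ecb_oracle(prefix: bytes) -> bytes:
    # Toy ECB mode: every 16-byte block is encrypted independently and
    # deterministically, so identical plaintext blocks give identical
    # ciphertext blocks -- the property the attack exploits.
    data = prefix + SECRET
    padlen = 16 - len(data) % 16
    data += bytes([padlen]) * padlen          # PKCS#7 padding
    return b"".join(hashlib.sha256(KEY + data[i:i + 16]).digest()[:16]
                    for i in range(0, len(data), 16))

def recover_secret() -> bytes:
    known = b""
    while True:
        pad = b"A" * (15 - len(known) % 16)
        idx = (len(pad) + len(known)) // 16   # block holding the next byte
        target = ecb_oracle(pad)[idx * 16:(idx + 1) * 16]
        for guess in range(256):
            trial = ecb_oracle(pad + known + bytes([guess]))
            if trial[idx * 16:(idx + 1) * 16] == target:
                known += bytes([guess])
                break
        else:                                  # no byte matched: hit the padding
            return known[:-1]
```

In CRIME the per-guess signal is the compressed record length instead of a repeated ciphertext block, but the recovery loop is the same shape.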


Stupid question: does TLS's compression use the _same_ compression state for both directions of the connection? That seems hard to me, because they could be sending information in parallel, and there's no provision for syncing streams.

So I suspect the attack there cannot work on TLS, and in fact the demo code posted in a comment attacks local zlib, not TLS.

I think the actual attack, if it's along these lines, involves getting the server to echo back your guess of the cookie, so that the cookie and your guess are both on the same stream. Perhaps with TRACE?


The attack Pornin outlined involves only the client TX stream, which the attacker shares with the defender by virtue of content-controlled Javascript.


Oh, I got Set-Cookie and Cookie mixed up in my head. You're right.


Heh, anyone else notice the proposed attack strongly resembles Level 8 in the last Stripe CTF?


That Stripe CTF was deceptively excellent. If you (the reader of this comment) got through it and had fun, I promise, we'd be both interested to talk to you and an interesting company for you to talk to. www.matasano.com/careers.


Unless I'm missing something, I reckon there's a really obvious improvement on that, but Stack Exchange doesn't let new users leave comments. Think of the old puzzle where you've got 9 bags of gold coins which weigh 10 grams each and one bag of shaved coins which weigh only 9 grams each, and you need to figure out which bag has the shaved coins in only one weighing. You should be able to do the same thing here - add one "Cookie: secret=A", two "Cookie: secret=B" and so on. Because of the limited look-back window in the compressor you'll need to interleave them like ABCD...BCD...CD... but it should give more information.

Edit: Wait, I am. That wouldn't work because it'd compress the repeated subsequences. D'oh.


You could still probably get more than a bit per request along those lines with some experimentation.


Probably, yeah. Thinking about it, one obvious approach might be to try and overlap the last step of guessing one byte with the first step of guessing the next byte, for instance.


Interesting. The attacker would also need to push the request size up to the next block boundary, but it should work.

At least we can still do compression at the HTTP (or other higher-level protocol) level. That gets us most of the benefit anyway.


I suspect most TLS libraries will flush the compression buffer after each write call, so it's highly likely that every request will end on a deflate block boundary already.


Compressed Redundancy in Message Exploit?


Compression Reveals Information Meant (to be) Encrypted?


This attack procedure applies only to dictionary-based compression. Correct?

What would be the procedure if you use bzip2 (which is based on Burrows-Wheeler transformation) compression?


It shouldn't matter, so long as redundancy between headers and body results in a smaller ciphertext.


Due to the nature of compression algorithms using BWT, changing a single byte in the uncompressed data might give you +/- 5 bytes difference in the size of the compressed data; that makes pulling off an attack like this much, much, much more difficult. Don't get me wrong, you'll get variance with deflate as well, but it's nowhere near the level of something like bzip2.

I do a lot of experimentation with new compression techniques for web demos, and I recently implemented my own compression algo from scratch based around the same building blocks as bzip2. The variance I saw was just staggering; a tiny change in my source material would totally warp the output.
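A quick, unscientific probe of that variance (stdlib only; the spread values are deliberately not asserted against each other, since the effect depends heavily on the input data):

```python
import bz2
import random
import zlib

random.seed(42)
base = bytes(random.randrange(ord("A"), ord("Z") + 1) for _ in range(4000))

def size_spread(compress) -> int:
    # Flip one byte at various positions and record how much the
    # compressed output size swings around.
    sizes = []
    for i in range(0, len(base), 199):
        mutated = base[:i] + b"#" + base[i + 1:]
        sizes.append(len(compress(mutated)))
    return max(sizes) - min(sizes)

deflate_spread = size_spread(lambda d: zlib.compress(d, 9))
bzip2_spread = size_spread(bz2.compress)
print("deflate spread:", deflate_spread, "bzip2 spread:", bzip2_spread)
```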


I would love to see somebody prove or disprove whether bzip2 is susceptible to this type of attack. My assumption is that small changes in the MTF transformation and tweaks in creating the encoding table will make it impossible to crack using this method.


Fascinating explanation / prediction, thanks.


I think it's time for me to write a blog post entitled "on the provable security of spiped".

Seriously, we know how to build secure cryptographic protocols. Unfortunately, step #1 is "don't try to be backwards compatible with the horribly broken things people were doing in the 1990s".


What, compressing before encrypting?


Note to anyone who doesn't do much crypto: you have to compress before encrypting; compressing after encrypting has no effect.

(Reason: because encrypted data is [should be] indistinguishable from random data, and random data does not compress)
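A short demonstration, using random bytes as a stand-in for well-encrypted ciphertext:

```python
import os
import zlib

plaintext = b"GET / HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n" * 40
ciphertext_like = os.urandom(len(plaintext))  # stand-in for encrypted data

# Structured plaintext shrinks dramatically; the "ciphertext" does not
# shrink at all (deflate falls back to stored blocks plus a few bytes
# of framing overhead).
assert len(zlib.compress(plaintext, 9)) < len(plaintext) // 4
assert len(zlib.compress(ciphertext_like, 9)) >= len(ciphertext_like)
```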


You're right, but again: there's advice people get to compress deliberately, not to save space in messages but to make cryptanalysis more difficult.


FWIW, that advice made more sense pre-internet, when compression formats were tighter and encryption was weaker -- the idea was that an attacker might find the key but not recognize that he successfully decrypted the data.

These days, even when we don't include cryptographic fingerprints (e.g., with the hash-and-encrypt construction C = E(M || H(M)) ) there's enough structure in compression format headers to allow an attacker to recognize if he has the right key... and he isn't going to be running a brute-force attack anyway.


Strong agree. One of the big philosophical problems in cryptographic engineering seems to be that much of it was designed to address data-at-rest, but most of the practical problems are about data-in-motion, and those are two different problems.


It's not so much that the data itself is in motion, but that online secrecy/authentication necessarily implies widely-available encryption/verification oracles. Oracles with possible implementation vulnerabilities (/features) that can be abused to answer questions such as "What's the encrypted encoding of the secret++X" or "How long does it take to check if X is valid" ?


Data in motion creates opportunities for adaptive chosen plaintext attacks that aren't present in data-at-rest scenarios. An "oracle" is a basic characteristic of a cryptosystem, not a flaw, but the "oracle attacks" you're thinking of are instances of adaptive attacks.


But it's not the fact that the data is being transported that's leading to new attacks, it's that the parties are online and responding to arbitrary messages sent by the attacker. The data itself isn't creating problems - what's new is the ability for the attacker to ask interactive questions of the parties.

When authenticating messages, a receiver necessarily gains the ability to reject a message as invalid. Adaptive chosen plaintext attacks arise when this rejection ends up containing more information than a simple Y/N. From the perspective of an attacker, the verifier becomes an oracle capable of answering say "How many bytes are valid", leading to a sub-brute-force attack.


Yes. There's absolutely zero excuse for including compression as part of a cryptographic protocol.


Can you point to an instance where you said anything like that, here or anywhere else, before last week?

It's easy for me to believe that you've internalized "minimize data-dependent branches in crypto code" and thus wouldn't have designed a compressed encrypted transport.

It is very hard for me to believe that you would have spotted this flaw immediately had anyone pointed out to you that TLS supported compression.

There is well-regarded (though not by me) crypto advice recommending that people compress before encrypting, to destroy structure in the plaintext.


This paper published in 2002 does discuss this attack. http://www.iacr.org/cryptodb/data/paper.php?pubkey=3091

"... both the SSH and TLS protocols support an option for on-the-fly compression. Potential security implications of using compression algorithms are of practical importance to people designing systems that might use both compression and encryption."


Yep. Great paper. Thanks for finding this.


It is very hard for me to believe that you would have spotted this flaw immediately had anyone pointed out to you that TLS supported compression.

I'm not claiming that I'd have noticed this particular attack. I'm saying that having compression in a supposedly secure transport layer was an obviously bad idea even before it was clear how it could be exploited. I don't need to get into a car accident to know that driving at night with my car's headlights turned off is a bad idea.

There is well-regarded (though not by me) crypto advice recommending that people compress before encrypting, to destroy structure in the plaintext.

There's a huge difference between compressing data and compressing an authentication channel. This is why compression should not be included in the secure channel -- it should be left up to the higher-level code to decide if compression is (a) completely pointless or (b) will leak information dangerously.


These are things that are obvious in hindsight. They may have been obvious to you before; or, as I'm claiming, they may just fit snugly into your (probably correct) philosophy about secure channel design.

But compression in TLS is not a relic of the 1990s; it's something that looks to have gained its earliest adoption in SSL/TLS at about the same time as Elliptic Curve.

My issue here isn't that you're wrong; it's that I think this is an extremely clever attack that says something profound about designing cryptosystems, and I wouldn't want to see Thai's and Juliano's (or Pornin's, if he's "wrong" about the prediction) work minimized by a glib comment about TLS.


compression [...] looks to have gained its earliest adoption in SSL/TLS at about the same time as Elliptic Curve

I guess by that point they had thrown in everything but the kitchen sink, and decided they might as well throw in the kitchen sink too.

I wouldn't want to see Thai's and Juliano's (or Pornin's, if he's "wrong" about the prediction) work minimized by a glib comment about TLS.

Oh, of course. I'm just irritated (as usual) by the fact that people continue to use SSL/TLS "because it's the standard" despite the fact that it's a phenomenally broken standard. There's places where you can't avoid it (HTTPS), but where it can be avoided...


Your perspective on HTTPS/TLS is that it has a history of vulnerabilities because it is poorly designed.

My perspective on HTTPS/TLS is that it has a history of vulnerabilities because it is the most carefully studied cryptosystem in human history.

I agree to disagree with you on this.


My perspective is that we know how to design protocols which are provably secure.

Your perspective is... I'm not quite sure, actually. Maybe you just don't believe in mathematical proofs?

As you say, agree to disagree -- but I'm not going to stop pointing and laughing every time a new SSL/TLS vulnerability comes out. :-)


That protocols without security proofs can survive in the real world, and protocols with security proofs still fall to implementation bugs, and that if you were going to bet on incidence of protocol design flaws vs. implementation flaws, the safe bet is on implementation flaws.


The level of civility in this thread is great, but I would pay money to see a DEF CON panel debate between you two where you each had to take a shot every 8 minutes. We could get Mikko Hypponen to moderate and pour shots!


(a) I would lose the debate, (b) it would be boring, (c) I can drink Colin under the table.


(d) I would probably end up in the hospital.

There's a reason why I don't drink -- type 1 diabetes and large quantities of alcohol don't interoperate well.


If you've got a provably secure protocol, what's the problem with formally verifying the implementation of the protocol?

I work in an area where bugs are very scary, so we use formal verification, and that's on top of having many more testers than developers.

From the perspective of this naive outsider, I'd would have expected FV to be worth it for security sensitive protocols. Is it that the protocols are too complex to be verified, or is it just not considered to be worth the effort?


I'm not making an argument against formal methods. I'm saying that if you replaced TLS with a protocol with a design proof, you could easily end up less secure.


Have there been any proposed contenders to TLS/SSL?


SSH and IPv6 come to mind.


For HTTP traffic?


Vulnerabilities don't arise by study, they arise because of vulnerabilities. We have plenty of well-studied crypto that is fine, ranging from SHA to Kerberos.

Colin is right that we know how to prove that cryptographic systems have certain security properties. The academic literature is filled with laments about TLS and proofs of fixed versions of it.


SHA? The Secure Hash Algorithm? The Secure Hash Algorithm with the length extension property? The Secure Hash Algorithm with the length extension property that was specifically forbidden from the SHA-3 contestants because it creates implementation vulnerabilities in the real world? That SHA?

Also, how does one compare a cryptographic hash function core to an entire cryptographic protocol to produce a statement about the fallibility of crypto design?


You'd have the same problem if HTTP supported compression of the request headers & body. It's because it only supports compression of the response body that there isn't any leakage already.


This is true, but it reinforces why the compression should not be done at the secure transport layer, and should instead be left to higher levels: only HTTP knows which parts of the request are potentially unsafe to include in the same compression state.

(Even if HTTP probably just got lucky here rather than deliberately making the right choice, it's still the only layer that had the chance to make the right choice).


Compression itself in a cryptographic protocol is not the issue here. The problem starts when you let an attacker add chosen plain-text before or after the secret in the same compressed and encrypted stream.

Compression before encryption is not a problem if the sender is the only person that decides what is in the message to be sent. Compression doesn't make it vulnerable to chosen plain-text attacks either. Mixing victims's and attacker's data before compression and encryption will leak data, yes.


This isn't true in theory or in practice.

In theory, protocols which fall to attacks when attackers have control of some of the message are said to be vulnerable to "chosen plaintext attacks" (if the attacker only gets 1 shot per message) or "adaptive chosen plaintext attacks" (if the attacker gets many bites at the same apple). Sound protocols don't have feasible adaptive chosen plaintext attacks.

In practice, most protocols can be coerced into carrying some data controlled by attackers. Sneaking some attacker-controlled data into a message is a very low bar for an attacker to clear.

It's true that content-controlled Javascript code makes it distinctively easy for an attacker to spirit their data into the plaintext, but don't let that confuse you. For the HTTPS/TLS cryptosystem to be sound, attackers can't use this property to decrypt the content they didn't add to the message.


The excuses I've heard for it include 1) can't do compression after the crypto, and 2) it reduces ciphertext available to attackers. SSH and PGP do it too.


What it does is eliminate the structure of the plaintext. Lots of practical crypto implementation attacks depend on plaintext structure, especially in their most naive and straightforward implementations.

A conceptual purist like Colin Percival would argue, correctly, that if there's an attack against a cryptosystem that benefits from knowing the distribution of bytes in the plaintext, that's a damning statement about the cryptosystem itself.

But compression does "break" some exploits.


What's the issue with that ? Isn't it possible to design a compression algorithm that is efficient and less predictable at the same time ? Or is it more of a general concept that I don't know of ?


It's because compression inherently creates a side-channel: the size of the message is now a function of the plaintext contents.


I thought compressing after encryption was a waste of time because encrypted text shouldn't contain compressible patterns.


You think you have a handle on how hard it is to get crypto right. Then you get a result like this. As an expert (not me; I'm not smart enough to say this) would put it: "if attacker data is mixed with defender data, none of the branches in the cryptosystem code can depend on the content".


I think crypto is possibly the best example in all of software development and/or computer science of how dangerous "unknown unknowns" are, and how a little knowledge is a dangerous thing (although threads are a serious competitor for the latter trophy).

I've studied cryptography theory, and I've implemented various ciphers and attacks, and the more I learn the more certain I become that I would never, ever use any of my own crypto code in production.


Reading the title, I thought that Chrome was turning off gzip-based compression for HTTP bodies when using SSL. According to http://security.stackexchange.com/a/19914 (linked above by jgrahamc), it is in fact header compression that is TLS-specific. So a giant sigh of relief from me.


Thanks for noting this. I also got this wrong on the first reading. Having to drop gzip compression of the content would be terrible. I imagine that header compression helps if you have many requests (polling etc.), but for most websites it is much less influential in bandwidth usage.


Problem 1 in Stanford's crypto class [1]:

Data compression is often used in data storage or transmission. Suppose you want to use data compression in conjunction with encryption. Does it make more sense to

A. Compress the data and then encrypt the result, or

B. Encrypt the data and then compress the result.

Now we know the correct answer is Neither.

[1] http://crypto.stanford.edu/~dabo/cs255/hw_and_proj/hw1.pdf


If you're encrypting and storing files, the answer is (A). The question is, in light of this result, frustratingly imprecise.


Well, B makes no sense. A makes sense but might have leaks that, depending on the use case (and in particular, on whether the attacker can execute chosen-plaintext attacks, as in SSL), may or may not weaken the system; but it still makes sense.

For a confusing example of a fairly leaky cryptosystem involving compression that works well in practice and has a proof of security that is aware of the leak, see Douceur et al.'s "convergent encryption".

http://research.microsoft.com/apps/mobile/publication.aspx?i...



"The good news is we've got working patches for most of the issues. The bad news is some of them might contribute to global warming"

Disabling compression could certainly contribute to global warming in a relatively small way...


Colour me stupid, but is there something to prevent inserting indeterminate padding while sending headers? e.g. imagine a request like

    GET / HTTP/1.0
    X-Pad: GET / HTTP/aaaaaaa
Where X-Pad randomly repeats previous bytes, and perhaps 0..8 bytes of random/repeating variable-length data. By randomizing the effectiveness of the compression, surely this attack can be generally prevented by the browser?

You could argue that given enough samples the noise could be filtered and the attack still remains, but the same could be said of many successful patches over the years (TCP sequence number randomization, Kaminsky's DNS issue, etc.), and the number of samples required would be pretty infeasible.
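A sketch of what such browser-side padding could look like (the header name and parameters are hypothetical, and as other replies note, enough samples could still average the noise away):

```python
import secrets

def pad_request(raw_request: bytes, max_pad: int = 8) -> bytes:
    # Insert an X-Pad header with 0..max_pad random filler bytes after
    # the request line, so the compressed record size no longer maps
    # cleanly onto how well the attacker's guess compressed.
    n = secrets.randbelow(max_pad + 1)
    filler = secrets.token_hex(max_pad)[:n].encode()
    request_line, sep, rest = raw_request.partition(b"\r\n")
    return request_line + sep + b"X-Pad: " + filler + b"\r\n" + rest

padded = pad_request(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```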


I'm fairly certain this could be defeated with a little differential cryptanalysis. Similar approaches to defeating side-channel attacks on smart card power usage have been tried and defeated :/


There's no point - you're better off just disabling TLS compression and doing whatever compression of the body-only at the HTTP level.


If it's random, you can just get enough samples and remove the noise. It'd have to be padding to a given number of bytes, but that probably has a vulnerability too.


Doesn't look like this has anything to do with an exploit. It is because some older web servers that don't support TLS also don't support compression.

See: http://code.google.com/p/chromium/issues/detail?id=31628


That's always possible, but why would the bug report linked from the code change be private then?

http://code.google.com/p/chromium/issues/detail?id=139744


No, I doubt that something closed in early 2010 has much to do with a new patch that has a rationale disabled from public viewing which implies it's a security emergency.


You are correct, I misread the code commit. The link to the 2010 bug was removed from the code, not added to it.


I think 'jgrahamc is right.


While this is probably security related, disabling compression is actually a good practice for clients which make a large number of SSL requests (like a browser). OpenSSL allocates a lot of memory per connection for compression/decompression. See http://journal.paul.querna.org/articles/2011/04/05/openssl-m... for a great discussion of why SSL_OP_NO_COMPRESSION might make sense for you.

Even if you don't have direct access to the library code creating SSL objects, there are still some tricks with ctypes, ffi, dlopen that have the same effect.


Does the attack apply to SPDY as well?


I think it does, although not precisely as described in the SO post above: in SPDY the request headers are compressed separately before any TLS compression would apply, so the attacker would have to put her guess of the cookie in the request headers or the URL (as the URL is sent as a pseudo-header in SPDY).


It would be interesting to know if there were any changes in the SPDY implementation. It too supports compression before encryption and may be affected by the same problem.


So, to summarize for the laypeople, if the user has Javascript and/or cookies disabled, this exploit will not work?


Hidden <iframe> elements would probably work just as well.

It might even work with JavaScript turned off. Have a page with nested iframes. Serve each one from a different IP address and have the server not answer right away. The server answers with the first iframe having a chosen URL while Eve snoops the wire. Eve tells the server which URL to use for the next iframe body that it sends. Repeat until the cookie is deduced. The server can use a nested tree of iframes to avoid having a bazillion iframes in the top-level document.


JavaScript just makes the attack faster, but it can be accomplished with raw HTML. Cookies are required, though; they're what you are after when using this attack.


Is this a disabling of compression on both ends? No DEFLATE on either requests or responses?

Will a site using HTTPS be able to serve compressed assets (like javascript & css)?

If not, we’ll urgently need a way for site devs to whitelist precomputed (and thus immune) static assets for compression.


It's a disabling of the optional compression at the TLS layer. HTTP content compression is not affected.


So what, exactly, is no longer compressed? Merely the headers themselves, and the content is not affected?


Pretty much, yes. With the proviso that if you didn't have HTTP Content-Encoding compression for your content before, then it might still have been compressed at the TLS layer, and won't be now.


The CRIME researcher says 4 requests are enough to decrypt: https://twitter.com/julianor/status/245943430570704896


To decrypt each byte.


man sshd_config.

Compression: Specifies whether compression is allowed, or delayed until the user has authenticated successfully. The argument must be "yes", "delayed", or "no". The default is "delayed".

I don't know why they chose "delayed" as default, but this seems to prevent this attack.


The SSH protocol is nothing like TLS. The SSH security model is nothing like the TLS security model. Pornin's guess about Thai and Juliano's attack does not break the TLS authentication model; it breaks the confidentiality guarantees TLS makes to the application layer. If you authenticated TLS with certificates the way SSH authenticates with keys, you wouldn't worry about the compression attack.

So, not so much.



