I wrote a comment about this a year and a half ago, and I'm just going to re-paste it verbatim here:
You want sick? Here's sick.
An AES block is 16 bytes long. You want to encrypt the lyrics to "Yellow Submarine". You can't just repeat AES for each 16-byte block, because then you can tell where the words "yellow submarine" --- 1 AES block --- occur: they'll encrypt to the same block every time. So you use CBC mode, XOR'ing each plaintext block with the previous ciphertext block, creating a unique ciphertext every time you encrypt those lyrics. But "Yellow Submarine" is 902 characters long; the last block is 10 bytes short. So you pad the message with a count of the pad bytes --- 0Ah 0Ah 0Ah 0Ah 0Ah 0Ah 0Ah 0Ah 0Ah 0Ah.
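That padding scheme can be sketched in a few lines of Python (the function name is mine, purely for illustration; this is the PKCS#7-style rule the comment describes):

```python
def pad(msg: bytes, block: int = 16) -> bytes:
    """Append N copies of the byte N, where N is the number of bytes
    needed to reach a block boundary (always 1..block, never zero)."""
    n = block - (len(msg) % block)
    return msg + bytes([n]) * n

padded = pad(b"yellow submarine!")   # 17 bytes -> 15 pad bytes of 0Fh
assert len(padded) % 16 == 0 and padded[-1] == 0x0F
```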
With me so far?
Decrypt the message. Check the padding: it should repeat as many times as the value of the last byte. If not, you decrypted badly. Send an error to that effect.
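The decrypt-side check described above looks something like this sketch; the distinguishable error at the end is exactly the "oracle":

```python
def check_padding(plaintext: bytes) -> None:
    """Last byte N must be 1..16 and the final N bytes must all be N."""
    n = plaintext[-1]
    if not (1 <= n <= 16) or plaintext[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")   # this error is what leaks

check_padding(b"hello world" + b"\x05" * 5)   # 16 bytes, valid padding
try:
    check_padding(b"hello world" + b"\x05" * 4 + b"\x06")
except ValueError:
    pass   # the attacker can observe this outcome
```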
Hey guess what! I can decrypt parts of your message!
If I stick a random AES block in your message, it will scramble the padded block. Every once in a while, that scrambled block will end in 01h. 01h is valid padding. The message will be gibberish, but at least you won't tell me it had bad padding.
You wouldn't tell me if you decrypted to bad padding if I generated 02h 02h either. But that's much less likely than 01h.
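The arithmetic behind "much less likely": a random final byte is 01h with probability 1/256, while a random final pair is 02h 02h with probability 1/65536:

```python
p_valid_01 = 1 / 256        # last byte of the scrambled block is 0x01
p_valid_0202 = 1 / 256**2   # last two bytes are both 0x02
assert p_valid_01 / p_valid_0202 == 256   # 01h padding is 256x likelier
```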
Now I know that your plaintext XOR my random block produces a last byte of 01h. Solve.
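The "solve" step can be modeled without any real AES: all the attack uses is the XOR structure of CBC. In this toy sketch (all names are mine), SECRET_INTERMEDIATE stands in for the block-cipher decryption D(C_n) that the attacker never sees directly:

```python
import os

SECRET_INTERMEDIATE = os.urandom(16)      # models D(C_n), unknown to attacker
real_prev_block = os.urandom(16)          # the genuine C_{n-1}
real_last_byte = SECRET_INTERMEDIATE[-1] ^ real_prev_block[-1]

def oracle(r: bytes) -> bool:
    """Server-side check: does D(C_n) XOR R end in valid 0x01 padding?"""
    return (SECRET_INTERMEDIATE[-1] ^ r[-1]) == 0x01

# Attacker: sweep the last byte of the injected block until it validates.
for guess in range(256):
    if oracle(bytes(15) + bytes([guess])):
        # Now D(C_n)[-1] == guess ^ 0x01, so the real plaintext byte is:
        recovered = guess ^ 0x01 ^ real_prev_block[-1]
        break

assert recovered == real_last_byte
```

A real attack also has to rule out accidental longer-padding matches (the 02h 02h case above), then repeats the trick byte by byte across the block.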
Now my point: this is actually way old news --- back to Bleichenbacher and Vaudenay and probably before that (I'm not the resident expert) --- but attacks like this have broken peer-reviewed versions of TLS, the most peer-reviewed crypto protocol ever designed.
Don't build crypto.
Some quick back-story
Vaudenay described this attack almost 10 years ago, in his Eurocrypt 2002 paper "Security Flaws Induced by CBC Padding".
We blogged about padding oracles in our AES post 2 years ago.
Then Thai Duong and Juliano Rizzo independently re-discovered them and went on a tear looking for software vulnerable to the flaw. Just a few months ago, they busted up JavaServer Faces with the exact same vulnerability.
This vulnerability is also a galactic pain in the ass to address:
* It exploits the common-case error handling behavior of most web stacks (generate exception, do something that visibly indicates to the user that an exception was generated)
* It's not enough just to stifle the errors; you have to make observable behavior in the bad-padding case identical to the good-padding case.
* If you're MAC'ing packets as well as encrypting them, you also have to make sure the timing is the same in the bad-padding and good-padding case, which means you have to "pretend" to MAC the bad-padding packets, which is extremely counterintuitive (you have to catch an exception then pass "something" to a MAC verification routine).
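The counterintuitive "pretend to MAC" flow might look like this sketch. The repeating-key XOR "cipher" and all the names are toy stand-ins, not a real scheme; the point is the control flow, not the crypto:

```python
import hmac, hashlib, os

ENC_KEY, MAC_KEY = os.urandom(16), os.urandom(16)

class GenericError(Exception):
    """One indistinguishable error for padding AND MAC failures."""

def unpad(p: bytes) -> bytes:
    n = p[-1]
    if not (1 <= n <= 16) or p[-n:] != bytes([n]) * n:
        raise ValueError
    return p[:-n]

def toy_decrypt(body: bytes) -> bytes:
    # Stand-in for CBC decryption: a repeating-key XOR, nothing more.
    return bytes(b ^ ENC_KEY[i % 16] for i, b in enumerate(body))

def receive(blob: bytes) -> bytes:
    body, tag = blob[:-32], blob[-32:]
    plain = toy_decrypt(body)
    try:
        msg, pad_ok = unpad(plain), True
    except ValueError:
        # Don't bail out: substitute a dummy and keep going, so the
        # bad-padding path does the same work as the good-padding path.
        msg, pad_ok = b"\x00" * len(plain), False
    tag_ok = hmac.compare_digest(
        hmac.new(MAC_KEY, msg, hashlib.sha256).digest(), tag)
    if not (pad_ok and tag_ok):
        raise GenericError()          # same error either way
    return msg

# Round-trip demo: pad, toy-encrypt, append a tag over the message.
msg = b"hello"
padded = msg + bytes([11]) * 11
blob = toy_decrypt(padded) + hmac.new(MAC_KEY, msg, hashlib.sha256).digest()
assert receive(blob) == b"hello"
```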
Some food for thought
It's 2010, vulnerability research is no longer a cottage industry (it's hundreds of millions annually), and we're just now hearing about exploits of one of the simplest crypto flaws there is (there is almost no math to this one).
Software security people simply don't read crypto papers.
Crypto researchers, at least until recently, don't break software.
God only knows how much other scary stuff is buried in those papers waiting for an enterprising bug hunter to try them out.
Make sure fat cookies are MAC'd AND CBC-encrypted; on receipt, decrypt, always check the MAC, generate indistinguishable errors for MAC failures and padding failures, and expire sessions associated with bad decryptions or MAC failures.
Encrypt-then-Sign addresses this problem, but it's a design change, and it's not the case that Encrypt-then-Sign is an industrywide best practice (Practical Cryptography, for instance, recommends the opposite).
Encrypt-then-Sign prevents the padding oracle attack: a 1-bit change to the input will immediately fail the MAC verification, preventing attackers from generating the repeated trials they need to exploit the padding error.
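A minimal Encrypt-then-Sign receive path might look like this sketch (function names are mine): verify the tag over the ciphertext BEFORE touching the padding, so a flipped bit dies at the MAC and the padding check never becomes an oracle.

```python
import hmac, hashlib, os

MAC_KEY = os.urandom(16)

def protect(ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed over the ciphertext itself."""
    return ciphertext + hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()

def accept(blob: bytes) -> bytes:
    body, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("reject")   # attacker never reaches unpadding
    return body                      # only now is it safe to decrypt/unpad

blob = protect(b"some CBC ciphertext blocks")
assert accept(blob) == b"some CBC ciphertext blocks"
```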
This doesn't depend on the algorithms you use. The padding oracle attack works just fine with DES-EDE, Serpent, AES or Twofish; HMAC-SHA1, HMAC-SHA256, and (probably) HMAC-MD5 are all equally effective at combating it.
The train has already left the station on encrypted fat cookies. It's not productive to argue that the problem is fundamental to the design; it isn't. This is the textbook definition of an implementation fault.
Note that (a) lots of people have screwed up HMAC recently and (b) HMAC isn't a solution to the padding oracle problem unless you take pains to make sure it is.
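One common way people screw up HMAC is the tag comparison: a byte-by-byte `==` check can leak, via timing, how many leading tag bytes matched. Python's stdlib ships a constant-time primitive for exactly this:

```python
import hmac, hashlib

key, msg = b"k" * 16, b"payload"
tag = hmac.new(key, msg, hashlib.sha256).digest()

def verify(candidate: bytes) -> bool:
    # hmac.compare_digest takes the same time whether the mismatch
    # is in the first byte or the last.
    return hmac.compare_digest(tag, candidate)

assert verify(tag)
assert not verify(b"\x00" * 32)
```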
Regardless of the issue mentioned in this article, it could be argued that encrypting cookies --- which basically means storing data on a user's computer that the user is not even allowed to look at --- is a flawed idea. If the user should not be able to look at it, why store it on the user's computer in the first place? And why must the app hide the data from the user?
(Unless I'm mistaken and there's other advantages for choosing encryption over MAC?)
Interesting, how could you exploit padding oracle when you're using HMAC?
You need to do both: encrypt to provide confidentiality, MAC to provide integrity.
In sign-then-encrypt designs --- which, from what I've seen, is what most designs are --- you can still exploit a padding oracle when the app catches padding errors and skips the (pointless) MAC verification for a packet it plans to discard anyway.
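The exploitable shape described there looks like this toy sketch (names mine): padding is checked first, and a bad pad short-circuits past the MAC with its own distinct, faster error.

```python
import hmac, hashlib, os

KEY = os.urandom(16)

def vulnerable_receive(plain: bytes, tag: bytes) -> bytes:
    """Padding checked first; a bad pad skips the MAC entirely."""
    n = plain[-1]
    if not (1 <= n <= 16) or plain[-n:] != bytes([n]) * n:
        raise ValueError("padding error")   # early, distinct: the oracle
    msg = plain[:-n]
    if not hmac.compare_digest(
            hmac.new(KEY, msg, hashlib.sha256).digest(), tag):
        raise ValueError("MAC error")       # later, different failure
    return msg
```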
So wait - this is about clientside cookies that a developer would knowingly choose to write versus something that is just automatically happening behind the scenes with asp.net? Aka, if you never write any cookies - encrypted or otherwise - then this doesn't apply to you or your app?
I don't know ASP.NET at all, but this is the way I understood the article:
If a developer writes a cookie, it's not encrypted and the user can easily change it. Just normal cookies.
If a developer writes a session cookie, ASP.NET encrypts it behind the scenes and stores it in a cookie. Because the user doesn't know the encryption key, he can't change the session cookie. The developer doesn't do any encryption himself; he just tells ASP.NET "Please store this value at the user's end, but he should not be able to tamper with it".
However, crypto is hard, so because of the issue mentioned in this article it is possible to tamper with the data.
All anyone "officially" knows right now is that there is some place in ASP.NET where the stack uses AES/CBC to encrypt data that is then passed back and forth with clients, and that place failed to prevent padding oracles. 'brl is close with Juliano Rizzo, and downthread he suggested that this flaw allows attackers to 100% effectively forge authentication credentials on any ASP.NET application.
Software security people simply don't read crypto papers.
Crypto researchers, at least until recently, don't break software.
Do you know of any statistics on how often attacks on web applications are cracks of the crypto involved as compared to many of the other weak links in the chain? Seems like hackers/crackers will find much easier ways to circumvent the system, even if the crypto has weaknesses as outlined in the above article.
You don't find a lot of SQLI and XSS in global retail banking applications (well, you do, but only in the pentest on the staging environment). A padding oracle that lets you corrupt clientside state, though: pretty devastating.
We clearly don't deal with the same banks. I'd say XSS is the number one most commonly found vuln we come across in financial sector web apps. Some of our customers even accept the risks associated with it because the cost of fixing it is greater than the resulting insurance-adjusted loss.
SQL Injection happens now and again but less often, possibly because with a lot of the banks we deal with they're not necessarily using an SQL backend as opposed to some sort of broker for backend systems. If you can abuse the broker though (and that is a common theme) then there's all kinds of fun/problems to be had.
We're in NYC and Chicago and financial services is a practice focus, but this is a distinctively unproductive argument to have. Clearly there are banking applications that have XSS. Clearly some of those XSS flaws make it into production. Sometimes, this even happens in retail banking apps; it may have happened on the retail banking apps of a global bank somewhere.
I think if you re-read my comments in the context I wrote them in, you'll see that all I'm saying is that there are heavily-audited apps for which a padding oracle flaw would be a reasonable first step for an attacker.
To concur with Thomas' statement above, I've been doing application testing for about 11 years now (predominantly in the Financial sector), and with the exception of when XSS was initially discovered (and over-hyped) I don't regularly encounter production e-commerce or online banking apps which are vulnerable to XSS.
When XSS was discovered, we had one large financial institution that we did testing for make us go back and retest all the applications we'd tested for them for XSS (this was about 20 online banking apps), and for about the next two years, it would pop up in tests for other clients.
I can safely state that now it's pretty rare though, as the majority of the applications we're looking at have been tested annually, and the low hanging fruit has been picked a long time ago.
But this type of attack, like most crypto attacks, seems to be singular in nature. It's one thing to have access to the client-side state (via machine compromise or MITM) of a single person or single computer. But it certainly seems that attacks of this directed nature are not very common against web apps. It seems more likely that a huge net is thrown against a large number of computers and a few get compromised. Hardly a directed attack.
It seems that weaknesses like these, while problematic and fun to discuss, are not really the ways in which web applications are generally attacked.
That's not even a little bit true. Are there people dragnetting the Internet looking for the same Wordpress flaw over and over again? Yes. Does that mean people aren't spending hours/days/months attacking single banking apps? Of course they are.
You aren't going to break a Fortune 100 bank's retail app with a textbook SQLI. Those apps have been audited several times over. It would follow from your worldview that those apps simply don't get attacked. Of course they'll get attacked.
Also: padding oracles? HMAC timing attacks? Not rocket surgery.
I think we're talking about two different things here. You seem to be talking about attacks against the application itself like SQLI, etc. But I'm talking about attacks on the much larger numbers of users of the system.
Pick a random Fortune 100 bank app and wouldn't it be more fruitful to attack the PCs of the clients rather than the server? You've got probably millions of users of the application, most of whom are barely secure in the first place. As you said, the web server side is going to be the toughest link in the chain. Not impervious, but certainly difficult. Seems much more likely the hacker/cracker will target the users. In such an attack, the crypto is not the thing attacked.
Yep, so apply the patch. Now what do you do about your millions of insecure clients running unpatched, out-dated OS/browsers? How do you solve that problem? Security folks don't want to talk about that because it's hard. The client-side is a much larger number of opportunities and much weaker link in the chain. Yes, you need to deal with both, but it's hard to find discussion on the latter, and a lot on the former.
Yes, the sun is still spinning on its axis, slowly burning its way through our atmosphere and hastening the inevitable demise of our species, whether or not you patch the ASP.NET CBC padding oracle. But for today, can we just focus on the padding oracle?
I thought "the crypto is protecting the app from its users" was pretty clear, but apparently it wasn't. Let me try it again: "the crypto that Juliano and Thai broke is protecting your bank from you". Is that more helpful? The attack doesn't involve sniffing.
Can I ask, what's the psychology of knee-jerk "this vulnerability doesn't matter" comments? You clearly don't know what the flaw is, and that's fine, but I'm really curious: why do you want it to be pointless? Isn't the world a more interesting place when ASP.NET can blow up spectacularly because of a 2-line programming error?
What's your point? That you don't like the way ASP.NET, J2EE/JSF, Rails, and Django work? You started out saying "just use SSL and you don't have this problem". You were wrong. Then you said "but all I can see is stuff I already know". You were wrong there too. Now you've backpedaled all the way to first principles. Sure, now you're not wrong; indeed, if everyone just redesigns their applications not to use AES at all, they will in fact be safer.
OK, the jab about SSL was misguided. On my first read of the article, I thought it was about protecting cookies from eavesdroppers. I didn't consider people attacking the app because I thought it was kind of silly that you'd leave yourself open to that. Oh well, turns out I'm wrong and the world is crazy.
I suspect that there are precious few ASP.NET applications where this is any sort of a risk (millions? - rubbish) and any developer that lets his or her server side code send anything other than a generic error message back to the client in 2010 would be an idiot.
Bruce Leidl ('brl) works with Juliano, and just commented that this is a 100%-effective attack against any ASP.NET application's authentication.
It's OK to be skeptical about things (god knows I am), but it's important to be clear about where your actual knowledge ends. I think one thing that's confusing people is that the talk hasn't actually been presented yet.
Between your comments, brl's, and the JSF paper, it's clear enough how this works now. Thanks.
I still suspect this is being overhyped. Being able to forge the auth cookie 100% of the time doesn't change the fact that you need a server-recognized value to actually forge.
This seems like it could indeed badly damage apps that stash important things in client state expecting it to be secure, but a lot of bog standard asp.net apps will be unscathed, unless I'm missing some killer detail. We'll see.
Edit: If the contents of a forms authentication cookie are not session-bound and assume that crypto == safe, that could certainly make things interesting. No idea though.
Edit2: Increasingly convinced that this could enable devastating attacks on forms auth (a large proportion of asp.net apps to be sure) - the content of a forms auth ticket appears to be trivially constructed.
Incorrect. OP is correct. The article is sensationalistic to a fault. Anyone upvoting yours and downvoting his either doesn't understand how ASP.NET works or didn't read the article or its comments. Sorry - not trying to be a jerk, but it bugs me whenever incorrect information is upvoted, especially when it comes at the expense of downvoting the correct information.
The first thing you learn about error handling and security is to not send back error messages that have useful information that hackers could use for penetration; that is programming 101 stuff. Therefore, any decent ASP.NET app does not send back the error messages that the article talks about - the app devs hide those behind, "Oops! Sorry 404!" type stuff. It's super simple and done on almost all ASP.NET sites.
So yes, there is a vulnerability, but this is sort of like the SQL Server bug from 5+ years ago that depended on there being no password for the server admin account. Yes, certain people will be bitten, but by and large "professionals" will have covered for this already.
And no, "the generic error" is not all they need b/c the developer says "Oops 404!" as the generic error or "Oops Server Error 500". Nothing substantial and no information that would give away what happened or why.
They don't need timing. They don't need to sniff. HMAC doesn't solve the problem. They don't use detailed errors. You should probably read their JSF paper from earlier this year to see the attack pattern they're working with.
Here is a clue: they are using ASP.NET behavior to generate a one bit signal from the target. Their attack requires many tens of thousands of requests. That's about as much as I can say.
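Rough request-count arithmetic for a one-bit-per-request oracle: each byte takes up to 256 queries, so a 16-byte block takes up to 4096, and a multi-block ticket lands squarely in the "many tens of thousands" range (the 12-block ticket size below is an assumption for illustration):

```python
queries_per_byte = 256          # worst-case sweep of one tweak byte
block_bytes = 16
blocks = 12                     # assumed ticket size, ~192 bytes
worst_case = queries_per_byte * block_bytes * blocks
assert worst_case == 49152      # comfortably "many tens of thousands"
```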