'Padding Oracle' Crypto Attack Affects Millions of ASP.NET Apps (threatpost.com)
38 points by Uncle_Sam 2355 days ago | 48 comments

This looks pretty bad. The standard way to authenticate in an asp.net app, FormsAuthentication.RedirectFromLoginPage(), basically encrypts a username and saves it to a cookie.

Any subsequent requests to a site will trust that cookie as long as it decrypts properly.

Yes, exactly. Among other things, this attack allows you to forge authentication cookies on any ASP.NET application with a 100% success rate.
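A sketch of the pattern being described (hypothetical names and a toy XOR stream cipher standing in for the real machinery — this is not the actual ASP.NET API): the server treats anything that decrypts cleanly as authentic.

```python
import os

KEY = os.urandom(16)  # the server's secret (a machineKey stand-in)

def xor_stream(data, key):
    # Toy stand-in for the framework's cipher; the real thing used AES-CBC.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def issue_auth_cookie(username):
    # RedirectFromLoginPage-style: encrypt the username into a cookie.
    return xor_stream(username.encode(), KEY)

def user_from_cookie(cookie):
    # The flawed assumption: whatever decrypts is trusted as-is.
    return xor_stream(cookie, KEY).decode(errors="replace")
```

If an attacker can arrange for a cookie of their choosing to decrypt "properly", `user_from_cookie` happily vouches for it — which is exactly the leverage a padding oracle provides.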

I wrote a comment about this a year and a half ago, and I'm just going to re-paste it verbatim here:

You want sick? Here's sick.

An AES block is 16 bytes long. You want to encrypt the lyrics to "Yellow Submarine". You can't just run AES independently over each 16-byte block, because then you can tell where the words "yellow submarine" --- exactly 1 AES block --- occur: they'll encrypt to the same ciphertext block every time. So you use CBC mode, XOR'ing each plaintext block with the previous ciphertext block, creating a unique ciphertext every time you encrypt those lyrics. But "Yellow Submarine" is 902 characters long; the last block holds only 6 bytes, leaving it 10 short. So you pad the message with a count of the pad bytes --- 0Ah, repeated 10 times.

With me so far?

Decrypt the message. Check the padding: it should repeat as many times as the value of the last byte. If not, you decrypted badly. Send an error to that effect.
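In Python, the scheme described above (PKCS#7-style padding plus the naive check) might look like:

```python
def pkcs7_pad(msg, block=16):
    # Append n bytes, each with value n, where n is the shortfall (1..block).
    n = block - len(msg) % block
    return msg + bytes([n]) * n

def pkcs7_check(padded, block=16):
    # The check described above: the last byte's value, repeated that many times.
    n = padded[-1]
    return 1 <= n <= block and padded[-n:] == bytes([n]) * n
```

The `pkcs7_check` answer — valid or not — is the one bit the rest of the attack feeds on.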

Hey guess what! I can decrypt parts of your message!

If I stick a random AES block in your message, it will scramble the padded block. Every once in a while, that scrambled block will end in 01h. 01h is valid padding. The message will be gibberish, but you at least won't tell me it had bad padding.

You wouldn't tell me if you decrypted to bad padding if I generated 02h 02h either. But that's much less likely than 01h.

Now I know that your plaintext XOR my random block produces a last byte of 01h. Solve.
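Putting the whole trick together — a self-contained simulation, with a toy per-byte cipher standing in for AES (the attack never looks inside the block cipher, so any stand-in demonstrates it):

```python
import os

BLOCK = 16
KEY = bytes(range(1, BLOCK + 1))  # toy key

def toy_encrypt_block(b):
    # Toy per-byte cipher standing in for AES; the attack treats it as a black box.
    return bytes((x + k) % 256 for x, k in zip(b, KEY))

def toy_decrypt_block(b):
    return bytes((x - k) % 256 for x, k in zip(b, KEY))

def cbc_encrypt(msg):
    n = BLOCK - len(msg) % BLOCK
    msg += bytes([n]) * n                      # pad with the count
    iv = os.urandom(BLOCK)
    out, prev = [iv], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_encrypt_block(bytes(a ^ b for a, b in zip(msg[i:i + BLOCK], prev)))
        out.append(prev)
    return b"".join(out)

def padding_oracle(ct):
    # The vulnerable server: decrypt, then admit whether the padding was valid.
    blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    prev, plain = blocks[0], b""
    for c in blocks[1:]:
        plain += bytes(a ^ b for a, b in zip(toy_decrypt_block(c), prev))
        prev = c
    n = plain[-1]
    return 1 <= n <= BLOCK and plain[-n:] == bytes([n]) * n

def recover_last_byte(prev_block, target_block):
    # Attacker: stick a chosen block in front of the target, hunt for 01h padding.
    r = bytearray(os.urandom(BLOCK))
    for guess in range(256):
        r[-1] = guess
        if padding_oracle(bytes(r) + target_block):
            # Rule out an accidental 02h 02h (etc.) hit by perturbing byte -2:
            r2 = bytearray(r)
            r2[-2] ^= 0xFF
            if padding_oracle(bytes(r2) + target_block):
                # decrypted last byte = guess ^ 01h; undo the real CBC chaining:
                return guess ^ 0x01 ^ prev_block[-1]
    raise RuntimeError("oracle never accepted")
```

Extending this from the last byte to the whole block is just bookkeeping: once the last byte is known, aim for 02h 02h padding to get the next byte, and so on.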

Now my point: this is actually way old news --- back to Bleichenbacher and Vaudenay and probably before that (I'm not the resident expert) --- but attacks like this have broken peer-reviewed versions of TLS, the most peer-reviewed crypto protocol ever designed.

Don't build crypto.

Some quick back-story

Vaudenay described this attack almost 10 years ago, in this paper:


It's very readable for a crypto paper.

We blogged about padding oracles in our AES post 2 years ago.

Then Thai Duong and Juliano Rizzo independently re-discovered them and went on a tear looking for software vulnerable to the flaw. Just a few months ago, they busted up Java Server Faces with the exact same vulnerability.

The nature of the attack is this: in almost all modern web frameworks, session storage can be stored "clientside" in fat cookies. These cookies are encrypted so that users can't modify their contents. Because of this vulnerability, attackers can take legitimate fat cookies, decrypt them, and (often) modify their contents. Those attackers can then (say) issue fraudulent payments, change their user ID, alter their account balance, or inject SQL or Javascript into code that never expected to be exposed to attack because it was guarded with AES.

This vulnerability is also a galactic pain in the ass to address:

* It exploits the common-case error handling behavior of most web stacks (generate exception, do something that visibly indicates to the user that an exception was generated)

* It's not enough just to stifle the errors; you have to make observable behavior in the bad-padding case identical to the good-padding case.

* If you're MAC'ing packets as well as encrypting them, you also have to make sure the timing is the same in the bad-padding and good-padding case, which means you have to "pretend" to MAC the bad-padding packets, which is extremely counterintuitive (you have to catch an exception then pass "something" to a MAC verification routine).
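A sketch of that counterintuitive shape (MAC-then-encrypt assumed, as in TLS; the record layout and names here are illustrative):

```python
import hmac
import hashlib

BLOCK = 16

def unpad(data):
    n = data[-1]
    if not (1 <= n <= BLOCK and data[-n:] == bytes([n]) * n):
        raise ValueError("bad padding")
    return data[:-n]

def verify_record(mac_key, decrypted, tag):
    # On a padding error we still feed *something* through MAC verification,
    # so the bad-padding and good-padding paths look (and time) the same.
    try:
        msg = unpad(decrypted)
        pad_ok = True
    except ValueError:
        msg = decrypted          # the "pretend" input for the MAC pass
        pad_ok = False
    mac_ok = hmac.compare_digest(hmac.new(mac_key, msg, hashlib.sha256).digest(), tag)
    return pad_ok and mac_ok     # one generic verdict, never a padding-specific error
```

The caller sees a single pass/fail; neither the error message nor the elapsed time says which check tripped.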

Some food for thought

It's 2010, vulnerability research is no longer a cottage industry (it's hundreds of millions annually), and we're just now hearing about exploits of one of the simplest crypto flaws there is (there is almost no math to this one).

Software security people simply don't read crypto papers.

Crypto researchers, at least until recently, don't break software.

God only knows how much other scary stuff is buried in those papers waiting for an enterprising bug hunter to try them out.

What are potential solutions to this? Keep a KV store of ids and checksums of every cookie sent and verify them when requesting the data back from the client?

Make sure fat cookies are MAC'd AND CBC-encrypted, decrypt, always check the MAC, generate indistinguishable errors for the MAC failure and the padding failure, and expire sessions associated with bad decryptions or MAC failures.

Encrypt-then-Sign addresses this problem, but it's a design change, and it's not the case that Encrypt-then-Sign is an industrywide best practice (Practical Cryptography, for instance, recommends the opposite).

I think it depends on the encryption and signing algorithm implementation as to what works best, but then again I deal with app security and am not a cryptographer.

Encrypt-then-sign prevents the padding oracle attack; a 1 bit change to the input will immediately fail the MAC verification, preventing attackers from generating the repeated trials they need to exploit the padding error.

This doesn't depend on the algorithms you use. The padding oracle attack works just fine with DES-EDE, Serpent, AES or Twofish; HMAC-SHA1, HMAC-SHA256, and (probably) HMAC-MD5 are all equally effective at combating it.
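A minimal encrypt-then-MAC ("Encrypt-then-Sign") sketch of a sealed fat cookie — a toy XOR stream stands in for whichever block cipher you pick, since, as noted above, the construction is what matters, not the algorithm:

```python
import hmac
import hashlib
import os

ENC_KEY = os.urandom(16)
MAC_KEY = os.urandom(16)

def toy_encrypt(data):
    # Stand-in for AES-CBC; swap in a real cipher without changing the shape.
    return bytes(b ^ ENC_KEY[i % 16] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # an XOR stream is its own inverse

def seal(cookie_value):
    ct = toy_encrypt(cookie_value)
    tag = hmac.new(MAC_KEY, ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(blob):
    ct, tag = blob[:-32], blob[-32:]
    # The MAC is checked FIRST, before any decryption or padding logic runs,
    # so a padding oracle never gets a chance to answer.
    if not hmac.compare_digest(hmac.new(MAC_KEY, ct, hashlib.sha256).digest(), tag):
        return None   # one indistinguishable failure mode
    return toy_decrypt(ct)
```

Because a forged or bit-flipped cookie dies at the MAC check with the same generic failure every time, the attacker never gets the repeated padding-valid/padding-invalid trials the attack needs.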

I was thinking more about the problems with MD5, but you're right, thanks for the comment.

I believe the real solution to this problem is to not encrypt the cookies at all, but rather just use HMAC to make sure no-one can tamper with them.

If you need to put sensitive data in cookies, you're doing it wrong!

The train has already left the station on encrypted fat cookies. It's not productive to argue that the problem is fundamental to the design; it isn't. This is the textbook definition of an implementation fault.

Note that (a) lots of people have screwed up HMAC recently and (b) HMAC isn't a solution to the padding oracle problem unless you take pains to make sure it isn't.
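One of those recent HMAC screwups is comparing tags with a plain `==`, which short-circuits at the first differing byte and leaks timing. A signed-cookie sketch with a constant-time compare (names illustrative):

```python
import hmac
import hashlib

def sign(key, cookie):
    return cookie + b"." + hmac.new(key, cookie, hashlib.sha256).hexdigest().encode()

def verify(key, blob):
    cookie, _, tag = blob.rpartition(b".")
    expected = hmac.new(key, cookie, hashlib.sha256).hexdigest().encode()
    # A naive `tag == expected` bails at the first mismatched byte, letting an
    # attacker grind out the tag byte-by-byte from response times. Use a
    # constant-time comparison instead.
    return hmac.compare_digest(tag, expected)
```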

Regardless of the issue mentioned in this article, it could be argued that encrypting cookies, which basically means that you're storing data on a user's computer which the user is not allowed to even look at, is a flawed idea. If the user should not be able to look at it, why store it at the user's computer in the first place? And why must the app hide the data for the user?

(Unless I'm mistaken and there's other advantages for choosing encryption over MAC?)

Interesting, how could you exploit padding oracle when you're using HMAC?

You need to do both: encrypt to provide confidentiality, MAC to provide integrity.

In sign-then-encrypt designs --- which, from what I've seen, is what most designs are --- you can still exploit a padding oracle when the app catches padding errors and skips the (pointless) MAC verification for a packet it plans to discard anyway.

So wait - this is about clientside cookies that a developer would knowingly choose to write versus something that is just automatically happening behind the scenes with asp.net? Aka, if you never write any cookies - encrypted or otherwise - then this doesn't apply to you or your app?

I don't know ASP.NET at all, but this is the way I understood the article:

If a developer writes a cookie, it's not encrypted and the user can easily change it. Just normal cookies.

If a developer writes a session cookie, ASP.NET encrypts it behind the scene and stores it in a cookie. Because the user doesn't know the encryption key, he can't change the session cookie. The developer doesn't do any encryption himself, he just tells ASP.NET "Please store this value at the user, but he should not be able to tamper with it".

However, crypto is hard, so because of the issue mentioned in this article it is possible to tamper with the data.

Could someone correct me if I'm wrong?

All anyone "officially" knows right now is that there is some place in ASP.NET where the stack uses AES/CBC to encrypt data that is then passed back and forth with clients, and that place failed to prevent padding oracles. 'brl is close with Juliano Rizzo, and downthread he suggested that this flaw allows attackers to 100% effectively forge authentication credentials on any ASP.NET application.

The only thing that sticks out in my mind as being a potential issue is user IDs.

"Check the padding: it should repeat as many times as the value of the last byte."

Shouldn't that be 'at least as many times as'? After all, the last byte of the message itself could coincidentally equal the pad value, too.
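For what it's worth, the usual PKCS#7-style convention sidesteps that worry: the receiver strips exactly as many bytes as the count names, no more, so a message byte that happens to equal the pad value is harmless. A sketch:

```python
def pkcs7_unpad(padded, block=16):
    n = padded[-1]
    if not (1 <= n <= block and padded[-n:] == bytes([n]) * n):
        raise ValueError("bad padding")
    return padded[:-n]          # strip exactly n bytes, no more

# A message that genuinely ends in 06h survives a round trip, because only
# the n bytes named by the count byte are removed.
```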


Software security people simply don't read crypto papers.

Crypto researchers, at least until recently, don't break software.

Do you know of any statistics on how often attacks on web applications are cracks of the crypto involved, as compared to the many other weak links in the chain? It seems like hackers/crackers will find much easier ways to circumvent the system, even if the crypto has weaknesses like the ones outlined in the article above.

You don't find a lot of SQLI and XSS in global retail banking applications (well, you do, but only in the pentest on the staging environment). A padding oracle that lets you corrupt clientside state, though: pretty devastating.

We clearly don't deal with the same banks. I'd say XSS is the number one most commonly found vuln we come across in financial sector web apps. Some of our customers even accept the risks associated with it because the cost of fixing it is greater than the resulting insurance-adjusted loss.

SQL Injection happens now and again but less often, possibly because with a lot of the banks we deal with they're not necessarily using an SQL backend as opposed to some sort of broker for backend systems. If you can abuse the broker though (and that is a common theme) then there's all kinds of fun/problems to be had.

We're in NYC and Chicago and financial services is a practice focus, but this is a distinctly unproductive argument to have. Clearly there are banking applications that have XSS. Clearly some of those XSS flaws make it into production. Sometimes this even happens in retail banking apps; it may have happened on the retail banking apps of a global bank somewhere.

I think if you re-read my comments in the context I wrote them in, you'll see that all I'm saying is that there are heavily-audited apps for which a padding oracle flaw would be a reasonable first step for an attacker.

To concur with Thomas' statement above, I've been doing application testing for about 11 years now (predominantly in the Financial sector), and with the exception of when XSS was initially discovered (and over-hyped) I don't regularly encounter production e-commerce or online banking apps which are vulnerable to XSS.

When XSS was discovered, we had one large financial institution that we did testing for make us go back and retest all the applications we'd tested for them for XSS (this was about 20 online banking apps), and for about the next two years, it would pop up in tests for other clients.

I can safely state that now it's pretty rare though, as the majority of the applications we're looking at have been tested annually, and the low hanging fruit has been picked a long time ago.

But this type of attack, like most crypto attacks, seems to be singular in nature. It's one thing to have access to the client-side state of a single person or single computer (via machine compromise or MITM). But it certainly seems that attacks of this directed nature are not very common against web apps. It seems more likely that a huge net is thrown over a large number of computers and a few get compromised. Hardly a directed attack.

It seems that weaknesses like these, while problematic and fun to discuss, are not really the ways in which web applications are generally attacked.

That's not even a little bit true. Are there people dragnetting the Internet looking for the same Wordpress flaw over and over again? Yes. Does that mean people aren't spending hours/days/months attacking single banking apps? Of course they are.

You aren't going to break a Fortune 100 bank's retail app with a textbook SQLI. Those apps have been audited several times over. It would follow from your worldview that those apps simply don't get attacked. Of course they'll get attacked.

Also: padding oracles? HMAC timing attacks? Not rocket surgery.

I think we're talking about two different things here. You seem to be talking about attacks against the application itself like SQLI, etc. But I'm talking about attacks on the much larger numbers of users of the system.

Pick a random Fortune 100 bank app: wouldn't it be more fruitful to attack the PCs of the clients rather than the server? You've got probably millions of users of the application, most of whom are barely secure in the first place. As you said, the web server side is going to be the toughest link in the chain. Not impervious, but certainly difficult. It seems much more likely the hacker/cracker will target the users. In such an attack, the crypto is not the thing attacked.

Both attacks happen routinely. A serverside crypto flaw that lets users impersonate other users is not just something fun to talk about; that's an emergency patch for most sensitive apps.

Yep, so apply the patch. Now what do you do about your millions of insecure clients running unpatched, outdated OS/browsers? How do you solve that problem? Security folks don't want to talk about that because it's hard. The client side is a much larger set of opportunities and a much weaker link in the chain. Yes, you need to deal with both, but it's hard to find discussion of the latter, and there's a lot on the former.

Yes, the sun is still spinning on its axis, slowly burning its way through our atmosphere and hastening the inevitable demise of our species, whether or not you patch the ASP.NET CBC padding oracle. But for today, can we just focus on the padding oracle?

Or you could use SSL and not give a shit.

The bug isn't being able to decrypt cookies, it's thinking that encrypted cookies are useful.

SSL has absolutely nothing to do with this vulnerability. The encryption being discussed here protects the application from its own users.

From the article: "The attack allows someone to decrypt sniffed cookies, which could contain valuable data such as bank balances, Social Security numbers or crypto keys."

I'm pretty sure I already know my bank balance, so hiding it from me seems pretty pointless. And like I said, protecting an app from its users by encrypting cookies is a stupid idea from the start.

I thought "the crypto is protecting the app from its users" was pretty clear, but apparently it wasn't. Let me try it again: "the crypto that Juliano and Thai broke is protecting your bank from you". Is that more helpful? The attack doesn't involve sniffing.

Can I ask, what's the psychology of knee-jerk "this vulnerability doesn't matter" comments? You clearly don't know what the flaw is, and that's fine, but I'm really curious: why do you want it to be pointless? Isn't the world a more interesting place when ASP.NET can blow up spectacularly because of a 2-line programming error?

OK, fine, the crypto is protecting the bank from me. It's still stupid. Don't send users secrets with the implicit promise that they won't tamper with it.

It's a lot better for my bank to not send me anything to tamper with. Then it doesn't matter if the crypto works or not, because there's nothing I can tamper with.

Why is it pointless to point out that there's a better way to build apps that avoids the entire flaw?

What's your point? That you don't like the way ASP.NET, J2EE/JSF, Rails, and Django work? You started out saying "just use SSL and you don't have this problem". You were wrong. Then you said "but all I can see is stuff I already know". You were wrong there too. Now you've backpedaled all the way to first principles. Sure, now you're not wrong; indeed, if everyone just redesigns their applications not to use AES at all, they will in fact be safer.

OK, the jab about SSL was misguided. On my first read of the article, I thought it was about protecting cookies from eavesdroppers. I didn't consider people attacking the app because I thought it was kind of silly that you'd leave yourself open to that. Oh well, turns out I'm wrong and the world is crazy.

I suspect that there are precious few ASP.NET applications where this is any sort of a risk (millions? - rubbish) and any developer that lets his or her server side code send anything other than a generic error message back to the client in 2010 would be an idiot.

"Nothing to see here" as they say.

The generic error is all they need. Next time, don't write "nothing to see here" when all you did was skim a news story.

To be fair, the "millions" does seem quite hyperbolic, at least insofar as it assumes every ASP.NET app will be in grave and imminent danger. Plenty of ASP.NET devs don't use cookies at all beyond the default behavior, which is to track the session id. Modifying that won't be terribly useful without another known session to hijack, I'd think.

Bruce Leidl ('brl) works with Juliano, and just commented that this is a 100%-effective attack against any ASP.NET application's authentication.

It's OK to be skeptical about things (god knows I am), but it's important to be clear about where your actual knowledge ends. I think one thing that's confusing people is that the talk hasn't actually been presented yet.

Between your comments, brl's, and the JSF paper, its clear enough how this works now. Thanks.

I still suspect this is being overhyped. Being able to forge the auth cookie 100% of the time doesn't change the fact that you need a server-recognized value to actually forge.

This seems like it could indeed badly damage apps that stash important things in client state expecting it to be secure, but a lot of bog standard asp.net apps will be unscathed, unless I'm missing some killer detail. We'll see.

Edit: If the contents of a forms authentication cookie are not session-bound and assume that crypto == safe, that could certainly make things interesting. No idea though.

Edit2: Increasingly convinced that this could enable devastating attacks on forms auth (a large proportion of asp.net apps to be sure) - the content of a forms auth ticket appears to be trivially constructed.

Incorrect. OP is correct. The article is sensationalistic to a fault. Anyone upvoting yours and downvoting his does not understand either how ASP.NET works or didn't read the article or its comments. Sorry - not trying to be a jerk but it bugs me whenever incorrect information is upvoted, especially when it comes at the expense of downvoting the correct information.

The first thing you learn about error handling and security is to not send back error messages that have useful information that hackers could use for penetration; that is programming 101 stuff. Therefore, any decent ASP.NET app does not send back the error messages that the article talks about - the app devs hide those behind, "Oops! Sorry 404!" type stuff. It's super simple and done on almost all ASP.NET sites.

So yes, there is a vulnerability, but this is sort of like the SQL Server bug from 5+ years ago that depended on there being no password for the server admin account. Yes, certain people will be bitten, but by and large "professionals" will have covered for this already.

And no, "the generic error" is not all they need b/c the developer says "Oops 404!" as the generic error or "Oops Server Error 500". Nothing substantial and no information that would give away what happened or why.

This attack doesn't rely on reading information out of errors. You seem remarkably confident about your interpretation of a talk that hasn't been presented yet.

Was it your perception from this report that the flaw here was developers leaking crypto secrets in the text of their errors? You were wrong.

It isn't at all clear how getting the default generic YSOD is going to generate "more data" for the attacker by repetition, unless response time is the data of interest. Is that the case here?

They don't need timing. They don't need to sniff. HMAC doesn't solve the problem. They don't use detailed errors. You should probably read their JSF paper from earlier this year to see the attack pattern they're working with.

Here is a clue: they are using ASP.NET behavior to generate a one bit signal from the target. Their attack requires many tens of thousands of requests. That's about as much as I can say.

As somebody who is familiar with some details of the vulnerability, I can assure you that the error reporting configuration is irrelevant.

The HTTP response code is the only information needed for the attack.

200 vs. 500 vs. 404
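Sketched abstractly, that's all a one-bit signal takes — note the status mapping below is a guess for illustration, not the actual ASP.NET behavior:

```python
def padding_accepted(status):
    # Hypothetical mapping: 500 means the crypto layer threw (bad padding);
    # 200 or 404 means decryption got further before the app gave up.
    return status in (200, 404)

def first_accepted_guess(try_request):
    # try_request(guess) -> HTTP status code for a request carrying that guess.
    # Returns the first guess the oracle accepts, or None.
    for guess in range(256):
        if padding_accepted(try_request(guess)):
            return guess
    return None
```

Multiply a loop like this across every byte position and you get the "many tens of thousands of requests" mentioned above.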
