
OAuth 2.0 Hack Exposes 1B Mobile Apps to Account Hijacking - Jerry2
https://threatpost.com/oauth-2-0-hack-exposes-1-billion-mobile-apps-to-account-hijacking/121889/
======
ademarre
These are implementation flaws, not flaws of the OAuth or OpenID Connect
protocols themselves.

But the prevalence of implementation flaws indicates a bigger problem. We need
better tools and libraries for app developers. Too many people are rolling
their own ad hoc implementations, and they shouldn't have to do that.

Some think the spec is partially to blame. The core spec is deceptively
approachable, and implementers don't know how to apply it properly. And it's
only a primitive building block. On its own it's often not enough to be
useful. Even if you implement the core successfully, the glue around it might
be bad. That's why I believe we need a bigger ecosystem of vetted utilities
for every niche.

~~~
mSparks
This isn't an "implementation flaw". It's a fundamental flaw in the core logic.

> _"the attacker would then sign in, via OAuth 2.0, with their own credentials. The proxy would capture the outbound exchange and the attacker could then substitute their user ID with a victim’s (this would be obtained from public information)."_

~~~
ademarre
No, it is not a flaw in the OAuth core logic. From the paper's intro:

> _" the OAuth2.0 standard does not cover or define the critical security
> requirements and protocol details to govern the interactions between the
> 3rd-party (client-side) mobile app and its corresponding backend server
> during the SSO process. As a result, various IdPs have developed different
> home-brewed extensions..."_

~~~
mSparks
How does not defining the security protocols of your authentication protocol
not count as a flaw in the core logic?

An entry security product that doesn't define any security on entering...

Pure Genius.

~~~
tecmobowlbo
I believe that is one of the finer points the paper makes: OAuth2 is NOT an
authentication protocol, it's an authorization protocol.

~~~
mSparks
[https://m.youtube.com/watch?v=zKuFu19LgZA](https://m.youtube.com/watch?v=zKuFu19LgZA)

------
dperfect
Unfortunately, I've come across these kinds of issues first-hand in a number
of existing projects. It usually comes down to the fact that a surprising
number of back end implementations trust data from the client without
verification.
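The fix is for the back end to treat everything the app sends as untrusted and confirm it with the identity provider itself. A minimal sketch in Python, where `fetch_token_info` is a hypothetical stand-in for an HTTPS call to the IdP's token-info endpoint, and the field names (`audience`, `user_id`) and client ID are assumptions for illustration:

```python
# Hypothetical sketch: the backend confirms the token with the IdP instead of
# trusting any user ID the app sends. `fetch_token_info` stands in for an
# HTTPS call to the provider's token-info endpoint; the response field names
# here are assumptions, not any specific provider's API.

APP_CLIENT_ID = "my-client-id"  # made-up value for illustration

def resolve_user(access_token, claimed_user_id, fetch_token_info):
    """Return the authoritative user ID, or raise if the token is bogus."""
    info = fetch_token_info(access_token)  # server-to-IdP call, not client data
    if info.get("audience") != APP_CLIENT_ID:
        # A valid token issued to some *other* app must not grant access here.
        raise ValueError("token audience mismatch")
    verified_id = info["user_id"]  # comes from the IdP, never from the client
    if claimed_user_id is not None and claimed_user_id != verified_id:
        raise ValueError("client-claimed user ID does not match the token")
    return verified_id
```

The key property: the user ID the session is built from comes out of the server-to-IdP call, never out of the client's request body.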

Who's to blame? Well, the app / back end developers obviously, but I think it
goes beyond that. As we all know, client-side frameworks are all the rage
these days (receiving more attention than back end technologies, it seems),
and with the popularity of BaaS providers (Firebase, Parse Server, etc), front
end developers often feel over-confident in handling these things without
really understanding what's going on underneath (and the security
implications) because hey, "using a BaaS means I don't have to worry about
that kind of stuff."

Add to that the fact that so many of the popular OAuth providers support
"implicit grants" - where the access token is sent directly to the client -
without adequate warning as to the security implications involved. Personally,
I've yet to come across a case where the implicit flow is justified, and in my
opinion, it should be disabled for any security-conscious OAuth provider
(really, your fancy JS or mobile app can't communicate securely with a trusted
server _somewhere_?).

Finally, some OAuth providers (ahem, _Instagram_) issue access tokens which
_should_ be opaque to the client, but obviously contain user data (e.g., user
IDs) that tempt developers to take shortcuts. In one case, I saw an app that
only looks at the access token itself (never using it to verify anything),
parses it for a user ID (undocumented of course, but "it seems to work"), and
simply exchanges that ID for a session token! A forged response allows anyone
to impersonate any other user - simply with the Instagram ID of the target
user. The horror...

So yes, it does come down to developer incompetence, but my point is, the
popular OAuth providers certainly aren't helping the situation by encouraging
(or at least not discouraging) implicit OAuth grants, along with issuing not-
so-opaque tokens that encourage bad developer behavior.
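As a toy illustration of why that shortcut is fatal (the token format below is invented for this sketch, not Instagram's actual layout):

```python
# Anti-pattern sketch, NOT real provider code: the backend parses a "user ID"
# straight out of an attacker-supplied string and never verifies it with the
# provider, so any forged token is accepted. Token format is invented.

def insecure_session_for(token: str) -> dict:
    # BUG: `token` is entirely under the client's control, yet the ID parsed
    # from it is treated as authenticated identity.
    user_id = token.split(".")[0]
    return {"session_for": user_id}
```

Anyone who knows (or guesses) a victim's numeric ID can mint a "token" that logs them in as that victim.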

~~~
kemitche
> Add to that the fact that so many of the popular OAuth providers support
> "implicit grants" - where the access token is sent directly to the client -
> without adequate warning as to the security implications involved.
> Personally, I've yet to come across a case where the implicit flow is
> justified, and in my opinion, it should be disabled for any security-
> conscious OAuth provider (really, your fancy JS or mobile app can't
> communicate securely with a trusted server somewhere?).

If you can't trust the client _just enough_ to let it have a bearer token,
what can you trust it with to allow requests to any backend (yours or
facebook's)? Cookies or a token for your backend will be just as
compromisable, with the sole advantage that the attacker now can only work via
your API on behalf of that user.

Furthermore, if you take away implicit grants from the devs taking shortcuts -
guess what they'll do? They'll "encrypt" the client secret into their app,
pray that it's "good enough", and just emulate the code flow in the app.
You're now less secure, because the OAuth provider is going to assume that the
lack of implicit grant means "everyone is using a back end server", instead of
being able to put tokens and clients into the "confidential" and "non-
confidential" buckets.

Non-opaque tokens, I agree, are probably bad. However, opaque tokens don't
keep developers from taking shortcuts. Only well-written SDKs and libraries,
either generalized (for all OAuth) or specialized (facebook SDK) can do that.
I've spent enough time in reddit.com/r/redditdev to realize that a large
majority of devs won't take the time to do it right - many barely understand
how HTTP works, when they're first learning this stuff. There's so much to
learn and understand that no one is going to get it right without significant
help in the form of SDKs and libraries that just do the right thing by
default.

~~~
dperfect
Implicit grants are more dangerous for two reasons: (1) the process wherein
the token is sent to the client can have serious security implications
(especially for web apps), and (2) if your app communicates with any
app-specific back end, the transfer of auth tokens between the app and back
end introduces an additional step that is prone to even more security issues.

I do agree - if you have an app that is 100% client-side, and you're either
using an official SDK or know what you're doing, implicit grants can be done
securely (just as secure as any other bearer or session token, as you
mention). The problem is that many app developers choose the implicit flow
initially (because it's easier to implement), but then end up using it in ways
that would be much better suited to the authorization grant flow.
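For comparison, the server side of the authorization code grant (RFC 6749 §4.1) is not much code. A hedged sketch, with a hypothetical `post` callable standing in for the HTTPS request, and made-up endpoint and credentials:

```python
# Sketch of the backend half of the authorization code grant. The app only
# ever sees the short-lived code; the secret and the token exchange stay on
# the server. All identifiers and the token URL below are illustrative.

CLIENT_ID = "my-client-id"
CLIENT_SECRET = "server-side-secret"   # never shipped inside the app binary
REDIRECT_URI = "https://example.com/callback"

def exchange_code(code, post):
    """Trade an authorization code for an access token via the IdP."""
    resp = post(
        "https://idp.example/oauth/token",
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,  # proves this is our backend
        },
    )
    return resp["access_token"]
```

Because the token is minted in a server-to-server exchange, a proxy sitting on the device never sees it, which removes the substitution opportunity the paper describes.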

------
lstamour
Source (linked in post):
[https://www.blackhat.com/docs/eu-16/materials/eu-16-Yang-Signing-Into-Billion-Mobile-Apps-Effortlessly-With-OAuth20-wp.pdf](https://www.blackhat.com/docs/eu-16/materials/eu-16-Yang-Signing-Into-Billion-Mobile-Apps-Effortlessly-With-OAuth20-wp.pdf)

Only 15-20% of apps tested for Facebook/Google OAuth were vulnerable, perhaps
because platform-specific login code might be available for those providers
(so you don't have to roll your own OAuth client as an app developer).

Also, the requirement for performing an MITM with SSL certificate bypass again
raises the complexity of the attack. But it's true that your information is
only as secure as the browser/client you use to access it, and the security of
the API endpoints such clients talk to (and how they validate OAuth
credentials passed by the client).

~~~
idunno246
Does MITM on the attacker's device really increase the complexity? Setting up
one of the HTTP debugging proxies isn't that hard; all you need to do is
modify a response body from an HTTP request. I've done this plenty with
Charles to hack the loaded save state from games with no server-side
validation.

~~~
lstamour
Sure. Let's put it this way: an app that doesn't validate SSL, in such a way
that your login service can be impersonated, is... in some ways asking for
attack, right? In addition to exploiting lax OpenID Connect validation to
impersonate others, you could also steal credentials at that point, I would
think. So while they're illustrating a weakness in OAuth client
implementations, it relies on impersonating an SSL server, which should
already be mitigated by operating-system or app-based certificate pinning,
CRLs, etc., and as such isn't the root cause of the issue. This
indirection/complication is what I meant by complexity.

~~~
je42
It's not only an app's authentication that should run over TLS/SSL; the rest
of its communication should as well. Certificate pinning and the like should
be done anyway for all services the app uses.

A consistent, specified view of how to communicate securely over the network
would clearly help.

But maybe they should issue another version of OAuth2 that requires TLS for
all communication, and leave the legacy of non-encrypted traffic behind.
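A minimal pinning sketch in Python (the host and pin would be baked into the app; everything here is illustrative, and real apps typically pin a CA or SPKI hash rather than a leaf certificate, so rotation doesn't brick the client):

```python
import hashlib
import socket
import ssl

# Illustrative certificate-pinning sketch: on top of normal chain validation,
# compare the SHA-256 fingerprint of the server's DER-encoded certificate
# against a value compiled into the app. Host and pin are hypothetical.

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as lowercase hex."""
    return hashlib.sha256(cert_der).hexdigest()

def connect_pinned(host: str, pinned_sha256: str, port: int = 443):
    ctx = ssl.create_default_context()  # keep standard chain + hostname checks
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert(binary_form=True)
            if fingerprint(cert) != pinned_sha256:
                raise ssl.SSLError("certificate does not match pinned value")
            return True  # proxy with a forged cert can't reach this point
```

With a pin in place, the interception proxy used in the paper's attack can no longer impersonate the login service, even if its root CA has been installed on the device.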

------
misterbowfinger
I'm having trouble understanding the paper. Can someone break down which part
people are getting wrong? Is it that the `state` challenge isn't correctly
being matched server-side?

Separately - is there an app someone can run to "hack themselves" and see if
their OAuth2 implementation is safe?

~~~
throwaway2016a
Near as I can tell... the client app trusts the user ID sent back to it, so
you can log in with your own (the attacker's) username and password, then
replace the user ID in the returned value with your victim's, and many mobile
apps will let you continue, assuming you are the victim.
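A toy simulation of that flow (all names and formats invented):

```python
# Toy model of the attack: the app forwards the IdP response to its backend,
# the backend trusts the embedded user ID, and a proxying attacker swaps in
# the victim's (publicly discoverable) ID before it arrives.

def vulnerable_login(idp_response: dict) -> str:
    # BUG: this ID is never re-checked with the identity provider.
    return "session-for:" + idp_response["user_id"]

# The attacker signs in legitimately with their own credentials...
legit = {"user_id": "attacker-123", "token": "valid-token-for-attacker"}
# ...then the MITM proxy edits a single field in transit.
tampered = dict(legit, user_id="victim-456")
```

Feeding `tampered` to `vulnerable_login` yields a session for the victim, with nothing more than the victim's public ID.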

------
qiqitori
OAuth 1.0 is often implemented somewhat wrong too. I've only worked with
OAuth 1.0 so I can't speak for 2.0, but I think the main problem is that
social networks don't stick 100% to the specification. So you first have to
figure out what exactly is different (at which point the guy who originally
coded the thing I had inherited had already given up), and then you either
have to figure out how to get the library you're using to work anyway, or
just implement it yourself. All of these tasks have pitfalls.

------
pritambaral
So ... some third-party services implicitly trust the auth data their own apps
send them, without verifying the auth data with Facebook/Google/etc. servers
from their server-side code.

TL;DR: Some services forgot the "never trust client-sent data!" rule.

~~~
mSparks
I don't think they forgot it.

It was just never something they considered.

"My app is too small and insignificant, and not doing anything of much
importance, to be targeted by a nation state, so I don't need to care about
security."

That pervades the developer scene.

Blended with a spice of "nothing to hide, nothing to fear".

------
bglusman
This seems relevant...[1] I've implemented OAuth once or twice, but it's
painful, and I've been hesitant to recommend it as a login or security method
both because it's a pain, and because of this article's concerns.

[1] [https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/](https://hueniverse.com/2012/07/26/oauth-2-0-and-the-road-to-hell/)

~~~
je42
Actually, OAuth2 is straightforward to implement. The different flows all
have a purpose.

Obviously, when you're confronted with a protocol and implement it without a
library, you have to read and understand the spec.

I think the flows are fairly stripped down and do only the minimum needed to
fulfill their respective use cases.

Obviously the authorization-code flow is the most complex one. However, I
don't see how to make it simpler than it already is and still keep it secure.

A well-tested and reviewed library and usage guidelines should help to remove
possible security flaws.

I think that's the major area where the ecosystems of the different platforms
can improve. But that's not an issue of OAuth2; it's an issue of the
communities providing OAuth2 implementations.

The article you are referring to has quite strong expectations of the OAuth2
specification, which it cannot fulfill. Until there is a better
specification, I would stick to OAuth2.

I believe it is quite workable, and I believe it's the de facto standard for
authentication at this moment.

------
logicuce
Misleading title on the post. The hacks are possible because of insecure
implementations, not because of the protocol itself.

All three insecure implementations the paper talks about have been known for
a long time. Developers concerned about security are cautious about such
pitfalls.

------
snarfy
I believe it. All too often I've seen implementations of methods with names
like CheckAccess(), ValidateCert(), which the developer must implement, with
an implementation of

    return true; //TODO

------
arekkas
That's why I wrote [https://github.com/ory-am/hydra](https://github.com/ory-am/hydra)

~~~
homakov
Then consider implementing this OAuth, with security bugs fixed by default:
[http://sakurity.com/oauth](http://sakurity.com/oauth)

------
jlgaddis
My first thought is that it sounds like apps aren't verifying SSL
certificates? Is there more to it or is that the underlying issue?

~~~
jjnoakes
There is more to it. The paper linked from the article is enlightening.

------
homakov
A billion apps, omg :O

~~~
ThisIs_MyName
They're talking about accounts, but yes, there is a _lot_ of shovelware.

