
Consumer key/secret for official Twitter clients - matsuu
https://gist.github.com/re4k/3878505
======
pilif
And with this, one of the huge flaws of OAuth comes into play. OAuth just
doesn't work with locally installed applications, as it's impossible to hide
anything there, yet OAuth strongly relies on the client having some secret
knowledge (the client token).

As long as all clients are equal when using the API, this might go well (minus
some malicious clients), but once some clients start to be more equal than
others - even more so as the service starts to act like real jerks - the whole
system falls down.

What we see here is Twitter's secrets leaking out (though remember: that's
more or less public data, as it's technically _impossible_ to hide that info -
the server has to know it) due to them being jerks and giving their own client
preferential access.

What does this mean? For now, probably not much as I can imagine the bigger
third-party-clients want to behave.

It might however make Twitter reconsider their policies.

If not, this is the beginning of a long cat-and-mouse game: Twitter updating
their keys and using heuristics to recognize their own client, followed by
Twitter clients providing a way to change the client secret[1].

One thing is clear, though: Twitter will lose this game, as the client secret
has to be present in the client in order to talk to the server.

Using SSL and certificate pinning, they can protect the secret from network
monitors, but the secret can still be extracted from the client. They might
then encrypt it in the client, at which point attackers will disassemble the
client and extract the key anyway.

It remains to be seen how far Twitter is willing to go playing that game.

[1] Even if the keys don't leak out, as long as Twitter allows their users to
create API clients, an editable client secret is a way for _any_ Twitter
client to remain fully usable.
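To make the "the secret has to be in the client" point concrete, here is a
minimal sketch (Python, standard library only) of how an OAuth 1.0a
HMAC-SHA1 signature is computed. The parameter values are placeholders, but
the structure shows why every distributed client must carry the consumer
secret: it is part of the signing key for every request.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a HMAC-SHA1 signature (simplified sketch)."""
    # Percent-encode and sort the parameters, per the OAuth 1.0a spec.
    enc = lambda s: quote(str(s), safe="~")
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(params.items()))
    base_string = "&".join([method.upper(), enc(url), enc(param_str)])
    # The signing key concatenates both secrets -- so the consumer secret
    # has to be present in any client that signs requests.
    signing_key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(signing_key.encode(), base_string.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request(
    "POST", "https://api.twitter.com/oauth/request_token",
    {"oauth_consumer_key": "placeholder_key", "oauth_nonce": "abc123",
     "oauth_timestamp": "1362100000"},
    consumer_secret="placeholder_secret")
```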

~~~
mtrimpe
It's not really fair to call this an OAuth flaw, since it's just another
instance of the 'trusted client' problem in security.

Essentially, it's impossible to differentiate Twitter's own client on an
untrusted platform, since it will always be possible for a 'malicious' client
to behave exactly like Twitter's own client.

~~~
pilif
IMHO OAuth doesn't work for desktop applications because all the aspects where
it tries to provide more security than traditional username/password
authentication are easily circumvented on the desktop.

As such, OAuth on the desktop is IMHO not much more than snake oil: it does
nothing aside from increasing the complexity for the implementer while
providing next to zero additional security.

The first issue is client authentication: OAuth tries to authenticate clients
as well as users. On a desktop it's not possible to authenticate clients,
because whatever they do, the information can be extracted and simulated by a
malicious client.

The other thing is that by now many desktop OAuth clients embed a webview for
the authentication handshake. There is no way for the user to be sure that
they are typing their credentials into the site they think they are. There is
no browser chrome, there is usually no URL bar and even if there was, there is
zero trust that the URL is actually showing the correct URL.

Worse: how many client apps actually go to the trouble of checking SSL
certificates (or even checking that SSL is on)?

Embedding a webview for an OAuth handshake provides (to the user) no
additional security compared to just showing a username/password dialog.

The only way I see this actually working is if the application opens the
default browser for the handshake. But of course that will show a big-ass
security warning when redirecting back to the local URL protocol.

In consequence this means that the only method by which OAuth on the desktop
could provide additional security is the one method that presents a security
warning to the user. How ironic.

~~~
Spooky23
That's why we have Kerberos.

~~~
vicaya
No Kerberos won't help in this case. Once you have the keytab you can
impersonate the client. Kerberos doesn't solve the "trusted client" issue.

~~~
Spooky23
It doesn't help in the Twitter client use case, but it will help in the
user/password compromise scenario described in the parent comment.

If I compromise the keytab, I can impersonate the domain member server and
presumably reuse the active tickets... but the username/password stays on the
KDC/DC.

------
dewitt
For some reason several of the commenters here are explaining this away as a
protocol bug (specifically with OAuth), but the challenge isn't protocol-
specific at all. Rather, it's a hardship with all client/server apps,
specifically in that trusting any client requires additional support from the
platform (self-assertion or possession of a secret by the client alone is
insufficient), and even then it's a known hard problem.

This has been true of client/server apps for a very long time, well predating
any particular protocol. I'd be sincerely interested in any solutions that
people come up with that don't depend on additional extrinsic platform
capabilities.

~~~
lootsauce
I have a workaround for this. Is it mathematically provable that it is more
secure? I don't know, but I believe it to be much better in many respects for
native apps. It essentially leverages push notifications to deliver bearer
tokens.

Push, in my mind, makes way more sense for mobile, and it benefits from code
signing, known users, known devices, and an essentially private out-of-band
network for push messaging; much of this exists because of the app publishing
model in play. You can trust the binary because the developer is known via a
developer code-signing certificate. The user is known because they had to
create an account in the platform's app store. You can trust the device
(pretty sure of this) because of the unique device ID. Incidentally, this
process could be achieved with any out-of-band communication - a whisper in
the ear, a note delivered by carrier pigeon, an email, whatever; push just
makes it more user-friendly because it's directly tied into the app making
the request, through a service managed by the OS. This system is basically a
bearer token with an out-of-band delivery mechanism.
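A toy sketch of that delivery idea, with the platform push service stood in
by a plain dict; every name here is invented for illustration, not a real
API:

```python
import secrets

# The service never returns the token in the direct response; it hands the
# token to a push channel keyed by device, and the app only learns it
# through that out-of-band channel.
PUSH_INBOX = {}

def push_send(device_id, message):
    """Stand-in for the platform's push-notification service."""
    PUSH_INBOX.setdefault(device_id, []).append(message)

def authorize(user, device_id):
    """Issue a bearer token, delivering it only out of band via 'push'."""
    token = secrets.token_urlsafe(32)
    # ...record (user, device_id, token) server-side for later auditing...
    push_send(device_id, {"bearer_token": token})
    return "ok: token sent out of band"  # nothing secret in this response

response = authorize("example_user", "device-123")
token = PUSH_INBOX["device-123"][0]["bearer_token"]
```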

There is a "distributed authentication service" in this flow. This is
essentially a central registry of all apps and services that use this auth
strategy (I'm thinking something like a certificate signing authority is
needed). The apps and services are catalogued, and users can see security
warnings from this service before they grant access to an app or service.

Have you ever wondered what happens if a trusted company goes out of
business, the app gets bought by the mafia, and an update goes out making it
a malicious app? OK, you surely have not, but a company going out of
business, a blog post about a security breach - things like this are threat
indicators and should play into the system. How do you know if it is safe to
grant access to an app or service you are new to? The only recourse you have
is to revoke access after you find out about the problem. This is not good
enough. I think there should be something like community awareness of
security issues with apps and services that can alert users of said issues,
and that should be integrated into the auth process.

This service provides a management console for users to enable and disable
application access to specific user data or other third-party services, and
to audit activity by the user, their apps, and third-party services. The
console also has the concept of levels of security: if the code has been
audited and is from a highly reputable source, it is put in a certain group;
if it is a new app from a new company, it would not be graded as highly. We
have something like this with green-bar SSL certs, and banks and credit card
companies do this kind of risk grading all the time. If we want an auth
system to be worthy of our efforts, then it should be worthy of banking and
e-commerce: we should be serious about preempting, identifying, and
responding to risk throughout the auth process, not just let people revoke
access tokens when and if they find out there was a problem.

------
wrs
This Twitter client situation reminded me of some ancient (1999) history, so
in case you're wondering how far companies will go to try to enforce a
theoretically-impossible preference for one client of their service...

The MSN Messenger team added America Online chat support to the Messenger
client. AOL didn't like that and tried a variety of approaches to reject
Messenger. The protocol was undocumented, so there were lots of tricks they
could play. At one point they went (IMHO) a bit too far: they deliberately
exploited a buffer overflow in their own client!

One person's contemporaneous summary:
<http://www.geoffchappell.com/notes/security/aim/index.htm>

------
mathias
If you ship a binary to a person’s computer and that binary has a secret
embedded in it, that secret will eventually be discovered.

This has been discussed here before:
<http://news.ycombinator.com/item?id=4411696>

------
zacharyvoase
Something I've been pointing out about OAuth for _ever_ is that it's a method
for delegating authorization to agents who wish to act on behalf of the user.
When it is the actual user him/herself who is acting, there's nothing wrong
(and a lot of things right) with username/password authentication.

~~~
TazeTSchnitzel
OAuth prevents apps from nabbing passwords, though.

~~~
dmdeller
If the app is displaying its own internal web view for OAuth, it can load any
page it wants in there and tell the user it's Twitter. It can even fake an
address bar with a twitter.com URL if it wants. Then use the common phishing
technique of 'oops, you must have entered your password wrong' (the user
didn't, but now the phisher has it), followed by a forward to the real site so
the user suspects nothing.

But this is hypothetical. In reality there is little motivation for apps in an
App Store-like environment, which survive on customer goodwill, to want to do
this.

The user's security is probably not why Twitter chose OAuth.

~~~
nwh
> But this is hypothetical. In reality there is little motivation for apps in
> an App Store-like environment, which survive on customer goodwill, to want
> to do this.

Would anybody ever find out though?

It's clear that the App Store does no real checking of the apps it accepts.
Since the use of GUIDs was banned, everyone has just switched to the just-as-
unique MAC address to identify devices.

------
rmccue
I use OAuth for an application written in PHP, and as such, there's no
possible way to trust the client/secret, given that the source is not
obfuscated in any way. This application talks to my own server, and the OAuth
flow is basically just a way to avoid storing username/password combinations.
The client key/secret have to be treated as permanently compromised, so the
only thing I use those for is version usage statistics.

The question is, given that your key/secret _will_ be compromised, is there
any point in even having it in the OAuth flow?

------
pdknsk
Interestingly, the keys were posted 5 months ago.

<https://gist.github.com/re4k/3878505/revisions>

------
Kudos
For people who think this is going to cause drive-by Twitter hijacks, remember
that Twitter stores the callback URL on their side for this very reason. Any
web app impersonating these apps will fail at the callback stage.

~~~
zwily
The client can intercept whatever URL the embedded webview is redirected to.
The callback URL provides no security against this.

~~~
Kudos
That's not a drive-by hijack; all bets are off when it comes to apps. For all
you know, the app is presenting a fake login dialog.

It would be a drive-by hijack on the web because there's a good chance you're
already authenticated with Twitter and the callback cycle will automatically
grant credentials on your behalf to the requester with no prompt.

~~~
zwily
Right - we were discussing OAuth in the context of client apps.

~~~
Kudos
I'm pretty sure I was setting the context, and that context was the web.

~~~
zwily
Fair enough. I was referring to the larger discussion.

------
jamesbrennan
I just tested the iPhone key/secret using the script here [1] and it worked
perfectly. I'm assuming it'll probably bump my actual iPhone client off,
though.

[1] <https://gist.github.com/tcr/5108489/download#>

------
homakov
It wouldn't be dangerous if OAuth 1/2 followed my advice (a static
redirect_uri):

<http://homakov.blogspot.com/2013/03/oauth1-oauth2-oauth.html>

------
saghul
Here is an interesting talk on OAuth by its creator:
<http://2012.realtimeconf.com/video/eran-hammer>

------
aerolite
What's wrong with making the oauth_callback parameter not override whatever
you put in Twitter? Wouldn't that fix the problem?

~~~
fastest963
Yes, but it would break a lot of other apps that happen to use different
domains/pages for different contexts (mobile, desktop, etc.).

------
galactus
Does Twitter honor the oauth_callback parameter? Otherwise, how can those
keys be used by an attacker?

~~~
cheald
oauth_callback is only interesting for web app authorization. For out-of-band
authorization flows, you can't protect it with a callback filter.

Since these keys were lifted from an application that does out-of-band auth
flows, any other app could use them similarly at will.

~~~
abraham
The Twitter apps don't use the OOB flow. They use xAuth password exchange
"flow".
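For reference, xAuth (since retired by Twitter) let approved consumers
exchange a username/password directly for access tokens by adding a few
parameters to an otherwise ordinary signed request to the access-token
endpoint. A sketch of just those parameters, with placeholder values and no
request actually made:

```python
# Endpoint the historical xAuth exchange posted to.
XAUTH_ENDPOINT = "https://api.twitter.com/oauth/access_token"

def xauth_params(username, password):
    # These three parameters rode alongside the usual oauth_* parameters,
    # and the whole request was still signed with the consumer key/secret --
    # which is exactly why leaked official secrets matter for this flow.
    return {
        "x_auth_mode": "client_auth",
        "x_auth_username": username,
        "x_auth_password": password,
    }

params = xauth_params("example_user", "example_password")
```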

~~~
arthulia
Nope, untrue. Read the description for oauth_callback.

<https://dev.twitter.com/docs/api/1/post/oauth/request_token>

------
feydr
If you find this sort of 'research' fun and/or you find this sort of stuff to
be the norm rather than the exception ;), you should check out
<https://www.appthority.com/careers>

------
lukeholder
uh, this is not good. Why would someone post that under their own github
account?

~~~
cmelbye
It's more or less public knowledge. You can find it yourself by running
"strings" on the Twitter app binary. Any attempts on Twitter's part to limit
the disclosure of these tokens would almost certainly invoke the Streisand
Effect.
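The `strings` approach needs no special tooling; a rough Python stand-in, run
here on a made-up blob rather than a real app binary:

```python
import re

def printable_strings(data: bytes, min_len: int = 8):
    """Rough Python equivalent of the Unix `strings` tool: return runs of
    printable ASCII at least `min_len` bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Made-up binary blob; an embedded consumer key surfaces immediately.
blob = b"\x00\x01junk!\xffFakeConsumerKey12345\x00--end--"
found = printable_strings(blob)
```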

~~~
lukeholder
Couldn't another app use these tokens and take advantage of lax API limits?

~~~
jasiek
I think Apple will simply not permit applications that use these keys but are
not official clients into the App Store. Looks like something that would be
pretty easy to automate.

~~~
codeka
I'm not sure why Apple would play police for Twitter, though.

~~~
cryptoz
Isn't Twitter integrated into Apple's mobile operating system? Such a tight
partnership is plenty of reason for them to play police for Twitter.

~~~
fpgeek
Yes and no. Apple wants to protect their Twitter partnership, but... Apple
knows that there aren't any effective police in the park next door. So the
question is whether Apple values their Twitter relationship enough that
they're willing to cede most of the future energy and enthusiasm around third-
party Twitter clients to Android.

It's possible, but I don't think it is at all an easy call.

