As long as all clients are equal when using the API, this might go well (minus some malicious clients), but once some clients start to be more equal than others - even more so once the service itself starts being a real jerk - the whole system will fall down.
What we see here is Twitter's secrets leaking out (though remember: that's more or less public data, as it's technically impossible to hide that info - the server has to know it) as a consequence of them being jerks and giving their own client preferential access.
What does this mean? For now, probably not much as I can imagine the bigger third-party-clients want to behave.
It might however make Twitter reconsider their policies.
If not, this is the beginning of a long cat-and-mouse game: Twitter updating their keys and using heuristics to recognize their own client, followed by third-party Twitter clients providing a way to change the client secret.
Though one thing is clear: Twitter will lose this game as the client secret has to be presented to the server.
Using SSL and certificate pinning, they can protect the secret from network monitors, but the secret can still be extracted from the client - at which point they might encrypt it in the client, at which point the attackers will disassemble the client and extract the key anyway.
It remains to be seen how far Twitter is willing to go in playing that game.
Even if the keys don't leak out, as long as Twitter allows their users to create API clients, an editable client secret is a way for any Twitter client to remain fully usable.
Essentially it's impossible to differentiate Twitter's own client on an untrusted platform, since it will always be possible for a 'malicious' client to behave exactly like Twitter's own client.
As such, OAuth on the desktop IMHO is not much more than snake oil: it does nothing aside from increasing the complexity for the implementer while providing next to zero additional security.
First is client authentication: OAuth tries to authenticate clients as well as users. On a desktop it's not possible to authenticate clients because whatever they do, the information can be extracted and simulated by a malicious client.
The other thing is that by now many desktop OAuth clients embed a webview for the authentication handshake. There is no way for the user to be sure that they are typing their credentials into the site they think they are. There is no browser chrome, there is usually no URL bar, and even if there were, there is zero trust that it is actually showing the correct URL.
Worse: How many client apps are actually going through the trouble of checking SSL certificates (or even that SSL is on)?
Embedding a webview for an OAuth handshake provides (to the user) no additional security compared to just showing a username/password dialog.
The only way I see this actually working is if the application opens the default browser for the handshake. But of course that will show a big-ass security warning when redirecting back to the local URL protocol.
In consequence this means that the only method by which OAuth on the desktop could provide additional security is the one method that presents a security warning to the user. How ironic.
Worse yet, the webview is terrible UX. So not only is there no security advantage, but you also directly harm the user experience.
If I compromise the keytab, I can impersonate the domain member server and presumably use the active tickets... but the username/password stays on the KDC/DC.
The client is required to download a piece of code from the server to compute a result given a challenge from the server. You can embed all sorts of logic to validate the client per connection! Think of this as using a virus for good :)
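A minimal sketch of that idea, with entirely illustrative names and logic (and note that exec'ing server-supplied code has serious security implications of its own): the server ships a fresh piece of validation code per connection, and the client must run it against a server-issued challenge and return the result.

```python
import hashlib

# Hypothetical code the server could send for this particular connection;
# a real server would vary the logic and the embedded nonce per session.
SERVER_SUPPLIED_CODE = """
def respond(challenge):
    import hashlib
    # e.g. hash the challenge together with a per-connection nonce
    return hashlib.sha256(b'conn-nonce:' + challenge).hexdigest()
"""

def run_server_check(code: str, challenge: bytes) -> str:
    """Execute the downloaded validation logic and return its answer."""
    namespace = {}
    exec(code, namespace)  # the client runs whatever the server sent
    return namespace["respond"](challenge)

print(run_server_check(SERVER_SUPPLIED_CODE, b"abc123"))
```

Of course, as other comments point out, a sufficiently determined fake client can simply run the same downloaded code, so this raises the bar rather than solving the problem.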
 I'm getting good at inventing 4 letter security acronyms :)
And yet, it's still possible to rip Blu-Ray disks. DRM doesn't work.
I guess Twitter could start banning users who are detected running unauthorized clients (like Xbox Live), but that'll get really ugly PR-wise.
Twitter doesn't need to ban unauthorized clients. They can use the detected client info for QoS to give preferential treatment to their own client and render the rest in the "best effort" service class, which could make unauthorized clients appear to work but unreliably.
The place where the client id and secret actually offer real security is in the hosted scenario, where the secret is never distributed outside a trusted environment. Anyone who tells you different is wrong. The same applies to every single copy protection scheme (SSL/TLS, HDCP, DVD Region Coding, etc, etc, etc) and barring some mathematical breakthrough, this will always be true.
That's not really true, Twitter just has a broken model where they want to authenticate the application as well as the user...
The rest of your comment is quite spot on, though. This is going to be a cat and mouse game for Twitter and I'm not sure they can win.
No, it is still true. There's no point, at all, to OAuth on a native client.
Native clients can:
1) Present fake browser chrome
2) Present no browser chrome
3) Inject their own scripts into the webview to acquire the username/password
4) Extract the browser username/password from the keychain
... and so on.
Enforced OAuth is simply a case of web developers foisting complexity onto native developers because they don't appreciate the goals and priorities in the native app problem domain.
This way if the device on which the native client lives is compromised, the attacker won't be able to obtain the cleartext password, unless it was compromised during the OAuth login process. Sure, the attacker will get the token, but they won't be able to get the cleartext password and log into other unrelated services with it.
Why does this matter?
If you compromise somebody's desktop or phone, you're in an extremely privileged position. This means that you can spy on their authentication processes while they are occurring. Including processes that occur on the web.
You can also read the user's keychain, containing the saved passwords for the OAuth service that you're trying to protect.
* That all applications that run unattended store the user's service password?
* That the user change the app password on the 2 desktops and 3 tabs/phones they use because they rotated their service password?
* That users should NOT be able to revoke or change an application's access rights without changing their password?
* That applications *shouldn't* request the minimum privileges they need to do their job?
* That it is NOT a good thing for a user to be able to log into their service provider and check what applications are connected with the service?
> If you compromise somebody's desktop or phone...
This is my day job. After compromise, the first thing is the "smash and grab" where you loot as many credentials as possible off the machine with a bunch of scripts. If there's stuff in a browser's built-in password manager, win. If your app stores a password, win.
You're assuming that all failure modes are alike.
You seem to be thinking of applications as if they were web sites.
> After compromise, the first thing is the "smash and grab" where you loot as many credentials as possible off the machine with a bunch of scripts. If there's stuff in a browser's built-in password manager, win. If your app stores a password, win.
So they smash and grab the keychain, which holds the master passwords to the services granting OAuth tokens.
You might have me vaguely convinced if you were advocating for XAuth+OAuth, in which case it would provide a bare modicum of additional security without negatively impacting user-experience at all.
I hadn't run into xAuth before. It looks like a reasonable way to make the grant process not such a Desktop UI cluster$%$^ - which I totally agree that it is.
Re: the keychain thing, again, it depends on the level of compromise, whether the machine is unlocked, what platform you're on and whether the creds are in there.
And yes, with enough access you can keylog, man-in-the-browser or "desktop phish", but it's higher risk for the attacker and requires maintaining longer-term access.
But if your app stores passwords the user instantly loses.
Failure modes matter. A kind of parody version of how I see your argument is: "Sites shouldn't bother hashing passwords, because an attacker with a shell could just modify the site's login process to capture passwords".
But if the attacker only has access to the filesystem, then it certainly matters whether the filesystem contains enough information to impersonate a user.
And there are things that people do to desktops that allow other people to access their filesystems. Things like cloud backup, sending your computer in for repairs, or selling an old hard drive. Sure, you should remove the hard drive or wipe it before you do any of these things, but a lot of people...don't. There are plenty of stories about that situation floating around the interwebs, but this comment is long enough...
Also, because the access token for each app is unique, you can grant different account permissions to your apps so you can in principle limit their access to your account.
Also there is more accountability. If an access token gets leaked or misused you at least have a chance of figuring out where it came from.
OAuth is usually a better choice for API authentication than username/password, even for desktop apps.
If you're worried about genuinely nefarious applications, OAuth ought to be the least of your worries, because it's not going to protect you.
As far as I understand it, you're right, except when OAuth is the only authentication mechanism provided and you want to, you know, develop a desktop app.
Of course, providing OAuth as the only authentication mechanism appears then to be the problem. But that is not a problem application writers can solve; it seems to be a problem that OAuth deployers have to deal with.
Can you elaborate? As far as I understand how it works, this is the case by design in OAuth.
This would be a pain for these clients, but if Twitter really wanted to do this they could probably justify it as the price you have to pay for preferential access.
Edit: It just occurred to me that this might just be kicking the can down the road: you would end up with same problem authenticating with proxy server. It would have to be something along the lines where the proxy server itself has preferential access, but the clients to the proxy server do not (e.g. they are rate-limited by the proxy or something.)
As to why they want these keys to be basically 'public', I'm not sure yet...
This has been true of client/server apps for a very long time, well predating any particular protocol. I'd be sincerely interested in any solutions that people come up with that don't depend on additional extrinsic platform capabilities.
Push, in my mind, makes way more sense for mobile: it benefits from code signing, known users, known devices, and an essentially private out-of-band network for push messaging. Much of this exists because of the app publishing model in play. You can trust the binary because the developer is known via a developer code-signing certificate. The user is known because they had to create an account in the platform's app store. You can trust the device (pretty sure of this) because of the unique device ID. Incidentally, this process could be achieved with any out-of-band communication - a whisper in the ear, a note delivered by carrier pigeon, an email, whatever; push just makes it more user-friendly because it's directly tied into the app making the request, through a service managed by the OS.
This system is basically a bearer token with an out-of-band delivery mechanism.
There is a "distributed authentication service" in this flow. This is essentially a central registry of all apps and services that use this auth strategy (I'm thinking something like a certificate signing authority is needed). The apps and services are catalogued, and users can see security warnings from this service before they grant access to an app or service.

Have you ever wondered what happens if a trusted company goes out of business, the app gets bought by the mafia, and an update goes out making it a malicious app? OK, you surely have not, but a company going out of business, a blog post about a security breach - things like this are threat indicators and should play into the system. How DO you know if it is safe to grant access to an app or service you are new to? The only recourse you have is to revoke access after you find out about the problem. This is not good enough. I think there should be something like community awareness of security issues with apps and services that can alert users of said issues, and that should be integrated into the auth process.

This service provides a management console for users to enable and disable application access to specific user data or other third-party services, and to audit activity by the user, their apps, and third-party services. The management console also has the concept of levels of security: if the code has been audited and is from a highly reputable source, it is put in a certain group; if it is a new app from a new company, it would not be graded as highly. We have something like this with green-bar SSL certs, and banks and credit card companies do this kind of risk grading all the time. If we want an auth system to be worthy of our efforts, then it should be worthy of banking and e-commerce: we should be serious about preempting, identifying, and responding to risk throughout the auth process, not just letting people revoke access tokens when and IF they find out there was a problem.
The MSN Messenger team added America Online chat support to the Messenger client. AOL didn't like that and tried a variety of approaches to reject Messenger. The protocol was undocumented, so there were lots of tricks they could play. At one point they went (IMHO) a bit too far: they deliberately exploited a buffer overflow in their own client!
One person's contemporaneous summary: http://www.geoffchappell.com/notes/security/aim/index.htm
This has been discussed here before: http://news.ycombinator.com/item?id=4411696
But this is hypothetical. In reality there is little motivation for apps in an App Store-like environment, which survive on customer goodwill, to want to do this.
The user's security is probably not why Twitter chose OAuth.
Would anybody ever find out though?
It's clear that the App Store does no real checking of the apps they accept. Since the GUID use was banned, everyone has just switched to the just-as-unique MAC address to identify devices.
OAuth, having solved this problem, then goes on to create many more.
The question is, given that your key/secret will be compromised, is there any point in even having it in the OAuth flow?
It would be a drive-by hijack on the web because there's a good chance you're already authenticated with Twitter and the callback cycle will automatically grant credentials on your behalf to the requester with no prompt.
However, Twitter also requires users to authorize the application each time an application requests an OAuth token, so the possibility of using these keys for hijacking is limited (although it might be possible to use them to make a phishing attempt look more authentic).
Since these keys were lifted from an application that does out-of-band auth flows, any other app could use them similarly at will.
Of course, you still have to log in as a user, and Twitter could blacklist accounts that use this key on non-Twitter apps, which are going to have a lot of 'tells' and a specific signature in patterns of how they use the API.
(Twitter could even take advantage of that by hiding a code in a usage pattern, kind of like the POW who blinked in Morse code when he was put on TV)
In at least one way, yes. New third-party Twitter clients are limited to 100k users, but Twitter's official clients are unlimited. If those clients built in a "use your own authentication token" UI, you could put your official client's tokens in and work around that limit.
I don't know about API quotas, but I'm totally sure that they allow more than 100K tokens.
The Chrome app Hotot too. https://chrome.google.com/webstore/detail/hotot/cnfkkfleeioo...
It's possible, but I don't think it is at all an easy call.
Not true. Say you have a malicious Twitter client app that posts "Lose Weight In 30 days! <link>." Normally, Twitter could shut this offending app down by rejecting their client ID/secret; if they're using the official Twitter creds though, doing so would shut down all official Twitter apps in the process.
Anyone: what is best practice here (Android and/or iOS)?
Storing application secrets in Android's credential storage. I have no idea how secure this actually is.
Should I obfuscate OAuth consumer secret stored by Android app? 
Then run `strings' on a virtual memory image of the offending process. Same difference.
If the client just sends the secret as part of an authentication request, then a proxy would reveal it. But if some form of challenge/response process is used, where the value sent is derived from the secret and an unpredictable challenge sent by the remote service, then as far as I know a proxy wouldn't help.
I don't know enough about the details of how the Twitter/Dropbox/etc. APIs work to know if they use challenge-response.
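To make the distinction concrete, here's a hedged sketch of a generic HMAC-based challenge/response exchange (not any of these services' actual protocols; all names and values are illustrative). The secret itself never crosses the wire - only an HMAC over a server-chosen nonce - so a passive proxy sees nothing reusable:

```python
import hashlib
import hmac
import os

# Illustrative shared secret; in the scenario discussed above this would be
# the client secret embedded in the app (and known to the server).
CLIENT_SECRET = b"embedded-app-secret"

def server_issue_challenge() -> bytes:
    """Server picks an unpredictable per-request nonce."""
    return os.urandom(16)

def client_respond(challenge: bytes) -> str:
    """Client proves knowledge of the secret without transmitting it."""
    return hmac.new(CLIENT_SECRET, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge: bytes, response: str) -> bool:
    """Server recomputes the expected response and compares."""
    expected = hmac.new(CLIENT_SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = server_issue_challenge()
assert server_verify(challenge, client_respond(challenge))
```

Note this only protects against eavesdroppers; it does nothing against the thread's main point, since anyone who extracts the secret from the client binary can compute valid responses too.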
For what happens in the real world, see Georgiev et al.'s "The most dangerous code in the world" at https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-... (spoiler: I described this paper in our internal knowledgebase as "very readable. Promises lots of facepalming and delivers in spades.")
One simple solution is to set N-1 arrays to random data (hardcoded or generated at compile time) and set the last array to the real secret XOR random array #1 XOR random array #2 XOR ... XOR random array #N-1; this doesn't exactly stop a determined attacker, but it does stop "strings".
It is helpful as a way of ensuring random applications don't get hold of the data, but not for keeping the data from a determined user.
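The N-array XOR trick described above can be sketched like this (a toy illustration, not a recommendation - as noted, it only defeats `strings`, not a determined attacker):

```python
import os
from functools import reduce

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Produce n arrays: n-1 random ones, plus one chosen so that
    XOR-ing all n together yields the real secret."""
    parts = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, bs)
                 for bs in zip(secret, *parts))
    return parts + [last]

def join_secret(parts: list[bytes]) -> bytes:
    """Reassemble the secret at runtime by XOR-ing all arrays."""
    return bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(*parts))

SECRET = b"real-client-secret"   # illustrative value
parts = split_secret(SECRET, 4)  # these are what you'd hardcode
assert join_secret(parts) == SECRET
```

Since none of the stored arrays equals the secret, a naive scan of the binary finds nothing, but anyone who runs the program under a debugger sees the reassembled value immediately.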
Twitter could make some way for the official client to fetch new keys from a server without a binary update, but then they'd have to somehow protect that mechanism from third parties...
I suppose the next logical step would be to procedurally generate keys based on the date, and have only the algorithm (not the keys themselves) known to the official client. Not in any way insurmountable, but a little more difficult to crack.
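One way such date-based key derivation might look, as a hedged sketch (all names are made up): both the official client and the server derive today's key from a shared master secret and the date, so no per-day key ships in the binary - though of course the algorithm and master secret can still be extracted, which is exactly the arms race described above.

```python
import hashlib
import hmac
from datetime import date

# Illustrative master secret baked into both the client and the server.
MASTER_SECRET = b"baked-into-client-and-server"

def key_for_day(day: date) -> str:
    """Derive the API key for a given day from the shared master secret."""
    msg = day.isoformat().encode()  # e.g. b"2013-03-07"
    return hmac.new(MASTER_SECRET, msg, hashlib.sha256).hexdigest()

# Client and server independently compute the same key for the same day.
assert key_for_day(date(2013, 3, 7)) == key_for_day(date(2013, 3, 7))
```

The server would accept yesterday's and today's keys around midnight to tolerate clock skew; that detail is omitted here.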