Consumer key/secret for official Twitter clients (gist.github.com)
244 points by matsuu on March 7, 2013 | hide | past | favorite | 98 comments

And with this, one of the huge flaws of OAuth comes into play. OAuth just doesn't work with locally installed applications, as it's impossible to hide anything there, yet OAuth strongly relies on the client having some secret knowledge (the client secret).

As long as all clients are equal when using the API, this might go well (minus some malicious clients), but once some clients start to be more equal than others - even more so when the service starts acting like real jerks - then the whole system falls down.

What we see here is Twitter's secrets leaking out (though remember: that's more or less public data, as it's technically impossible to hide that info - the server has to know it) due to them being jerks and giving their own client preferential access.

What does this mean? For now, probably not much as I can imagine the bigger third-party-clients want to behave.

It might however make Twitter reconsider their policies.

If not, this is the beginning of a long cat and mouse game of twitter updating their keys and using heuristics to recognize their own client followed by twitter clients providing a way to change the client secret[1].

Though one thing is clear: Twitter will lose this game as the client secret has to be presented to the server.

Using SSL and certificate pinning, they can protect the secret from network monitors, but the secret can still be extracted from the client itself. At that point they might encrypt it in the client, at which point attackers will disassemble the client and extract the key anyway.
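Certificate pinning boils down to comparing the certificate the server presents against a fingerprint shipped inside the client. A minimal sketch (the fingerprint and certificate bytes here are placeholders, not Twitter's real values):

```python
import hashlib

# Hypothetical baked-in pin: SHA-256 fingerprint of the server's DER certificate
PINNED_FINGERPRINT = hashlib.sha256(b"example-der-certificate").hexdigest()

def pin_matches(der_cert: bytes, pinned: str = PINNED_FINGERPRINT) -> bool:
    # Reject any TLS peer whose certificate doesn't match the shipped pin,
    # which defeats a generic MITM proxy presenting its own CA-signed cert
    return hashlib.sha256(der_cert).hexdigest() == pinned

# In a real client you'd check the handshake result, e.g.:
#   der = tls_sock.getpeercert(binary_form=True)
#   assert pin_matches(der)
```

Which, as the comment says, only moves the problem: the pin check and the secret both live in the binary, where an attacker with a disassembler can reach them.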

It remains to be seen how far twitter is willing to go playing that game.

[1] even if the keys don't leak out, as long as twitter allows their users to create API clients, an editable client secret is a way for any twitter client to remain fully usable

It's not really fair to call this an OAuth flaw, since it's just another instance of the 'trusted client' problem in security.

Essentially it's impossible to differentiate Twitter's own client on an untrusted platform, since it will always be possible for a 'malicious' client to behave exactly like Twitter's own client.

IMHO OAuth doesn't work for desktop applications because all the aspects where it tries to provide more security than traditional username/password authentication are easily circumvented on the desktop.

As such, OAuth on the desktop IMHO is not much more than snake oil: it does nothing aside from increasing the complexity for the implementer while providing next to zero additional security.

First is client authentication: OAuth tries to authenticate clients as well as users. On a desktop it's not possible to authenticate clients because whatever they do, the information can be extracted and simulated by a malicious client.

The other thing is that by now many desktop OAuth clients embed a webview for the authentication handshake. There is no way for the user to be sure that they are typing their credentials into the site they think they are. There is no browser chrome, there is usually no URL bar and even if there was, there is zero trust that the URL is actually showing the correct URL.

Worse: How many client apps are actually going through the trouble of checking SSL certificates (or even that SSL is on)?

Embedding a webview for an OAuth handshake provides (to the user) no additional security compared to just showing a username/password dialog.

The only way I see this actually working is if the application opens the default browser for the handshake. But of course that will show a big-ass security warning when redirecting back to the local URL protocol.

In consequence this means that the only method by which OAuth on the desktop could provide additional security is the one method that presents a security warning to the user. How ironic.

All of the points you've made about poor SSL security could just as easily be made against software that communicates over SSL using a username and password. At least with OAuth, there's only potential for a username/password leak at sign-in; after that, only the revocable token could be leaked.

> Embedding a webview for an OAuth handshake provides (to the user) no additional security compared to just showing a username/password dialog.

Worse yet, the webview is terrible UX. So not only is there no security advantage, but you also directly harm the user experience.

That makes sense. Is there a better way for users to use a “desktop” (non-web) third-party app without handing out their username and password?

That's why we have Kerberos.

No, Kerberos won't help in this case. Once you have the keytab you can impersonate the client. Kerberos doesn't solve the "trusted client" issue.

It doesn't help in the Twitter client use case, but it will help in the user/password compromise scenario described in the parent comment.

If I compromise the keytab, I can impersonate the domain member server and presumably the active tickets... but the username/password is on the KDC/DC.

OAuth has been called flawed by security experts[1] and by former developers who left the project[2], because it's filled with traps for developers to fall into.

1. http://homakov.blogspot.jp/2013/03/oauth1-oauth2-oauth.html

2. http://en.wikipedia.org/wiki/OAuth#Controversy

Sure, but that's beside the point. OAuth does not attempt to deal with this problem. If you have the secret consumer token, then you are that consumer in the eyes of OAuth.

One technique that can help establish a "trusted client" on an untrusted platform is TRCE (Trusted Remote Code Execution)[1].

The client is required to download a piece of code from the server to compute a result given a challenge from the server. You can embed all sorts of logic to validate the client per connection! Think of this as using a virus for good :)

[1] I'm getting good at inventing 4 letter security acronyms :)
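A toy sketch of what one such round trip could look like (a hypothetical protocol; Python's `exec` stands in for whatever code-delivery mechanism the scheme would really use):

```python
import hashlib

# Hypothetical TRCE round: the server ships a fresh validation snippet on
# every connection, so a fake client can't precompute the right answers.
def run_validation(snippet: str, challenge: bytes) -> str:
    scope = {"challenge": challenge, "hashlib": hashlib}
    exec(snippet, scope)       # execute the server-supplied logic
    return scope["respond"]()  # the snippet is expected to define respond()

# One of unlimited variations the server could send this time:
snippet = "def respond(): return hashlib.sha256(challenge[::-1]).hexdigest()"
answer = run_validation(snippet, b"server-challenge")
```

The obvious counter is exactly the "perfect sandbox" mentioned below: a fake client that faithfully executes whatever the server sends is indistinguishable from a real one.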

This is how Blu-Ray's BD+ works.

And yet, it's still possible to rip Blu-Ray disks. DRM doesn't work.

I guess Twitter could start banning users who are detected running unauthorized clients (like Xbox Live), but that'll get really ugly PR-wise.

TRCE is fundamentally different from BD+ in that the code is served remotely instead of locally. That means that the validation code can be changed per connection with unlimited variations. The only way to deal with TRCE is PSB (Perfect SandBox), which is not trivial to implement.

Twitter doesn't need to ban unauthorized clients. They can use the detected client info for QoS to give preferential treatment to their own client and render the rest in the "best effort" service class, which could make unauthorized clients appear to work but unreliably.

This is a surprise to absolutely no-one. That client keys & secrets were semi-public knowledge was obvious years ago, before I started working on OAuth at a much younger Twitter. The client key and secret is a rough trust metric for clients that are distributed publicly. Twitter can distribute new clients with new more-hidden secrets, and gain a bit more trust, for a while.

The place where the client id and secret actually offer real security is in the hosted scenario, where the secret is never distributed outside a trusted environment. Anyone who tells you different is wrong. The same applies to every single copy protection scheme (SSL/TLS, HDCP, DVD Region Coding, etc, etc, etc) and barring some mathematical breakthrough, this will always be true.

> OAuth just doesn't work with locally installed applications

That's not really true, Twitter just has a broken model where they want to authenticate the application as well as the user...

The rest of your comment is quite spot on, though. This is going to be a cat and mouse game for Twitter and I'm not sure they can win.

> That's not really true, Twitter just has a broken model where they want to authenticate the application as well as the user...

No, it is still true. There's no point, at all, to OAuth on a native client.

Native clients can:

1) Present fake browser chrome

2) Present no browser chrome

3) Inject their own scripts into the webview to acquire the username/password

4) Extract the browser username/password from the keychain

... and so on.

Enforced OAuth is simply a case of web developers foisting complexity onto native developers because they don't appreciate the goals and priorities in the native app problem domain.

The nice thing about OAuth on a native client is that it allows a non-evil client the option of not storing the cleartext password in a recoverable way on the client.

This way if the device on which the native client lives is compromised, the attacker won't be able to obtain the cleartext password, unless it was compromised during the OAuth login process. Sure, the attacker will get the token, but they won't be able to get the cleartext password and log into other unrelated services with it.

> The nice thing about OAuth on a native client is that it allows a non-evil client the option of not storing the cleartext password in a recoverable way on the client.

Why does this matter?

If you compromise somebody's desktop or phone, you're in an extremely privileged position. This means that you can spy on their authentication processes while they are occurring. Including processes that occur on the web.

You can also read the user's keychain, containing the saved passwords for the OAuth service that you're trying to protect.

OK, so what are you proposing then?

  * That all applications that run unattended store the user's service password?
  * That the user change the app password on the 2 desktops and 3 tabs/phones they use because they rotated their service password?
  * That users should NOT be able to revoke or change an application's access rights without changing their password?
  * That applications *shouldn't* request the minimum privileges they need to do their job?
  * That it is NOT a good thing for a user to be able to log into their service provider and check what applications are connected with the service?

You're thinking in an extremely binary and absolutist fashion. Systems have lots of failure modes.

> If you compromise somebody's desktop or phone...

This is my day job. After compromise, the first thing is the "smash and grab" where you loot as many credentials as possible off the machine with a bunch of scripts. If there's stuff in a browser's built-in password manager, win. If your app stores a password, win.

You're assuming that all failure modes are alike.

> You're thinking in an extremely binary and absolutist fashion. Systems have lots of failure modes.

You seem to be thinking of applications as if they were web sites.

> After compromise, the first thing is the "smash and grab" where you loot as many credentials as possible off the machine with a bunch of scripts. If there's stuff in a browser's built-in password manager, win. If your app stores a password, win.

So they smash and grab the keychain, which holds the master passwords to the services granting OAuth tokens.

You might have me vaguely convinced if you were advocating for XAuth+OAuth, in which case it would provide a bare modicum of additional security without negatively impacting user-experience at all.


I hadn't run into xAuth before. It looks like a reasonable way to make the grant process not such a Desktop UI cluster$%$^ - which I totally agree that it is.

Re: the keychain thing, again, it depends on the level of compromise, whether the machine is unlocked, what platform you're on and whether the creds are in there.

And yes, with enough access you can keylog, man-in-the-browser or "desktop phish", but it's higher risk for the attacker and requires maintaining longer-term access.

But if your app stores passwords the user instantly loses.

Failure modes matter. A kind of parody version of how I see your argument is: "Sites shouldn't bother hashing passwords, because an attacker with a shell could just modify the site's login process to capture passwords".

Depends on the level of compromise. If an attacker can interface to the screen and keyboard, or access other process's memory to directly read the key from there, or spoof the SSL library so you get cleartext network transmissions, sure, they can steal a lot of data.

But if the attacker only has access to the filesystem, then it certainly matters whether the filesystem contains enough information to impersonate a user.

And there are things that people do to desktops that allow other people to access their filesystems. Things like cloud backup, sending your computer in for repairs, or selling an old hard drive. Sure, you should remove the hard drive or wipe it before you do any of these things, but a lot of people...don't. There are plenty of stories about that situation floating around the interwebs, but this comment is long enough...

Thanks, this.

Also, because the access token for each app is unique, you can grant different account permissions to your apps so you can in principle limit their access to your account.

Also there is more accountability. If an access token gets leaked or misused you at least have a chance of figuring out where it came from.

OAuth is usually a better choice for API authentication than username/password, even for desktop apps.

If you want to limit their access to your account, uninstall the application. No more access.

If you're worried about genuinely nefarious applications, OAuth ought to be the least of your worries, because it's not going to protect you.

> There's no point, at all, to OAuth on a native client.

As far as I understand it, you're right, except when OAuth is the only authentication mechanism provided and you want to, you know, develop a desktop app.

Of course, providing OAuth as the only authentication mechanism appears then to be the problem. But that is not a problem application writers can solve; it seems to be a problem that OAuth deployers have to deal with.

> Twitter just has a broken model where they want to authenticate the application as well as the user...

Can you elaborate? As far as I understand how it works, this is the case by design in OAuth.

(edit: typo)

I'm new to this problem (so please jump in and correct me if I'm wrong) but it occurred to me that Twitter could require that their most trusted "preferential" clients proxy requests from native apps through their own servers, which do the authentication.

This would be a pain for these clients, but if Twitter really wanted to do this they could probably justify that it's the price you have to pay for preferential access.

Edit: It just occurred to me that this might just be kicking the can down the road: you would end up with same problem authenticating with proxy server. It would have to be something along the lines where the proxy server itself has preferential access, but the clients to the proxy server do not (e.g. they are rate-limited by the proxy or something.)

I don't think it's a flaw of OAuth. It's more a software design issue here. It was wrong for them to use OAuth for this task. It's such a poor design choice (they repeated the error many times across all these applications) that it's hard to believe they didn't want that in the first place.

As to why they want these keys to be basically 'public', I'm not sure yet.

For some reason several of the commenters here are explaining this away as a protocol bug (specifically with OAuth), but the challenge isn't at all protocol specific. Rather, it's a hard problem for all client/server apps, specifically in that trusting any client requires additional support from the platform (self-assertion or possession of a secret by the client alone is insufficient), and even then it's a known hard problem.

This has been true of client/server apps for a very long time, well predating any particular protocol. I'd be sincerely interested in any solutions that people come up with that don't depend on additional extrinsic platform capabilities.

I have a workaround for this. Is it mathematically provable that it is more secure? I don't know, but I believe it to be much better in many respects for native apps. It essentially leverages push notifications to deliver bearer tokens.

Push, in my mind, makes way more sense for mobile and benefits from code signing, known users, known devices, and an essentially private out-of-band network for push messaging; much of this exists because of the app publishing model in play. You can trust the binary because the developer is known via a developer code-signing certificate. The user is known because they had to create an account in the platform's app store. You can trust the device (pretty sure of this) because of the unique device ID. Incidentally, this process could be achieved with any out-of-band communication: a whisper in the ear, a note delivered by carrier pigeon, an email, or whatever. Push just makes it more user friendly because it's directly tied into the app making the request through a service managed by the OS. This system is basically a bearer token with an out-of-band delivery mechanism.

There is a "distributed authentication service" in this flow. This is essentially a central registry of all apps and services that use this auth strategy (I'm thinking something like a certificate signing authority is needed). The apps and services are catalogued, and users can see security warnings from this service before they grant access to an app or service.

Have you ever wondered what happens if a trusted company goes out of business, the app gets bought by the mafia, and an update goes out making it a malicious app? OK, you surely have not, but a company going out of business or a blog post about a security breach are threat indicators and should play into the system. How do you know if it is safe to grant access to an app or service you are new to? The only recourse you have is to revoke access after you find out about the problem. This is not good enough. I think there should be something like community awareness of security issues with apps and services that can alert users of said issues, and that should be integrated into the auth process.

This service provides a management console for users to enable and disable application access to specific user data or other third-party services, and to audit activity by the user, their apps, and third-party services. The management console also has the concept of levels of security: if the code has been audited and is from a highly reputable source it is put in a certain group; if it is a new app from a new company it would not be graded as highly. We have something like this with green-bar SSL certs. Banks and credit card companies do this kind of risk grading all the time. If we want an auth system to be worthy of our efforts then it should be worthy of banking and e-commerce; we should be serious about preempting, identifying, and responding to risk throughout the auth process, not just let people revoke access tokens when and if they find out there was a problem.

This Twitter client situation reminded me of some ancient (1999) history, so in case you're wondering how far companies will go to try to enforce a theoretically-impossible preference for one client of their service...

The MSN Messenger team added America Online chat support to the Messenger client. AOL didn't like that and tried a variety of approaches to reject Messenger. The protocol was undocumented, so there were lots of tricks they could play. At one point they went (IMHO) a bit too far: they deliberately exploited a buffer overflow in their own client!

One person's contemporaneous summary: http://www.geoffchappell.com/notes/security/aim/index.htm

If you ship a binary to a person’s computer and that binary has a secret embedded in it, that secret will eventually be discovered.

This has been discussed here before: http://news.ycombinator.com/item?id=4411696

Something I've been pointing out about OAuth forever is that it's a method for delegating authorization to agents who wish to act on behalf of the user. When it is the actual user him/herself who is acting, there's nothing wrong (and a lot of things right) with username/password authentication.

OAuth prevents apps from nabbing passwords, though.

If the app is displaying its own internal web view for OAuth, it can load any page it wants in there and tell the user it's Twitter. It can even fake an address bar with a twitter.com URL if it wants. Then use the common phishing technique of 'oops, you must have entered your password wrong' (the user didn't, but now the phisher has it), followed by a forward to the real site so the user suspects nothing.

But this is hypothetical. In reality there is little motivation for apps in an App Store-like environment, which survive on customer goodwill, to want to do this.

The user's security is probably not why Twitter chose OAuth.

It doesn't even need to be that complicated. They could literally show the real OAuth flow, and inject a script into the page (since they control the WebView) that harvests the username/password. No "oops you got it wrong", no risk of visual inaccuracies.

> But this is hypothetical. In reality there is little motivation for apps in an App Store-like environment, which survive on customer goodwill, to want to do this.

Would anybody ever find out though?

It's clear that the App Store does no real checking of the apps they accept. Since the GUID use was banned, everyone has just switched to the just-as-unique MAC address to identify devices.

Any token-based system will.

OAuth, having solved this problem, then goes on to create many more.

I use OAuth for an application written in PHP, and as such, there's no possible way to trust the client/secret, given that the source is not obfuscated in any way. This application talks to my own server, and the OAuth flow is basically just a way to avoid storing username/password combinations. The client key/secret have to be treated as permanently compromised, so the only thing I use those for is version usage statistics.

The question is, given that your key/secret will be compromised, is there any point in even having it in the OAuth flow?

Interestingly, the keys were posted 5 months ago.


For people who think this is going to cause drive-by Twitter hijacks, remember that Twitter stores the callback URL on their side for this very reason. Any web app impersonating these apps will fail at the callback stage.

The client can intercept whatever URL the embedded webview is redirected to. The callback URL provides no security against this.

That's not a drive-by hijack, all bets are off when it comes to apps. For all you know the app is presenting a fake login dialog.

It would be a drive-by hijack on the web because there's a good chance you're already authenticated with Twitter and the callback cycle will automatically grant credentials on your behalf to the requester with no prompt.

Right - we were discussing OAuth in the context of client apps.

I'm pretty sure I was setting the context, and that context was the web.

Fair enough. I was referring to the larger discussion.

The Twitter API lets you specify the callback URL as part of the request: https://dev.twitter.com/docs/api/1/post/oauth/request_token

However, Twitter also requires users to authorize the application each time an application requests an OAuth token, so the possibility of using these keys for hijacking is limited (although it might be possible to use them to make a phishing attempt look more authentic).

I just tested out the iPhone key/secret using the script here [1] and it worked perfectly. I'm assuming it'll probably bump my actual iPhone client off, though.

[1] https://gist.github.com/tcr/5108489/download#

It could be made non-dangerous if OAuth 1/2 followed my advice (a static redirect_uri):


Here is an interesting talk on OAuth by its creator: http://2012.realtimeconf.com/video/eran-hammer

what's wrong with making the oauth_callback parameter not override whatever you put in Twitter? wouldn't this fix the problem?

Yes, but it would break a lot of other apps that happen to use different domains/pages for different contexts (mobile, desktop, etc.).

Does Twitter honor the oauth_callback parameter? Otherwise, how can those keys be used by an attacker?

oauth_callback is only interesting for web app authorization. For out-of-band authorization flows, you can't protect it with a callback filter.

Since these keys were lifted from an application that does out-of-band auth flows, any other app could use them similarly at will.

The Twitter apps don't use the OOB flow. They use xAuth password exchange "flow".

Nope, untrue. Read the description for oauth_callback.


if you find this sort of 'research' fun and/or you find this sort of stuff to be the norm rather than the exception ;), you should check out https://www.appthority.com/careers

uh, this is not good. Why would someone post that under their own github account?

It's more or less public knowledge. You can find it yourself by running "strings" on the Twitter app binary. Any attempts on Twitter's part to limit the disclosure of these tokens would almost certainly invoke the Streisand Effect.
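For illustration, the heuristic `strings` applies is just "find long runs of printable ASCII", which takes a few lines to sketch (the blob and embedded 'secret' below are made up, not the real leaked keys):

```python
import re

def ascii_strings(blob: bytes, min_len: int = 12):
    # Same heuristic as the `strings` tool: runs of printable ASCII
    # at least min_len characters long
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode() for m in re.finditer(pattern, blob)]

# A made-up "binary" with a fake embedded consumer secret:
blob = b"\x00\x7f\x02consumer_secret=NOT_A_REAL_KEY_123\x00\x01ab\x00"
```

Any constant that has to survive in the shipped binary as a contiguous string will fall out of a scan like this, which is why the later suggestions in the thread focus on splitting or deriving the secret at runtime.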

Couldn't another app use these tokens and take advantage of lax API limits?

Yes. And that's the point of the disclosure.

Anyone know what are the API limits for these keys? Is Twitter really favoring this key, or is that hypothetical?

Of course, you still have to log in as a user, and Twitter could blacklist accounts that use this key on non-Twitter apps, which are going to have a lot of 'tells' and a specific signature in patterns of how they use the API.

(Twitter could even take advantage of that by hiding a code in a usage pattern, kind of like the POW who blinked in Morse code when he was put on TV)

> Is Twitter really favoring this key, or is that hypothetical?

In at least one way, yes. New third-party Twitter clients are limited to 100k users, but Twitter's official clients are unlimited. If those clients built in a "use your own authentication token" UI, you could put your official client's tokens in and work around that limit.

> Is Twitter really favoring this key, or is that hypothetical?

I don't know about API quotas, but I'm totally sure that they allow more than 100K tokens.

On Android, the FOSS client Twidere lets users change the tokens in the options. https://play.google.com/store/apps/details?id=org.mariotaku....

The Chrome app Hotot too. https://chrome.google.com/webstore/detail/hotot/cnfkkfleeioo...

NekoTsui supports changing the consumer key/secret. https://itunes.apple.com/app/nekotsui/id476924886

I think Apple will simply not permit applications that use these keys and are not official clients in the App Store. Looks like something that is pretty easy to automate.

You presume that one would use the keys on iPhone. No reason you couldn't run them on a Linux box in AWS...

I'm not sure why Apple would play police for Twitter, though.

Isn't Twitter integrated into Apple's mobile operating system? Such a tight partnership is plenty of reason for them to play police for Twitter.

Yes and no. Apple wants to protect their Twitter partnership, but... Apple knows that there aren't any effective police in the park next door. So the question is whether Apple values their Twitter relationship enough that they're willing to cede most of the future energy and enthusiasm around third-party Twitter clients to Android.

It's possible, but I don't think it is at all an easy call.

How would Apple know that the app uses these keys? If they run something similar to strings then all you have to do is store the keys in some kind of obfuscated form.

Right but as soon as the press find out, and they will, that developer account will be banned. Most devs won't see it as worth the risk.

What responsibility does Apple have to Twitter except the notification center widget?

Twitter did this to themselves. Without the limit, this information is worthless. It'll make sense for an app like Tweetro[1] to add custom token as a feature or easter egg.

1: http://www.theverge.com/2012/11/11/3631108/tweetro-user-toke...

> Without the limit, this information is worthless.

Not true. Say you have a malicious Twitter client app that posts "Lose Weight In 30 days! <link>." Normally, Twitter could shut this offending app down by rejecting their client ID/secret; if they're using the official Twitter creds though, doing so would shut down all official Twitter apps in the process.

They already have spam systems in place to catch repetitive spam tweets and block them.

It would be possible to obfuscate a secret by storing it in several parts and combining them at run time. Still very far from secure, but this would require much more effort to extract the secret from the app.

Anyone: what is best practice here (Android and/or iOS)?


Storing application secrets in Android's credential storage [1]. I have no idea how secure this actually is.

Should I obfuscate OAuth consumer secret stored by Android app? [2]

[1] http://nelenkov.blogspot.co.uk/2012/05/storing-application-s...

[2] http://stackoverflow.com/questions/7121966/should-i-obfuscat...

> It would be possible to obfuscate a secret by storing it in several parts and combining them at run time.

Then run `strings' on a virtual memory image of the offending process. Same difference.

Correct me if I'm wrong here but I believe that then all one would need to do is stick an SSL intercepting proxy (such as http://mitmproxy.org/doc/ssl.html) in the middle and get the keys from there.

That depends on how the secret is used by the client to authenticate with the remote service.

If the client just sends the secret as part of an authentication request, then a proxy would reveal it. But if some form of challenge/response [1] process is used, where the value sent is derived from the secret and an unpredictable challenge sent by the remote service, then as far as I know a proxy wouldn't help.

I don't know enough about how the Twitter/Dropbox/etc. APIs work to know if they use challenge-response.

[1] http://en.wikipedia.org/wiki/Challenge%E2%80%93response_auth...
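For OAuth 1.0a, something like that is in fact what happens: the consumer secret never goes over the wire, it only keys an HMAC-SHA1 signature over each request. A minimal sketch of the signing step (simplified; a real implementation also needs the oauth_nonce/oauth_timestamp parameters and the exact normalization rules from the spec):

```python
import base64, hashlib, hmac
from urllib.parse import quote

def pct(s: str) -> str:
    # RFC 3986 percent-encoding, as OAuth 1.0a requires
    return quote(s, safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # Normalize the parameters into the signature base string
    norm = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), pct(url), pct(norm)])
    # The secrets only key the HMAC; they are never transmitted themselves
    key = f"{pct(consumer_secret)}&{pct(token_secret)}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

So a passive proxy sees only per-request signatures, not the secrets; extracting the secrets still requires going after the client binary itself.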

It's at least possible to accept only the certificates/keys from your real servers (i.e. certificate pinning), which would stop this attack.

For what happens in the real world, see Georgiev et al.'s "The most dangerous code in the world" at https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-... (spoiler: I described this paper in our internal knowledgebase as "very readable. Promises lots of facepalming and delivers in spades.")

Incorrect. The consumer-secret and access-token-secret are not transmitted from client to the server during oauth. They are only used to sign requests.

Clarification: For v1. In v2 the secrets just go over SSL.

Only if the app uses the phone's certificate store, as opposed to a hard-coded one.

For Android, I suppose you could just run a Java bytecode obfuscator before converting the bytecode to Dalvik. There doesn't seem to be something comparable for iOS.

One simple solution is to set N-1 arrays to random data (hardcoded or generated at compile time) and set the last array to the real secret XOR random array #1 XOR random array #2 XOR ... XOR random array #N-1; this doesn't exactly stop a determined attacker, but it does stop "strings".
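A sketch of that scheme (the secret here is a placeholder; as noted, this only defeats `strings`, not a debugger):

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int = 4):
    # n-1 random pads, plus a final part chosen so that XORing all n
    # parts reconstructs the secret; no single part looks like text
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    return pads + [reduce(_xor, pads, secret)]

def combine(parts):
    # XOR every part back together to recover the secret at run time
    return reduce(_xor, parts)
```

In a shipped app the pads would be hardcoded (or generated at compile time) rather than created with `secrets` at runtime; the structure is the same either way.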

There's very little point obfuscating strings in iOS - since you can attach a debugger to the binary itself on jailbroken handsets (or using cycript) you can step through to the method(s) that use the secret keys and pull them out from there.

Unless you know the device isn't rooted this doesn't really achieve very much. On a rooted device an "attacker" could have replaced the credential storage with something that will conveniently store the data unprotected.

It is helpful as a way of ensuring random applications don't get hold of the data, but not for keeping the data from a determined user.

Isn't this actually a great thing since it enables developers to develop Twitter clients that aren't dependent on tokens and approval from Twitter?

You could only make an app that would explode if/when they decide to change keys.

Along with any official client that is slightly out of date. Twitter might be hesitant to alienate their users.

The app could also just download the latest extracted keys from your server when it experiences an authentication failure.

I guess Twitter, and any other client, would just say: key revoked, you need to update the app for it to work again. It's an endless game of cat and mouse.

Using the official key in an unofficial client sounds like a problem that will be solved with the legal system, not by increasing the burden on Twitter.

Maybe. Look at AIM and the free/shareware clients for a historical example.

But they can't change those keys without isolating every installation until it's updated, right?

Indeed. Constantly changing keys would cause as many problems for users of the official client as it would for unofficial clients.

Twitter could make some way for the official client to fetch new keys from a server without a binary update, but then they'd have to somehow protect that mechanism from third parties...

I suppose the next logical step would be to procedurally generate keys based on the date, and have only the algorithm (not the keys themselves) known to the official client. Not in any way insurmountable, but a little more difficult to crack.
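A sketch of such a derivation (the seed and scheme here are hypothetical, not anything Twitter does):

```python
import datetime, hashlib, hmac

# Hypothetical shared seed compiled into the official client and known
# to the server; the derived key never appears as a static string
MASTER_SEED = b"not-a-real-seed"

def consumer_key_for(day: datetime.date) -> str:
    # Client and server independently derive the same key for the day
    return hmac.new(MASTER_SEED, day.isoformat().encode(),
                    hashlib.sha256).hexdigest()
```

Of course the seed and the algorithm are just as extractable from the binary as a static key would be, so this only raises the effort bar, exactly as the comment says.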

