Given the simplicity of the exploit, I really doubt that claim. Seems more likely they just don't have a way of detecting whether it happened.
After edit: "you might be all right." was a poor choice of words. If you piss off the wrong people, you won't be all right.
If they have a log of all JWTs issued that records which user made the request and which email ended up in the JWT, then they can retroactively check whether they issued any (user, email) pair they shouldn't have.
Then they can assert that there was no misuse, if they only found this researcher's attempt.
One possibility is that the API (the "2nd step" mentioned in the doc: a POST with a desired email address to get a JWT) is an authenticated API, meaning it requires a valid credential, but Apple's implementation made the mistake of not checking whether the requested email belongs to that user. In this case, the log can give enough information for a forensic analysis to determine misuse. I presumed this was the case.
The other possibility is that they implemented the API as unauthenticated. I presumed this was not the case, as that is a harder mistake to make, and they claimed some knowledge of no misuse, but I have no way to know for sure. The end result would be the same. If the root cause was this case, it would indeed be difficult to know whether any misuse happened.
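A minimal sketch of that retroactive check, assuming such a log exists. The log format and the account mapping here are invented for illustration, not Apple's actual data:

```python
# Hypothetical forensic check over an issuance log: flag any JWT whose
# email does not belong to the user who requested it.
def find_suspect_issuances(issuance_log, emails_by_user):
    """issuance_log: iterable of (user_id, email_in_jwt) pairs.
    emails_by_user: dict mapping user_id -> set of emails they own."""
    suspects = []
    for user_id, email in issuance_log:
        if email not in emails_by_user.get(user_id, set()):
            suspects.append((user_id, email))
    return suspects
```

If the log records both sides of each issuance, a sweep like this is enough to assert "no misuse beyond the researcher's attempt"; if it records only one side, it isn't.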
request 678: request from user bananas
request 678: issued token for bananas
request 987: request from user <blank>
request 987: issued token for carrots
Without really smart and well-considered limitations and logging, it's impossible to tell the User from the User* without digging through audit trails, etc. And if the developers/architects involved didn't consider the limitations and logging in the first place, odds are they didn't consider the audit trails either.
And yes, I do this for a living, and have seen bad things from major organizations. :(
There are obviously lots of hypotheticals for which this might not be verifiable.
What makes you say that? Lots of hacks and leaks show that Apple only sees Privacy as a word to sell stuff. It isn't something they code for unless forced by leaks and hacks (laws in the US also work against privacy by design).
While I do think that Apple is using privacy mostly for PR, I wouldn’t be too cynical. It’s likely that new projects are built with higher privacy standards. But also, I wouldn’t be too surprised if their PR department is writing checks that their engineering teams cannot fully cash.
Just as a first-hand anecdote to back this up: a dev at my former company, which did a mix of software dev and security consulting, found a much more complex security issue with Apple Pay within the first hour of starting to implement the feature for a client and engaging with the relevant docs.
How did no one else notice this? The only thing I can think of is the “hidden in plain sight” thing? Or maybe the redacted URL endpoint here was not obvious?
You know what I love about the internet? You think something like this, and you just know somebody's looked into it in some details :D - https://cocosci.princeton.edu/papers/absentData.pdf
You can read about the cultural and traditional idiom I wrote at the wikipedia page for "Evidence of Absence" where the first paragraph mentions, "Per the traditional aphorism, 'Absence of evidence is not evidence of absence,' positive evidence of this kind is distinct from a lack of evidence or ignorance of that which should have been found already, had it existed."
There is further information in the wikipedia page for "Argument from ignorance" that shows why your use of evidence is also a logical fallacy. You can infer from indirect evidence, but that doesn't prove a fact.
While indirect evidence may lead one to believe a fact has been proven, that is not what happens. You can read some of the legal ramifications of using indirect, inferential, or circumstantial evidence to convict beyond a reasonable doubt at https://www.legalzoom.com/articles/why-cant-some-juries-conv....
Thank you to everyone who educated me.
I can’t provide an explanation of the behavior you observed without more information, but I can reasonably conclude that the vulnerability here wasn’t the cause.
I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.
That was a good bounty. Appropriate given scope and impact. But it would have been a lot cheaper to offer a pre-release bounty program. We (Remind) occasionally add unreleased features to our bounty program with some extra incentive to explore (e.g. "Any submissions related to new feature X will automatically be considered High severity for the next two weeks"). Getting some eyeballs on it while we're wrapping up QA means we're better prepared for public launch.
This particular bug is fairly run-of-the-mill for an experienced researcher to find. The vast majority of bug bounty submissions I see are simple "replay requests but change IDs/emails/etc". This absolutely would have been caught in a pre-release bounty program.
The token described in this disclosure is an OpenID Connect 1.0 ID Token. OIDC is a state-of-the-art authentication (AuthN) protocol that builds on OAuth 2.0 with additional security controls. It's used by Google, Facebook, and Twitch, among others.
I'd do more analysis, but the author leaves off the most important part here (not sure why)
It's often so easy to reach for the values in the params/payload first because you're already working with them, instead of remembering to use the session values.
This would be a great audit to do across entire codebases: check all places that use params/payload values and see whether there's actually a session value that should be used instead.
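A sketch of the anti-pattern that audit would hunt for. `session` and `params` here are stand-ins for whatever the framework provides, not any real codebase:

```python
# BUG pattern: identity claims built from client-supplied request data.
def issue_token_vulnerable(session, params):
    # Trusts the email the client POSTed in the request body.
    return {"email": params["email"]}

# Fixed pattern: identity claims built from the server-side session.
def issue_token_fixed(session, params):
    # The email was established at login; client input is ignored
    # for anything identity-related.
    return {"email": session["email"]}
```

The two functions have identical signatures and nearly identical bodies, which is exactly why this slip survives review so often.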
If they have all the JWTs, seeing if one had a different e-mail than the logged-in user should be fairly doable.
On the backend, maybe they can look up who requested a token at a time.
But otherwise, no, it doesn’t seem to be.
Why was Apple signing a response JWT when the user only supplied an email?
I’m not a web guy so I just don’t see what they were going for here.
Your Apple account is safe, but if a third party trusts the signed Apple payload without further verification of the email, an attacker could sign in as you on the third-party app.
The third party is not supposed to link account information from an OpenID Connect system by email address, which could change or go away at any time and is not guaranteed to be unique.
Rather, they should use the 'sub' claim which is meant to be the same over the lifetime of the user account with the issuer.
At best it's a work stoppage when someone changes their email. At worst you make assumptions about reusability of emails and give data access to the wrong account.
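A minimal sketch of linking on `sub` instead of email. The user store and claim values here are invented for illustration:

```python
# Account linking keyed on the stable "sub" claim rather than the
# mutable email. `accounts` stands in for your user store.
def link_or_fetch_account(accounts, id_token_claims):
    sub = id_token_claims["sub"]          # stable per (user, issuer)
    email = id_token_claims.get("email")  # may change or be absent
    account = accounts.get(sub)
    if account is None:
        account = {"sub": sub, "email": email}
        accounts[sub] = account
    else:
        # Refresh the display email, but never use it as the key.
        account["email"] = email or account["email"]
    return account
```

With this structure, a forged or changed email claim can at worst corrupt a display field, not grant access to someone else's account.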
Seems a little too obvious to exist, but people make mistakes.
It’s just that JWT/JWS/JOSE are already questionable security options, so you’d want to be extra super careful if you’re using them. (The biggest flaw off the top of my head: the token literally declares which algorithm is used, with a NONE option, meaning you can just switch the header to “none” and forge JWTs against any verifier that didn’t know to reject those. Second: they mix symmetric and asymmetric signing options.)
HS256 relies on shared secrets, so anyone who can verify a token can also forge one. RS256 lets you download the IdP's public keys every once in a while and verify tokens offline.
OAuth access tokens also are not meant to be used for authentication; you need either a separate token with appropriate security (as OpenID Connect introduced) or to wedge additional security on top of access tokens (as Facebook did with Connect).
This is basically because access tokens are meant to be messages about allowed access to the API resources, not messages to the client software about the user.
"Issues that are unique to designated developer or public betas, including regressions, can result in a 50% additional bonus if the issues were previously unknown to Apple."
At least from the writeup, the bug seems so simple that it's unbelievable it could have passed code review and testing.
I suspect things were maybe not as simple as explained here, otherwise this is at the same incompetence level as storing passwords in plaintext :O.
https://news.ycombinator.com/item?id=15800676 (Anyone can login as root without any technical effort required)
And to top it off (https://news.ycombinator.com/item?id=15828767)
Apple keeps having all sorts of very simple "unbelievable" bugs.
There seems to be kind of a common theme to these:
- SSL certificates not validated at all
- root authentication not validated at all
- JWT token creation for arbitrary Apple ID users not validated at all
I think these are all very likely due to error and not malice, but it's pretty crazy how these gaping holes keep being found.
(The session itself was ok-ish. It was some trainings about xsrf, nothing special either)
(That incident also prompted me to purchase a sheet of stickers from xkcd to put on my laptop, so the next time this kind of thing happens I can just point to the sticker. But I haven't had a chance to do that yet since receiving the stickers.)
Apple pioneered usable security with TouchID and the secure enclave; a lot of Android fingerprint readers were gimmicks for years, same with the face unlocks. https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...
They also invest piles of money into privacy https://apple.com/privacy (1 minute overview), https://apple.com/privacy/features (in-depth with links to whitepapers).
I imagine that's where your teacher was coming from.
Regarding security, see examples like https://qz.com/1844937/hong-kongs-mass-arrests-give-police-a...
Google doesn't have the intention to keep all your data private, sure. They are an ads company after all. But for things they want to protect, in most cases they are competent enough to protect them.
(Disclaimer: I also worked for Google, but the "employer" I mentioned in my original comment was not Google)
I think it’s too early to claim anything for that one.
I hired a very high level pen test company, they mandated iPhones for their company work. They were the best infosec company we’ve ever hired. Sample of one.
I wouldn’t suggest iPhones are more safe than Android, but i also wouldn’t suggest in any way they are less safe overall.
It is extremely frustrating. Especially when Catalina removes features that were working perfectly.
I’ll upgrade when some piece of software I need to use requires it.
I have an iMac that uses it and a Mac Mini that is on Mojave and for some reason, High Sierra just feels more stable with some software.
Firefox runs fine on High Sierra and has crashed multiple times in the past few weeks after using it on Mojave.
Maybe I'm just biased having used High Sierra for so long and dreading Catalina lol.
But I’m not aware of any new feature in Mojave I want or need, so the 2013 MacBook Pro Retina I’m using will stay on High Sierra for today :)
It's just that so far, only Twitter has bothered to do so on Mac. Even software like Slack which does so for iOS just hasn't bothered to port that code to their Mac app - most likely because of the Mac app using a different Electron-based codebase.
The whole pro-multimedia production crowd probably cares...
(vs the current Apple paramour: the multimedia consumer who wants to order pizza and get back to netflix on their phablet or whatever..)
They are the Nintendo of Computing. They have some novelties, but in general they are average at their best. Notice that both Nintendo and Apple are big advertisers.
Edit: Looking at the OAuth picture in the article, my guess would be something like adding a step between 1 and 2 where the server says "what email address do you want here" and the (client on the) user side is responsible for interacting with the email relay service and posting back a preferred email address. Or the server does it but POSTs back to the same endpoint, which means the user could just include whatever they want right from the start.
The only thing that makes me think I might not be right is that doing it like that is just way too dumb.
AND I'm guessing a bunch of Apple services probably use OAuth amongst themselves, so this might be the worst authentication bug of the decade. The $100k is a nice payday for the researcher, but I bet the scope of the damage that could have been done was MASSIVE.
Edit 2: I still don't understand why the token wouldn't mainly be linked to a subject that's a user id. Isn't 'sub' the main identifier in a JWT? Maybe it's just been too long and I don't remember right.
The details are very sparse in the post, but I believe the "sub" claim is a unique and stable value for the user against a particular relying party (based on that being a requirement in OpenID Connect.)
You _should_ be relying on sub rather than email address, which is not guaranteed to be sent every time, to stay stable, or be unique across accounts.
So while this was a zero day in terms of providing arbitrary email addresses as verified addresses, it may have not led to any account compromises.
1. Don't add these.
2. If you must add something, structure it so it can only exist in test-only binaries.
3. If you really really need to add a 'must not enable in prod' flag then you must also continuously monitor prod to ensure that it is not enabled.
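Points 2 and 3 can be sketched as a flag that refuses to turn on unless the process is explicitly marked as a test environment. The environment-variable name here is an assumption for illustration:

```python
import os

def enable_insecure_test_mode(env=None):
    """A 'must not enable in prod' flag that fails closed: it raises
    unless the environment is explicitly declared to be a test one."""
    if env is None:
        env = os.environ.get("APP_ENV", "prod")  # default to prod: fail closed
    if env != "test":
        raise RuntimeError("insecure test mode requested outside tests")
    return True
```

The continuous-monitoring half of point 3 is then a prod alert that fires if this code path is ever reached, rather than trusting that it never will be.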
Really hoping they follow up with a root-cause explanation.
Took about two years to fix. Gave me credit. No money.
I'm not surprised here.
They basically made a huge fundamental design mistake.
Sometimes code review is just "Please change the name of this function" and testing covers only the positive cases, not the negative ones. Yes, even at companies like Apple and Google.
After this, they should remove the requirement to support Sign in with Apple. How can you require apps to implement it when it shipped with such a ridiculous zero day?
The problem I have is that I can’t tell what their processes are beyond the generic wording on this page
Also, it's not sufficient to "have a test case". The intent and the implementation must be coherent.
No, it's completely inexcusable. There should never be such a simple, major security vulnerability like this. Overlooking something this basic is incompetence.
 - https://developer.apple.com/news/?id=03262020b
If this is not the issue, then the implementation might be too complex for people to compare it with the spec (gap between the theory and the practice). I would be extremely interested in a post mortem from Apple.
I have a few follow up questions.
1. Seeing how simple the first JWT request is, how does Apple actually authenticate the user at this point?
2. If Apple does not authenticate the user for the first request, how can they check that this bug wasn’t exploited?
3. Can anybody explain what this payload is?
"email": "firstname.lastname@example.org", // or "XXXXX@privaterelay.appleid.com"
My guess was that c_hash is a hash of the whole payload kept server side, but per the spec excerpt below it's actually a hash of the authorization code, which binds the ID token to the code.
It's not a bug in the protocol or the security algorithm. A lock by itself does not provide any security if it's not put in the right place.
1. User clicks or touches the “Sign in with Apple” button
2. App or website redirects the user to Apple’s authentication service with some information in the URL including the application ID (aka. OAuth Client ID), Redirect URL, scopes (aka. permissions) and an optional state parameter
3. User types their username and password and if correct Apple redirects them back to the “Redirect URL” with an identity token, authorization code, and user identifier to your app
4. The identity token is a JSON Web Token (JWT) and contains the following claims:
• iss: The issuer-registered claim key, which has the value https://appleid.apple.com.
• sub: The unique identifier for the user.
• aud: Your client_id in your Apple Developer account.
• exp: The expiry time for the token. This value is typically set to five minutes.
• iat: The time the token was issued.
• nonce: A String value used to associate a client session and an ID token. This value is used to mitigate replay attacks and is present only if passed during the authorization request.
• nonce_supported: A Boolean value that indicates whether the transaction is on a nonce-supported platform. If you sent a nonce in the authorization request but do not see the nonce claim in the ID token, check this claim to determine how to proceed. If this claim returns true you should treat nonce as mandatory and fail the transaction; otherwise, you can proceed treating the nonce as optional.
• email: The user's email address.
• email_verified: A Boolean value that indicates whether the service has verified the email. The value of this claim is always true because the servers only return verified email addresses.
• c_hash: Required when using the Hybrid Flow. Code hash value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the code value, where the hash algorithm used is the hash algorithm used in the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is HS512, hash the code value with SHA-512, then take the left-most 256 bits and base64url encode them. The c_hash value is a case sensitive string
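The relying-party checks on the claims listed above can be sketched roughly like this. Signature verification against Apple's published keys is assumed to happen first and is omitted; the claim values in the example are invented:

```python
import time

APPLE_ISSUER = "https://appleid.apple.com"

def validate_id_token_claims(claims, client_id, expected_nonce=None, now=None):
    """Validate the standard claims of an already-signature-verified
    ID token. Returns the stable user identifier ('sub')."""
    if now is None:
        now = time.time()
    if claims.get("iss") != APPLE_ISSUER:
        raise ValueError("unexpected issuer")
    if claims.get("aud") != client_id:
        raise ValueError("token was minted for a different client_id")
    if claims.get("exp", 0) <= now:
        raise ValueError("token expired")
    if expected_nonce is not None and claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch (possible replay)")
    return claims["sub"]  # key accounts on this, not on 'email'
```

Note that nothing here validates `email`; that claim is informational, which is why the relying-party side of this bug hinges on whether apps keyed accounts on it anyway.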
How many members of the public think that they have to use their E-mail account password as their password for Apple ID and every other amateur-hour site that enforces this dumb rule?
MILLIONS. I would bet a decent amount of money on it. So if any one of these sites is hacked and the user database is compromised, all of the user's Web log-ins that have this policy are wide open.
Then there's the simple fact that everyone's E-mail address is on thousands of spammers' lists. A simple brute-force attack using the top 100 passwords is also going to yield quite a trove, I'd imagine.
Apple IDs didn't originally have to be E-mail addresses. They're going backward.
If anything, the issue is that third parties treat the email address as a unique, unchangeable identity, and then agree to rely on Apple's assertion of what your email address is. But given how hard identity is - and the challenges in dealing with passwords, account recovery, and name changes at scale - it's a pretty reasonable tradeoff to make.
1. what sign in with apple is
2. sign in with apple is like oauth2
3. there's some bug (not explained) that allows JWTs to be generated for arbitrary emails
4. this bug is bad because you can impersonate anyone with it
5. I got paid $100k for it
Note that when they give the POST request, they say "Sample Request (2nd step)".
But what is step 2? The diagram above shows step 2 as a response, not a request. At least that's how I interpret an arrow pointing back toward the user. So the write-up conflicts with the diagram.
How do you resolve that conflict? One guess is that "Sample Request (2nd step)" should say "1st step" instead.
Another guess is that the arrow directions don't necessarily always indicate whether a step is a request or a response, so that step 1 could be a request and response, and step 2 could be another request and response that POSTs to a secret URL that was learned about in step 1. (This guess could make sense because the request is a JSON message with just the email field. There must be credentials somewhere, so either it's redacted or some kind of credentials were given another way, like in step 1.)
If this second guess is right, then a follow-on guess is that the crux of the bug is that in step 1, you sign in with a particular email, then Apple says "OK, now here's a secret URL to call to get a JWT token", and then in step 2, you change email, and it doesn't notice/care that you changed emails between step 1 and 2.
> Here on passing any email, Apple generated a valid JWT (id_token) for that particular Email ID.
You need an additional bug on the relying party for this to allow someone to gain access - that they associate the apple account based on the unstable email address claim rather than the stable "sub" claim.
For those not aware, some time ago apple decided it would be a good idea to develop their own sign-in system, and then force all apps on their store (that already support e.g. Google Account login) to implement it.
So they brought a huge amount of additional complexity into a large number of apps, and then they fucked up security. Thank you apple!
A big problem of many apps is that they only had a "log in with google"/"log in with facebook" button, which is very problematic for people who have neither.
On Android this is more acceptable since you need a Google account for the OS itself anyway.
I don't think you do, I'm pretty sure I've skipped that step during device setup on occasion.
I'm also very lazy when it comes to payment methods. Trying to order food and the app doesn't support Apple Pay? Delete it and do something else.
Clearly there are issues with the entrenchment of Apple at the center of all this, and these problems would be better solved with open standards, but the consistency and convenience makes an actual measurable benefit in the end user's daily life.
Microsoft gave Internet Explorer away for free when Netscape was selling their browser to businesses, an obvious attempt to undermine Netscape.
They also threatened to cancel the Windows 95 licenses for companies like HP that shipped Netscape with their computers instead of Internet Explorer. That would have essentially put them out of business.
Because Microsoft had 95% of the operating system market share, it had signed a consent decree with the federal government agreeing not to use its monopoly in operating systems to its advantage in web browsers, which were a new software category then.
So of course, they bundled Internet Explorer with Windows 95 and claimed the two couldn't be separated, an obvious lie, insisting Internet Explorer was a critical part of the operating system.
All of this orchestrated by future humanitarian Bill Gates, who was quoted as saying then Microsoft needed to “cut off Netscape’s air supply.”
Even in the United States, Apple isn’t a monopoly with about 40% market share. Everything Apple mandates is with companies who've contractually agreed to be part of Apple's developer program and abide by its rules.
Nobody agreed to not ship a competing web browser back in the day.
This is a non-argument. A duopoly is no barrier to behaving like a monopoly. If you don't play by Apple's or Google's rules, you essentially lose 50% of the market.
I suspect if you don’t want to deal with Apple or Google directly, you can create web apps.
> Monopolies aren’t illegal; it’s the use of monopoly power in corrupt ways that’s illegal.
That is exactly my point. You don't technically need a monopoly to abuse your power in corrupt ways. A duopoly is good enough to abuse your dominant position to the detriment of users and the market.
It's likely (although like others have noted, this is scant on details), that this value was correct and represented the authenticated user.
A relying party should not use the email value to authenticate the user.
Not contesting that this is a bug that should be fixed and a potential security issue, but perhaps not as bad.
Anyone else? Am I reading this right?
e.g https://news.ycombinator.com/item?id=15800676 and
So I don't get shocked anymore seeing Apple security issues.
Even then, the only "security" that developers had was that the attacker wouldn't know the victim's Apple userId easily. With this zero-day attack, it would have been trivial for many apps to get taken over.
Looks like Federighi agrees with this diagnosis and is trying to improve the overall development process, but I'm not sure it can really be improved without changing the famously secretive corporate culture. At Apple's level of software complexity, you cannot design and write quality software without involving many experts' eyes. And my friends at Apple have complained to me about how hard it is to get high-level context on their work and to do cross-team collaboration.
And IMO, this systematic degradation of software quality coincides with Bertrand's departure; he had allowed a relatively open culture, at least within Apple's software division. I'm not an insider, so this is pure guesswork though.
This definitely wasn't complex in any shape or form. This was very basic.
That's not how zero-day works
Regardless, this bug is definitely not 0 day given that it was disclosed to the vendor last month.
That's not what that word means. Zero-day refers to actively exploited bugs. Stop hijacking words just to overhype your research.
(sign in) with (apple zero day)
which is kind of appealing
I actually think they have a good approach. Rewarding major finds with good payouts and avoiding the flood of info and low level web app ‘bugs’.
Sign in with Apple: zero day flaw
And if I understand this correctly, the issue is in the first API call, where the server does not validate whether the requester owns the Email address in the request.
What confuses me is where the "decoded JWT’s payload" comes from. Is it coming from a different API call, or is it somewhere in the response?
If so, what extra validation did Apple add to patch the bug?
Props to Apple for raising the bar on bounties!
I've had multiple occasions of "Seriously, Apple hired person X? lol" over the past five years or so.
I am not sure if I am understanding the blog post correctly, because its simplicity is beyond ridiculous.
The author even says that Apple found no evidence of it being exploited.
By definition when this blog post was published it was not the 0th day.
In the initial authorization request, rather than passing a string with an email address, the caller could pass a boolean `usePrivateRelay`. If true, generate a custom relay address for the third party; if false, use the email address on file.
With that one change the implementer no longer has the opportunity to forget to validate the provided email address, and the vuln is impossible.
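The proposal above can be sketched in a few lines. The function names and relay generator are illustrative assumptions, not Apple's API:

```python
# The client supplies only a boolean; the email is derived entirely
# from the authenticated session, so there is no attacker-controlled
# email field to forget to validate.
def choose_token_email(session_email, use_private_relay, make_relay_address):
    if use_private_relay:
        return make_relay_address(session_email)
    return session_email
```

This is the general "parse, don't validate" shape: shrink the client's input to the smallest type that expresses the choice, and the dangerous state becomes unrepresentable rather than merely checked.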
He could literally send a POST request to that endpoint with arbitrary email addresses and get a valid JWT.
This is clearly explained under the "BUG" section.
But no. Instead they make more proprietary shit without having the basic skills to do so. Then they force that shit on their users.
Is there any bug bounty program for small businesses/apps? I only found hackerone but it seems to be only for enterprise. Is there any recommended platform for small businesses to create their own public bounty program?
Who is implementing that stuff?
Fucking hell. Even after tax, that's a substantial pay-out.
What are they teaching them in computer school these days? How can you write a security function and not test it for these kinds of bugs? Unless all their accidental backdoors have a more nefarious purpose <shoosh>
Apple has been spending a lot of money on a security-focused marketing campaign these past few years, and encouraging a high-price payout of $100k is sage marketing.
In that case, you'd catch it way before even implementing the fuzzer.
So in this case, I don't think a fuzzer would have helped. Some E2E tests written by humans should have caught this though.
Perhaps you do not understand the staggering complexity that lies behind watching a cat video on your iPhone. There are almost uncountable ways to break into the system. This is why there is a bug bounty program from every company - from Stripe (you could argue as a major software company in payment systems) to Microsoft, from Apple to Gitlab, every company has a bug bounty program. Why do you think they give out $1m for a serious bug? If they were not serious about it, that sounds like a big waste of time.
This kind of entitlement attitude is usually from people who've never developed a complex piece of software such as an operating system.
The headline makes me think the entire problem lies with Apple, when that’s not the case.
> ...affected third-party applications which were using it and didn’t implement their own additional security measures.
Third-party applications really have no recourse but to trust the signed JWT. That is just how OAuth2/OIDC works.
User impersonation against an IdP is a serious security issue. 100k is cheap.
The bug was basically on the IdP's "consent screen". Instead of using the email from the active logged-in account, it allowed the attacker to POST in any email they wanted.
Obviously not having the bug would be best. Apple could do "more" and layer additional protections on top of OAuth, like the proof-of-possession extension (DPoP) on the flow:
But if you have a bug like this, where you can edit your claims arbitrarily inside the IdP, extra security layers kinda don't matter.
The relying party would have to incorrectly rely on the 'email' claim, which is not guaranteed to be sent, stable, or unique, rather than the "sub" claim as documented.
This shouldn't be an account takeover problem unless your site did that or one of a few other things wrong. However, I'd be reluctant to blame the site in those cases - a lot of security issues (including this one on Apple's side) come from two components not fully understanding the contract between them.
> I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid. This means an attacker could forge a JWT by linking any Email ID to it and gaining access to the victim’s account.
While an application could potentially further verify the received token (not that I know exactly how in this case), that verification is exactly what an authentication service is supposed to provide; hence the responsibility absolutely rests on Apple, which provides the service.