Zero-day in Sign in with Apple (bhavukjain.com)
1076 points by masnick on May 30, 2020 | 267 comments



"Apple also did an investigation of their logs and determined there was no misuse or account compromise due to this vulnerability."

Given the simplicity of the exploit, I really doubt that claim. Seems more likely they just don't have a way of detecting whether it happened.


Seems the only way to trust the companies in such situations is to exploit the vulnerabilities from multiple, unconnectable devices and locations, over as long a period as possible. If the company cannot list all of the attacks, you know they're bullshitting.


I wonder if compromising your own account can be seen as unlawful. It's much like teenagers being found to break the law by taking nude photos of themselves: they are found in possession of prohibited materials, even though they obtained them in a lawful way. Cracking your own account in a lawful way could possibly be done under a court order, but otherwise your actions are prohibited by law, even though there cannot be a malicious intention.


Totally not a lawyer here, but my impression of laws like CFAA is that it revolves around unauthorized access of resources. If the only resources you access are those for which you have authorization, you might be all right.

After edit: "you might be all right." was a poor choice of words. If you piss off the wrong people, you won't be all right.


Aaron Swartz would likely beg to differ.


Added an after edit. I don't mean to say "you'll be fine, go ahead." But Swartz's case was a bit different from the hypothetical described above.


I think the flaw in this plan is "your own account". AFAIK, like credit cards, the company maintains that they own the account and can do whatever they want with it, just deigning to give you the right to access it as they see fit.


Luckily we don't all live under US law.


The one case (and about the only case) I can think of where they could make that claim is:

If they have a log of every JWT issued that records which user requested it and which email went into the JWT, then they can retroactively check whether they issued any (user, email) pair that they shouldn't have. If the only such pair they find is this researcher's attempt, they can assert that there was no misuse.


How could you prove the user was the correct user in any given case?


I can think of two possible "root causes" with this vulnerability.

One is where the API ("2nd step" mentioned in the doc, a POST with a desired email address to get a JWT) is an authenticated API, meaning it requires a valid credential, but Apple's implementation made the mistake of not checking whether the user-requested email belongs to that user. In this case, the log can give enough information for a forensic analysis to determine misuse. I presumed this was the case.

The other possibility is if they implemented that API as unauthenticated. I presumed this was not the case - as this is a more difficult mistake to make, and given that they claimed some knowledge of no misuse - but I have no way to know for sure this isn't the case here. The end result would be the same. If the root cause was this case, indeed it's difficult to know if no misuse has happened.


Assume they have two log entries.

  request 678: request from user bananas
  request 678: issued token for bananas
That looks good.

  request 987: request from user <blank>
  request 987: issued token for carrots
That doesn't look good.


They very likely have a complete log of the actions performed; I'd guess they'd perform some kind of replay/playback after the bug was fixed and see what failed to pass. Assuming their changes immediately flag things like the researcher's initial attempts and discovery, it'd probably be pretty safe to say that no one was affected if no other instances are flagged.


How does that answer the question? So what if you can replay logs of all attempts? How can you prove for any specific log that it was the “real” user making the request, and not someone using their email maliciously to make an identical request?


It doesn't. That's also the downside of most login/identity providers that implement some form of "Impersonation."

Without really smart and well-considered limitations and logging, it's impossible to tell the User from the User* without digging through audit trails, etc., and if the developers/architects involved didn't consider the limitations and logging in the first place, odds are they didn't consider the audit trails either.

And yes, I do this for a living, and have seen bad things from major organizations. :(


They are the issuer; it's trivial.


It depends what the fix was. If the fix was just to add a validation check to the POST endpoint to validate that the logged in user session matched the payload (and session data was comprehensively logged/stored), this may be verifiable.

There are obviously lots of hypotheticals under which this might not be verifiable.


It’s no secret that Apple isn’t great at web services, and they have a strong incentive not to keep user data. I could imagine a world where they just didn’t have enough logs to properly investigate and validate it.


> they have a strong incentive not to keep user data

What makes you say that? Lots of hacks and leaks show that Apple only sees privacy as a word to sell stuff. It isn't something they code for unless forced by leaks and hacks (laws in the US also work against privacy by design).


Having the incentive doesn’t mean they follow it, especially for legacy projects.

While I do think that Apple is using privacy mostly for PR, I wouldn’t be too cynical. It’s likely that new projects are built with higher privacy standards. But also, I wouldn’t be too surprised if their PR department is writing checks that their engineering teams cannot fully cash.


I agree, especially given how many developer “eyes” were on this from having to integrate the log in with Apple flow into their apps.

Just as a first-hand anecdote to back this up: a dev at my former company (which did a mix of software dev and security consulting) found a much more complex security issue with Apple Pay within the first hour of implementing the feature for a client and engaging with the relevant docs.

How did no one else notice this? The only thing I can think of is the “hidden in plain sight” thing? Or maybe the redacted URL endpoint here was not obvious?


Yeah, doesn't this just mean they didn't detect misuse?


It's not clear because it's not a direct quote and Apple probably wasn't explicit about the difference. I wouldn't infer one way or the other from this sentence.


I'd also like the exact wording of their claim. "There is no evidence of misuse or account compromise" is what I would expect them to say, as "There was no misuse or account compromise" likely opens them up to legal repercussions if that isn't 100% accurate.


Exactly. Lack of evidence is not evidence of lack.


Well, depending on how hard you look and what the false negative rate is like, Bayesian reasoning would beg to differ.

You know what I love about the internet? You think something like this, and you just know somebody's looked into it in some detail :D - https://cocosci.princeton.edu/papers/absentData.pdf


Inference isn't evidence. Inference is drawn from evidence and is an educated guess. A guess is not evidence. You fell for the clickbait headline of a pdf.


Evidence isn't absolute proof and there are degrees of evidence. See: Weak evidence, strong evidence.


The idiom that I mentioned is still true, whether you discuss circumstantial or direct evidence. Lack of Evidence isn't Evidence of Lack.


Lack of evidence is always weak evidence of lack, and pretty strong evidence if you've looked hard enough. If you've trawled through Loch Ness with a fine-toothed comb for decades and haven't seen Nessie, well, that's pretty good evidence for lack of monster.


You're confusing correlation with causation. Regarding your point about searching - Have you ever lost your keys, searched for a long time without finding them, then found them by happenstance at a later point when not looking? Lack of evidence is not evidence of lack.


I am saying your definition of evidence is overly strict and at odds with both how we use the term in common speech and with what is useful. My definition of evidence for A is an observation B such that P(A|B)>P(A), and with this definition, lack of evidence most certainly is evidence of lack.


Yes, I understand what you mean. You are confusing your particular experiences with the common speech of everyone, which is a logical fallacy.

You can read about the cultural and traditional idiom I wrote at the wikipedia page for "Evidence of Absence" where the first paragraph mentions, "Per the traditional aphorism, 'Absence of evidence is not evidence of absence,' positive evidence of this kind is distinct from a lack of evidence or ignorance[1] of that which should have been found already, had it existed."

There is further information in the wikipedia page for "Argument from ignorance" that shows why your use of evidence is also a logical fallacy. You can infer from indirect evidence, but that doesn't prove a fact.

While indirect evidence may lead one to believe a fact has been proven, that is not what happens. You can read some of the legal ramifications of using indirect, inferential, or circumstantial evidence to convict beyond a reasonable doubt at https://www.legalzoom.com/articles/why-cant-some-juries-conv....


I stand corrected and am removing my message now, since my scenario wasn't related to this zero-day bug.

Thank you to everyone who educated me.


This isn’t an information disclosure vulnerability that would allow someone to gain knowledge of new Apple IDs. It also doesn’t affect first-party applications.

I can’t provide an explanation of the behavior you observed without more information, but I can reasonably conclude that the vulnerability here wasn’t the cause.


I find it hard to believe that signing up for an Apple ID caused the start of the phishing emails unless the email account or computer has been compromised. This is not normal when signing up for an Apple ID.


My guess would be that it was just a lucky guess/bot sending to a lot of addresses. I’ve had email addresses get spam before without using them anywhere.


they used this tool grep. look it up. /s


> The Sign in with Apple works similarly to OAuth 2.0.

> similarly

I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.

> $100,000

That was a good bounty. Appropriate given scope and impact. But it would have been a lot cheaper to offer a pre-release bounty program. We (Remind) occasionally add unreleased features to our bounty program with some extra incentive to explore (e.g. "Any submissions related to new feature X will automatically be considered High severity for the next two weeks"). Getting some eyeballs on it while we're wrapping up QA means we're better prepared for public launch.

This particular bug is fairly run-of-the-mill for an experienced researcher to find. The vast majority of bug bounty submissions I see are simple "replay requests but change IDs/emails/etc". This absolutely would have been caught in a pre-release bounty program.


> I understand why they wanted to modify OAuth 2.0, but departing from a spec is a very risky move.

The token described in this disclosure is an OpenID Connect 1.0 ID Token. OIDC is a state-of-the-art AuthN protocol that builds on OAuth with additional security controls. It's used by Google, Facebook and Twitch amongst others.

I'd do more analysis, but the author leaves off the most important part here (not sure why)

https://openid.net/specs/openid-connect-core-1_0.html#IDToke...


The important part is in the author’s article: the POST to the endpoint generates a valid JWT for the email address in the payload, not for the one in the logged-in session. Everything else is extraneous.
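In other words, the attack was roughly the following. This is a hedged sketch: the real endpoint is redacted in the write-up, so the URL and exact payload shape here are placeholders inferred from the article.

  import requests

  # Placeholder for the redacted Sign in with Apple endpoint from the write-up.
  APPLE_AUTH_ENDPOINT = "https://appleid.apple.com/XXXX/XXXX"

  # The attacker substitutes any email; per the article, Apple returned a
  # validly signed JWT (id_token) asserting that address as verified.
  resp = requests.post(APPLE_AUTH_ENDPOINT, json={"email": "victim@example.com"})
  print(resp.json().get("id_token"))  # signed with Apple's key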


Oh man, that is classic sloppy web development.

It's often so easy to reach for the values in the params/payload first, because you're already working with them, rather than remembering to use the session values.

This would be a great audit to do for entire codebases: just check all places that are using params/payload values and see if there's actually already a session value that should be used instead, as in the sketch below.
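A minimal sketch of that bug class, assuming a hypothetical Flask-style service (none of these names are Apple's; sign_token stands in for the real JWT-signing step):

  from flask import request, session

  def sign_token(email: str) -> str:
      return "signed-jwt-for-" + email  # stand-in for real JWT signing

  # Vulnerable: trusts the client-supplied payload value.
  def issue_token_vulnerable():
      email = request.json["email"]      # attacker-controlled
      return sign_token(email)

  # Fixed: derives the email from the authenticated session instead.
  def issue_token_fixed():
      email = session["verified_email"]  # set server-side at login
      return sign_token(email)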


Don't just audit. Strengthen the critical APIs with types. A user-provided string is not the same type as an authenticated identity. Make the developer be explicit about the critical mistake about to be made. Perl had this down decades ago with taint checking.
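One way to encode that idea in Python, as a rough sketch (emails_on_file and the other names are illustrative, not any real API):

  from typing import NewType

  UntrustedStr = NewType("UntrustedStr", str)    # anything from params/payload
  VerifiedEmail = NewType("VerifiedEmail", str)  # only produced by verification

  def emails_on_file(user_id: str) -> set:
      return {"alice@example.com"}  # stand-in for a database lookup

  def verify_ownership(user_id: str, email: UntrustedStr) -> VerifiedEmail:
      if email not in emails_on_file(user_id):
          raise PermissionError("email does not belong to this account")
      return VerifiedEmail(email)

  def sign_token(email: VerifiedEmail) -> str:
      # a type checker rejects passing an UntrustedStr here directly
      return "signed-jwt-for-" + email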


This, a thousand times. Don't just fix the bug; fix the process that led to the bug. Developers are (usually) not malicious, but we are often working with tooling that makes mistakes easy to make, difficult to detect, and grave in consequences. Fix the tooling, not just the bug.


I wonder if encoding the query params with some dumb encoding (maybe a custom variant of base64?) and forcing decoding at every usage site would make it inconvenient enough to cause programmers to use the session values instead?


That also explains how Apple could rule out abuse from the logs, which some commenters have doubted.

If they have all the JWTs, seeing if one had a different e-mail than the logged-in user should be fairly doable.
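Conceptually, the check is nothing more than this toy sketch (the data and lookup are made up):

  # (session user, email that ended up in the JWT)
  issued = [
      ("user_bananas", "bananas@example.com"),
      ("user_bananas", "carrots@example.com"),  # mismatch -> possible misuse
  ]

  def emails_for(user: str) -> set:
      return {"bananas@example.com"} if user == "user_bananas" else set()

  for user, email in issued:
      if email not in emails_for(user):
          print("suspicious token: %s got a JWT for %s" % (user, email))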


Oh. OK, so you did have to have an existing logged-in session for any account, and could then leverage that to get a token for another account by changing out the email?


Ah. So this creates a valid JWT for any email you want, but it is now associated with your own Apple account?


> but it is now associated with your own Apple account?

On the backend, maybe they can look up who requested a token at a time.

But otherwise, no it doesn’t seem to be


This is certainly logged.


I don't have any way to confirm, but yes, this is what I interpreted from the article.


My understanding is that the token itself is fine and within spec. But they altered the flow to accept an email address in one of the request payloads which opened the door for spoofing the email address. I've never seen an OAuth or OpenID flow that relied on the payload for identity.


This is likely it IMO. They probably pass the preferred email around as a parameter and the user can jump into the flow and modify it.


Wait... I don’t get it.

Why was Apple signing a response JWT when the user only supplied an email?

I’m not a web guy so I just don’t see what they were going for here.


The gap is that Apple fully verified identity via the normal OAuth flow, and then once identity was verified they gave the user control over what email to include in the token. The idea is that the user can include their own email or an Apple relay email (that forwards to their email). The bug seems to be in that step: an attacker could provide an email that is neither their own nor an Apple relay email.

Your Apple account is safe, but if a third party trusts the signed Apple payload without further verification of the email, an attacker could sign in as you on the third-party app.


Apple deviated from OpenID Connect, but...

The third party is not supposed to link accounts from an OpenID Connect sign-in by email address, which could change or go away at any time, and is also not guaranteed to be unique.

Rather, they should use the 'sub' claim which is meant to be the same over the lifetime of the user account with the issuer.
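For example, a relying party's account-linking step might look like this sketch (names hypothetical):

  def find_or_create_account(claims, accounts):
      key = ("apple", claims["sub"])  # stable per user, per relying party
      if key not in accounts:
          accounts[key] = {"display_email": claims.get("email")}  # cosmetic only
      return accounts[key]

  accounts = {}
  find_or_create_account(
      {"sub": "XXXX.XXXXX.XXXX", "email": "user@privaterelay.appleid.com"},
      accounts,
  )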


Yup, this is it. It's a super common issue though, and we see it all the time. An app expects the email of a user to be static and keys off of it. User gets married, changes their email, and loses their access.

At best it's a work stoppage when someone changes their email. At worst you make assumptions about reusability of emails and give data access to the wrong account.


That makes sense.

Seems a little too obvious to exist, but people make mistakes.

It’s just that JWT/JWS/JOSE already have questionable security options, so you'd think you would be extra super careful if you are using them. (Biggest flaw off the top of my head is that the token literally says what algorithm is used, with a NONE option, meaning you can just switch to “none” and forge JWTs against anyone that didn’t know not to accept those; second would be that they mix symmetric and asymmetric algorithm options.)
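The standard mitigation is to pin the accepted algorithm list on the verifying side, so a forged {"alg": "none"} header (or an RS256-to-HS256 downgrade) is rejected outright. A minimal sketch with PyJWT; the audience value is a placeholder:

  import jwt  # PyJWT

  def verify(token, apple_public_key):
      return jwt.decode(
          token,
          apple_public_key,
          algorithms=["RS256"],        # never take this list from the token header
          audience="com.example.app",  # placeholder client_id
      )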


I noticed after reading your comment that HS256 is marked as “required” for compliant implementations. In practice, I have only ever seen signers implement RS256 though...


In OIDC you can actually specify supported key types in the discovery process, and the IdP always decides the key type anyway.

HS256 relies on shared secrets, so anyone who can verify a token can also change it. RS256 allows you to download the IdP keychain every once in a while and verify tokens offline.
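A sketch of that offline pattern with PyJWT 2.x; Apple publishes its signing keys as a JWKS at https://appleid.apple.com/auth/keys, and the audience value here is a placeholder:

  import jwt  # PyJWT

  jwks_client = jwt.PyJWKClient("https://appleid.apple.com/auth/keys")

  def verify(token):
      signing_key = jwks_client.get_signing_key_from_jwt(token)  # matched by 'kid'
      return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                        audience="com.example.app")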


Yes, that’s what I meant. I only see RS256 implemented by identity providers (and advertised via .well-known/openid-configuration). This makes sense, since most identity providers are decoupled from their clients and thus cannot feasibly share a symmetric key. I am confused what is meant in the JWA specification[1] by “HS256: required” under “implementation requirements” in the table of allowed values (the entry for RS256 reads “recommended” for that column).

[1]: https://tools.ietf.org/html/rfc7518#section-3.1


There is no endpoint in the OIDC spec that allows token creation aside from the redirect after login (implicit flow) and the token endpoint, though. I don't imagine this vulnerability could've happened at either endpoint, but I could be wrong. I'll definitely be looking into how Apple implements things before I use their login for any of my own applications.


I think it's actually the OIDC access token and not the ID token. The OIDC spec does not mandate any structure for the access token, but letting it be a JWT isn't out-of-spec.


I do not believe that Apple yet uses the access token bit.

OAuth access tokens also are not meant to be used for authentication; you need either a separate token with appropriate security (as OpenID Connect does), or to wedge additional security on top of access tokens, as Facebook did with Connect.

This is basically because access tokens are meant to be messages about allowed access to the API resources, not messages to the client software about the user.


A few popular IdPs (e.g. Keycloak) have nothing but a semantic difference between access and ID tokens; they're both signed JWTs (EC/RSA or, shudder, HMAC shared secrets) with different typ fields.


Apple supposedly marks certain beta builds with a bounty multiplier. I say supposedly because like their "research iPhones" they mentioned it in a presentation once and I never heard about it again.


This might be what you're referring to:

From https://developer.apple.com/security-bounty/payouts/

"Issues that are unique to designated developer or public betas, including regressions, can result in a 50% additional bonus if the issues were previously unknown to Apple."


Yes, that's it.


I'm guessing that the research iPhones were given to a very select group of security researchers with track records of reporting important vulnerabilities under some kind of NDA.


1. Still never heard of anyone getting them and 2. that’s worse than useless.


Oh, for sure. I should clarify that I meant that they received the iPhones under an NDA, not that they reported bugs under an NDA (aside from the 90-day disclosure to get any bounties).


Word on the street suggests they don’t exist: https://twitter.com/thegrugq/status/1236264193906495488


I guess it's unrealistic to assume that their supply chain would be secure enough for these that no one would have heard anything.


Right. You can find pictures of actual internal devices all over the internet (supposedly some people will even sell them to you), so it's quite strange to not hear anything about these. With Apple going after Corellium, I think many researchers are thankful for the various exploits we've had recently that have kept iPhone open.


I'm fine with pre-release bugs being reported under an NDA. If pre-release bugs are publicly disclosed that is arguably a punishment for companies who seek that validation early in the cycle rather than later.


How is this something that can happen? I mean, the only responsibility of an "authentication" endpoint is to release a JWT authenticating the current user.

At least from the writeup, the bug seems so simple that it is unbelievable that it could have passed code review and testing.

I suspect things were maybe not as simple as explained here; otherwise this is at the same incompetence level as storing passwords in plaintext :O.


Apple has had more simple "unbelievable" bugs, e.g

https://news.ycombinator.com/item?id=15800676 (Anyone can login as root without any technical effort required)

And to top it off (https://news.ycombinator.com/item?id=15828767)

Apple keeps having all sorts of very simple "unbelievable" bugs.


You can't forget the infamous "goto fail": https://www.imperialviolet.org/2014/02/22/applebug.html

There seems to be kind of a common theme to these:

- SSL certificates not validated at all

- root authentication not validated at all

- JWT token creation for arbitrary Apple ID users not validated at all

I think these are all very likely due to error and not malice, but it's pretty crazy how these gaping holes keep being found.


More recent example of Apple "undoing" patches: https://www.synacktiv.com/posts/exploit/return-of-the-ios-sa...


Last year (or maybe 2018?) my employer hired an external consultant to give engineers security training (all sessions were optional; they offered a few on different topics, and engineers could sign up for the ones that interested them). In one of the sessions I signed up for, during the pre-session chat (while waiting for everyone to show up in the conference room), the external trainer "casually" remarked that "if you have an Android phone, you should throw it out of the window right now and buy an iPhone instead". That's the point where I lost all respect for them.

(The session itself was ok-ish. It was some training about XSRF, nothing special either.)

(That incident also prompted me to purchase a sheet of [citation needed] stickers from xkcd to put on my laptop, so the next time this kind of thing happens I can just point to the sticker. But I haven't gotten a chance to do that yet since receiving the stickers.)


This was pretty true not long ago. The window for OEM software patches on Android is still notoriously short, whereas Apple's first 64-bit phone, the 5s from fall 2013, is still getting patches (May 20th was the last one, iOS 12.4.7).

Apple pioneered usable security with TouchID and the secure enclave; a lot of Android fingerprint readers were gimmicks for years, same with the face unlocks. https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...

They also invest piles of money into privacy https://apple.com/privacy (1 minute overview), https://apple.com/privacy/features (in-depth with links to whitepapers).

I imagine that's where your teacher was coming from.


Yes, I'm not gonna defend Google's privacy issues, but privacy is totally different from security. People tend to confuse them. I understand it if the average Joe gets confused. But if you are a "security consultant" and you still have no idea what the difference between them is, then that's a big problem.

Regarding security, see examples like https://qz.com/1844937/hong-kongs-mass-arrests-give-police-a...


For privacy, I guess you can have full confidence that Apple has the intention to keep your data private. But do you have full confidence that Apple has the competence to fulfill that intention?

Google doesn't have the intention to keep all your data private, sure. They are an ads company after all. But for things they want to protect, in most cases they are competent enough to protect them.

(Disclaimer: I also worked for Google, but the "employer" I mentioned in my original comment was not Google)


Fortunately, Google seems to have separated security updates from OEM updates on some newer phones. The phone I bought in November 2018 is receiving monthly security updates via Play Services and has been on the May patch level for some time.


> The phone I bought in November 2018

I think it’s too early to claim anything for that one.


It’s none of my concern what camp people fall into... but...

I hired a very high-level pen-test company; they mandated iPhones for their company work. They were the best infosec company we’ve ever hired. Sample of one.

I wouldn’t suggest iPhones are safer than Android, but I also wouldn’t suggest in any way that they are less safe overall.


Apple has really lost their touch; software quality has declined dramatically.


Anecdotally, I upgraded my wife's iMac to Catalina and she's experiencing issues (rendering latency) she never had before (she hadn't upgraded the OS since buying it 4 years ago). I figured it was good to get on the latest and greatest for security reasons; now she won't let me touch her computer anymore.


I used to be in the latest-for-security camp as well. But after all these years I am starting to understand why people don't update.

It is extremely frustrating. Especially when Catalina removes features that were working perfectly.


I’m still on High Sierra, most recent 10.13.6 security update was ~3 days ago.

I’ll upgrade when some piece of software I need to use requires it.


As an Apple user for decades, have to say that High Sierra seemed to be one of their better recent releases.

I have an iMac that uses it and a Mac Mini that is on Mojave and for some reason, High Sierra just feels more stable with some software.

Firefox runs fine on High Sierra and has crashed multiple times in the past few weeks after using it on Mojave.

Maybe I'm just biased having used High Sierra for so long and dreading Catalina lol.


I went to Mojave and that went without trouble except I lost the ability to use my external GPU, but I knew that.


Mojave removed the Facebook, Twitter, Vimeo, and Flickr integrations, none of which I use, so that would be good.

But I’m not aware of any new feature in Mojave I want or need, so the 2013 MacBook Pro Retina I’m using will stay on High Sierra for today :)


FWIW, they added the ability for these companies to control their own integrations by publishing iOS and Mac apps.

It's just that so far, only Twitter has bothered to do so on Mac. Even software like Slack which does so for iOS just hasn't bothered to port that code to their Mac app - most likely because of the Mac app using a different Electron-based codebase.


I'm about this close to making an e-petition demanding Snow Leopard be made FOSS for posterity, now that they have successfully milked us all multiple times. My 2011 hardware works just fine, and 'obsolete' is meaningless in a world where IRC was replaced by Slack and where Visual Basic tutorial fodder from 1998 becomes MVP web products...


Welcome to the late adopter group. Never upgrade, unless it is absolutely necessary.


does it really matter though?


Someone with an AV production studio totalling over $100k in Apple products and a few million in outboard gear whose drivers worked fine before may just care a bit...

The whole pro-multimedia production crowd probably cares...

(vs the current Apple paramour: the multimedia consumer who wants to order pizza and get back to netflix on their phablet or whatever..)


I'm not sure if Apple ever had quality.

They are the Nintendo of Computing. They have some novelties, but in general they are average at their best. Notice that both Nintendo and Apple are big advertisers.


It is more fashion than technology. Like ordinary clothes from famous brands. Expensive, praised yet nothing special.


My guess is that it has to do with that private relay because OAuth isn't too complex by itself. During the OAuth flow they probably collect the user preference, (if needed) go out to the relay service and get a generated email, and POST back to their own service with the preferred email to use in the token.

If that's it, it's about as bad as doing password authentication in JavaScript and passing authenticated=true as a request parameter.

Edit: Looking at the OAuth picture in the article, my guess would be that there's a step added between 1 and 2 where the server says "what email address do you want here" and the (client on the) user side is responsible for interacting with the email relay service and posting back with a preferred email address. Or the server does it but POSTs back to the same endpoint, which means the user could just include whatever they want right from the start.

The only thing that makes me think I might not be right is that doing it like that is just way too dumb.

AND I'm guessing a bunch of Apple services probably use OAuth amongst themselves, so this might be the worst authentication bug of the decade. The $100k is a nice payday for the researcher, but I bet the scope of the damage that could have been done was MASSIVE.

Edit 2: I still don't understand why the token wouldn't mainly be linked to a subject that's a user id. Isn't 'sub' the main identifier in a JWT? Maybe it's just been too long and I don't remember right.


> Edit 2: I still don't understand why the token wouldn't mainly be linked to a subject that's a user id. Isn't 'sub' the main identifier in a JWT? Maybe it's just been too long and I don't remember right.

The details are very sparse in the post, but I believe the "sub" claim is a unique and stable value for the user against a particular relying party (based on that being a requirement in OpenID Connect.)

You _should_ be relying on sub rather than email address, which is not guaranteed to be sent every time, to stay stable, or be unique across accounts.

So while this was a zero day in terms of providing arbitrary email addresses as verified addresses, it may have not led to any account compromises.


The only thing I can think of is some 'test mode' override which inadvertently got enabled in production.

1. Don't add these.

2. If you must add something, structure it so it can only exist in test-only binaries.

3. If you really really need to add a 'must not enable in prod' flag then you must also continuously monitor prod to ensure that it is not enabled.

Really hoping they follow up with a root-cause explanation.


Apple? No way.


I found a customer data leak on their homepage.

Took about two years to fix. Gave me credit. No money.

I'm not surprised here.


They exploited you just like their customers, from whom they take money for overpriced lock-in products.


This is basically bad coding. I've never used an OAuth system, but you are supposed to just validate the token, not any additional incoming data; the number one rule of distributed systems is "never trust the client".

They basically made a huge fundamental design mistake.


> code review and testing.

Sometimes code review is just "Please change the name of this function", and testing is just testing the positive cases, not the negative ones. Yes, even in companies like Apple and Google.


Wow. That's almost inexcusable, especially due to the requirement forcing iOS apps to implement this. If they hadn't extended the window (originally April 2020 -> July 2020), so many more apps would have been totally exploitable.

After this, they should remove the requirement of Apple Sign in. How do you require an app to implement this with such a ridiculous zero day?


I’m of the mind that just about any security bug is “excusable” if it passed a good faith effort by a qualified security audit team and the development process is in place to minimize such incidents.

The problem I have is that I can’t tell what their processes are beyond the generic wording on this page[1].

[1] support.apple.com/guide/security/introduction-seccd5016d31/web


Even if there was clear evidence that this system underwent a proper security audit, with a failure this basic you would have to ask why it didn't work. What is going on inside Apple that brought them to the point of releasing a lock that simply opens with any key, despite the efforts of their state of the art lock design process and qualified lock auditors?


Writing some test cases for "can anyone generate a valid token" or "does an invalid token allow access" should be the first thing to do when writing an auth system.
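For instance, something along these lines, in pytest style against a hypothetical token endpoint (the route, status codes, and client fixture are illustrative, not Apple's):

  def test_token_email_must_match_session(client):
      client.login(user="alice", email="alice@example.com")
      resp = client.post("/auth/token", json={"email": "bob@example.com"})
      assert resp.status_code == 403  # refuse emails the session doesn't own

  def test_unauthenticated_request_gets_no_token(client):
      resp = client.post("/auth/token", json={"email": "alice@example.com"})
      assert resp.status_code == 401  # no session, no token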


Your test cases make sense, but they ignore an obvious hypothetical possibility: The OIDC implementation was a well-tested core feature (with the tests that you mention), but the email proxy feature was a bolt on that was somehow not considered risky (so it could easily have bypassed a full, renewed security audit).

Also, it's not sufficient to "have a test case". The intent and the implementation must be coherent.


> That's almost inexcusable

No, it's completely inexcusable. There should never be such a simple, major security vulnerability like this. Overlooking something this basic is incompetence.


I believe the deadline is June 30. [0]

[0] - https://developer.apple.com/news/?id=03262020b


This is an amazing bug; I am indeed surprised this happened in such a critical protocol. My guess is that nobody clearly specified the protocol, since anyone would have been able to catch this in an abstract English spec.

If this is not the issue, then the implementation might be too complex for people to compare it with the spec (gap between the theory and the practice). I would be extremely interested in a post mortem from Apple.

I have a few follow up questions.

1. seeing how simple the first JWT request is, how can Apple actually authenticate the user at this point?

2. If Apple does not authenticate the user for the first request, how can they check that this bug wasn’t exploited?

3. Can anybody explain what this payload is?

{ "iss": "https://appleid.apple.com", "aud": "com.XXXX.weblogin", "exp": 158XXXXXXX, "iat": 158XXXXXXX, "sub": "XXXX.XXXXX.XXXX", "c_hash": "FJXwx9EHQqXXXXXXXX", "email": "contact@bhavukjain.com", // or "XXXXX@privaterelay.appleid.com" "email_verified": "true", "auth_time": 158XXXXXXX, "nonce_supported": true }

My guess is that c_hash is the hash of the whole payload and it is kept server side.


The bug is not in the protocol. The bug is in the extra value-add that Apple was doing by letting the user choose another email address. 1. The account takeover happens on the third-party sites that use Apple login. 2. This seems like a product decision to add value for the user by providing a relay email address of the user's choice. From the report: `I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid.`

It's not a bug in the protocol or the security algorithm. A lock by itself does not provide any security if it's not put in the right place.


Exactly, a case of broken security by overdoing privacy.


For #3 it's part of the JWT ID Token. Take a look at https://openid.net/specs/openid-connect-core-1_0.html#Hybrid...


It's exploitable through apple's web-based login flow used by web sites and Android devices. There are multiple round trips between the user and apple, and state is passed over the wire. The state could be modified at a certain point in the flow to cause the final result (the JWT) to be compromised. The flow is still the same, they seem to have fixed it entirely by adding checks server-side.


(sorry, WAS exploitable)


All your questions can be answered by reading “Sign in with Apple REST API” [1][2]:

1. User clicks or touches the “Sign in with Apple” button

2. App or website redirects the user to Apple’s authentication service with some information in the URL including the application ID (aka. OAuth Client ID), Redirect URL, scopes (aka. permissions) and an optional state parameter

3. User types their username and password and, if they are correct, Apple redirects them back to the “Redirect URL” with an identity token, authorization code, and user identifier for your app

4. The identity token is a JSON Web Token (JWT) and contains the following claims:

• iss: The issuer-registered claim key, which has the value https://appleid.apple.com.

• sub: The unique identifier for the user.

• aud: Your client_id in your Apple Developer account.

• exp: The expiry time for the token. This value is typically set to five minutes.

• iat: The time the token was issued.

• nonce: A String value used to associate a client session and an ID token. This value is used to mitigate replay attacks and is present only if passed during the authorization request.

• nonce_supported: A Boolean value that indicates whether the transaction is on a nonce-supported platform. If you sent a nonce in the authorization request but do not see the nonce claim in the ID token, check this claim to determine how to proceed. If this claim returns true you should treat nonce as mandatory and fail the transaction; otherwise, you can proceed treating the nonce as optional.

• email: The user's email address.

• email_verified: A Boolean value that indicates whether the service has verified the email. The value of this claim is always true because the servers only return verified email addresses.

• c_hash: Required when using the Hybrid Flow. Code hash value is the base64url encoding of the left-most half of the hash of the octets of the ASCII representation of the code value, where the hash algorithm used is the hash algorithm used in the alg Header Parameter of the ID Token's JOSE Header. For instance, if the alg is HS512, hash the code value with SHA-512, then take the left-most 256 bits and base64url encode them. The c_hash value is a case sensitive string

[1] https://developer.apple.com/documentation/sign_in_with_apple...

[2] https://developer.apple.com/documentation/sign_in_with_apple...
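As a sketch, the nonce/nonce_supported rule described above translates into verifier logic like this (function names are illustrative):

  from typing import Optional

  def check_nonce(claims: dict, sent_nonce: Optional[str]) -> None:
      if sent_nonce is None:
          return                                    # we never sent one
      if "nonce" in claims:
          if claims["nonce"] != sent_nonce:
              raise ValueError("nonce mismatch; possible replay")
      elif claims.get("nonce_supported"):
          raise ValueError("nonce missing on a nonce-supported platform")
      # otherwise the platform can't echo nonces; treat the nonce as optional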


Let's start with the fact that Apple is forcing people to use an E-mail address as a user ID. That's just straight-up stupid.

How many members of the public think that they have to use their E-mail account password as their password for Apple ID and every other amateur-hour site that enforces this dumb rule?

MILLIONS. I would bet a decent amount of money on it. So if any one of these sites is hacked and the user database is compromised, all of the user's Web log-ins that have this policy are wide open.

Then there's the simple fact that everyone's E-mail address is on thousands of spammers' lists. A simple brute-force attack using the top 100 passwords is also going to yield quite a trove, I'd imagine.

Apple IDs didn't originally have to be E-mail addresses. They're going backward.


The thing that made this bug possible is that, while your Apple ID has to be an email address, Apple has a mechanism to avoid exposing it to third parties - unlike Google's or Facebook's single sign-on implementations. The bug seems to be in the step between verifying your identity and telling Apple whether you would or would not like your email address to be exposed.

If anything, the issue is that third parties treat the email address as a unique, unchangeable identity, and then agree to rely on Apple's assertion of what your email address is. But given how hard identity is - and the challenges in dealing with passwords, account recovery, and name changes at scale - it's a pretty reasonable tradeoff to make.


The point wasn’t that the address is exposed by Apple; it’s that E-mail addresses are widely exposed by USERS, out of necessity.


Sign in with Facebook also lets the user choose whether or not to share their email address.


Is it me or is this writeup low on details? There are a couple of commenters saying that this is a great writeup, but all it amounts to is:

1. what sign in with apple is

2. sign in with apple is like oauth2

3. there's some bug (not explained) that allows JWTs to be generated for arbitrary emails

4. this bug is bad because you can impersonate anyone with it

5. I got paid $100k for it


I think the write up is so short because the bug is so simple. Send a POST to appleid.apple.com with an email address of your choice, and get back an auth token for that user. Use the auth token to log-in as that user. It's that simple.


Did it show what URL you had to send the request to? It looked to me like that was redacted. I'm guessing that that URL would have been in the developer documentation.


The URL has "X"s in it. I don't know if that means it is redacted or is variable.

Note that when they give the POST request, they say "Sample Request (2nd step)".

But what is step 2? The diagram above shows step 2 as a response, not a request. At least that's how I interpret an arrow pointing back toward the user. So the write-up conflicts with the diagram.

How do you resolve that conflict? One guess is that "Sample Request (2nd step)" should say "1st step" instead.

Another guess is that the arrow directions don't necessarily always indicate whether a step is a request or a response, so that step 1 could be a request and response, and step 2 could be another request and response that POSTs to a secret URL that was learned about in step 1. (This guess could make sense because the request is a JSON message with just the email field. There must be credentials somewhere, so either it's redacted or some kind of credentials were given another way, like in step 1.)

If this second guess is right, then a follow-on guess is that the crux of the bug is that in step 1, you sign in with a particular email, then Apple says "OK, now here's a secret URL to call to get a JWT token", and then in step 2, you change email, and it doesn't notice/care that you changed emails between step 1 and 2.


It seems low on details because the exploit was incredibly simple. AFAICT you didn't have to do anything special to get the signed token, they just gave it out.

> Here on passing any email, Apple generated a valid JWT (id_token) for that particular Email ID.


Based on the information given, I don't know if you can really impersonate people. Rather, you can give an arbitrary email address and have it represented as valid, _against your account_.

You need an additional bug on the relying party for this to allow someone to gain access - that they associate the apple account based on the unstable email address claim rather than the stable "sub" claim.


it's literally that simple.


Wow, I'm so glad that apple forced me to implement this broken garbage into my apps!

For those not aware: some time ago Apple decided it would be a good idea to develop their own sign-in system, and then force all apps on their store (that already support e.g. Google Account login) to implement it.

So they brought a huge amount of additional complexity in a large amount of apps, and then they fucked up security. Thank you apple!


Actually, developers are only forced to implement it _if_ they support logging in with other social auths.

A big problem of many apps is that they only had a "log in with google"/"log in with facebook" button, which is very problematic for people who have neither.

On Android this is more acceptable since you need a Google account for the OS itself anyway.


But if I support "create an account" and "sign in with Google", then you don't need a Google account, yet I still need to implement Sign in with Apple.


> On Android this is more acceptable since you need a Google account for the OS itself anyway.

I don't think you do, I'm pretty sure I've skipped that step during device setup on occasion.


I still trust Apple over a rando site or SaaS app. No system is flawless.


I think I trust Apple over a random website too, but was adding an additional kind of sign in and forcing everyone to use it even needed in the first place?


I'm very willing to believe that this move was driven by actual user research. As a user, the last thing I want to do is create a user name and password for your app, click a link to validate my email, then enter my password again into some sort of cross platform widget that doesn't support keychain autofill. Unless it's an essential service like a bank or an airline, I'll probably opt out of using it.

I'm also very lazy when it comes to payment methods. Trying to order food and the app doesn't support Apple Pay? Delete it and do something else.

Clearly there are issues with the entrenchment of Apple at the center of all this, and these problems would be better solved with open standards, but the consistency and convenience make an actual, measurable difference in the end user's daily life.


Almost like apple and google have dangerously monopolistic positions in the mobile sphere and we need meaningful anti-trust action to claw back user freedom and choice?


Fortunately some rando site or SaaS app doesn't have the leverage to force me to implement additional garbage! Apple does, and did. I'm still surprised that this didn't trigger an antitrust investigation like when Microsoft abused their monopoly to push internet explorer. This is exactly the same thing, if not worse.


I'm still surprised that this didn't trigger an antitrust investigation like when Microsoft abused their monopoly to push internet explorer. This is exactly the same thing, if not worse.

Um… no.

Microsoft gave Internet Explorer away for free when Netscape was selling their browser to businesses, an obvious attempt to undermine Netscape.

They also threatened to cancel the Windows 95 licenses for companies like HP that shipped Netscape with their computers instead of Internet Explorer. That would have essentially put them out of business.

Because Microsoft had 95% of the operating-system market share, it had signed a consent decree with the federal government that they wouldn’t use their monopoly in operating systems to their advantage in web browsers, which were a new software category then.

So of course, they bundled Internet Explorer with Windows 95 and claimed the two couldn't be separated, an obvious lie, insisting Internet Explorer was a critical part of the operating system.

All of this orchestrated by future humanitarian Bill Gates, who was quoted as saying then Microsoft needed to “cut off Netscape’s air supply.”

Even in the United States, Apple isn’t a monopoly with about 40% market share. Everything Apple mandates is with companies who've contractually agreed to be part of Apple's developer program and abide by its rules.

Nobody agreed to not ship a competing web browser back in the day.


> Even in the United States, Apple isn’t a monopoly with about 40% market share.

This is a non-argument. Being part of a duopoly doesn't stop you from behaving like a monopoly. If you don't play by Apple's or Google's rules you essentially lose 50% of the market.


Monopolies aren’t illegal; it’s the use of monopoly power in corrupt ways that’s illegal.

I suspect if you don’t want to deal with Apple or Google directly, you can create web apps.


By the same logic, if you didn't want to use Internet Explorer you could use Fax or Mail?

> Monopolies aren’t illegal; it’s the use of monopoly power in corrupt ways that’s illegal.

That is exactly my point. You don't technically need a monopoly to abuse your power in corrupt ways. A duopoly is good enough to abuse your dominant position to the detriment of users and the market.


I agree that the requirement from Apple here is kind of dumb, but I don’t see how it would not be in the best interest of a user of an app on an iOS device to have the option to sign in with an Apple ID. It also seems silly to consider it “garbage” when you are already using a Google Account solution that is essentially the same thing.


It's garbage because it was forced into already functioning apps with the threat of removal, and because it evidently has gaping security holes.


Yeah but I don't trust them over Google or Facebook when it comes to server side security, and this proves it.


I will still take an Apple security incident over the corporate surveillance apparatus that is Facebook and Google.


Eh. I’ll take a found and fixed security issue in a feature aimed at keeping my email address private over Google and Facebook’s invasive behaviors.


OTOH, rando SaaS flaws don't compromise the security of billions


haveibeenpwned would say otherwise.


Just want to mention something about the id_token provided. I'm on my phone, so I don't have Apple's implementation handy, but in OIDC the relying party (Spotify, for example) is supposed to use the id_token to verify the authenticated user, specifically via the sub claim in the JWT id_token.

https://openid.net/specs/openid-connect-core-1_0-final.html#...

It's likely (although like others have noted, this is scant on details), that this value was correct and represented the authenticated user.

A relying party should not use the email value to authenticate the user.

Not contesting that this is a bug that should be fixed and a potential security issue, but perhaps not as bad.

Anyone else? Am I reading this right?


The apple endpoint returned an apple-signed jwt with an email of the attacker's choice in the sub field. It didn't even have to be an email associated with an apple id. Relying parties verify the id_token against Apple's cert and that is Apple's guarantee that the email is correct.


the sub field does not contain an email address in SiwA.


So the way I believe it works is that the vulnerability was that an arbitrary email could be used to generate an Apple-signed JWT. Server-side validation would be unable to tell that the token wasn't issued on behalf of that user, since Apple actually signed it.


The SiwA identification is based on "sub". The email address is an important attribute, but you aren't supposed to link accounts based on it, since the user can change the email address or revoke the email proxy at any time.


True, email shouldn't be used when you can identify by unique ID. I doubt the bug was even exploitable in most apps. Apple just paid magnitudes more than its severity warranted.


Wow, I'm in shock. How could Apple let this one slip in? When I was a junior fullstack dev I had to design a very similar system, and this was one of the very basic checks I had in mind back then. I don't know how anyone could excuse such a basic bug in such a critical service.


Apple has let all sorts of things slip in which seem unbelievable.

e.g https://news.ycombinator.com/item?id=15800676 and

https://news.ycombinator.com/item?id=15828767

So I don't get shocked anymore seeing Apple security issues.


Excellent writeup! About 4 months ago, I wrote a comment[0] on HN telling folks how Apple simply omitted the server-side validations from their WWDC videos. And given the lack of good documentation at the time, WWDC videos were what most developers were following.

Even then, the only "security" that developers had was that the attacker wouldn't know the victim's Apple userId easily. With this zero-day attack, it would have been trivial for many apps to get taken over.

[0] https://news.ycombinator.com/item?id=22172952


Your original post has several replies explaining why this is not a security issue. The token you ultimately get is a signed concatenation of 3 base64-encoded fields, and unless you decided to manually separate and decode these without verification (instead of doing the easy thing and just using a standard OIDC library), you would not have any user data that could ultimately result in a security issue.


After observing its endless flow of security and reliability bugs, I am beginning to think that the decline of Apple's overall software quality over the past several years is more of a systemic problem.

https://www.bloomberg.com/news/articles/2019-11-21/apple-ios...

Looks like Federighi agrees with this diagnosis and is trying to improve the overall development process, but I'm not sure it can really be improved without changing the famously secretive corporate culture. At Apple's level of software complexity, you cannot really design and write quality software without involving many experts' eyes. And my friends at Apple have complained to me about how hard it is to get high-level context on their work and to do cross-team collaboration.

And IMO, this systemic degradation of software quality coincides with Bertrand's departure; he had allowed a relatively open culture, at least within Apple's software division. I'm not an insider, so this is just a pure guess though.


"At the level of Apple's software complexity"

This definitely wasn't complex in any shape or form. This was very basic.


Replace "zero-day" with "privately reported security bug for which I got $100k"

That's not how zero-day works


It was a zero day up until the first report was made.


That's always true though. 0-day implies it was discovered being actively exploited or that it was released on the 0th day it was discovered.


I believe “0 day” actually refers to the number of days that the vendor has had to fix the issue, not how many days it’s been since it’s discovered. For example, there might be a substantial delay between bug discovery and actual disclosure to the vendor–I usually take a couple days to write up a nice explanation and PoC. If I had found something and then published it publicly the next day without disclosing it, it’d still be a zero day.


You're right I should have said 0 days since it was disclosed to the vendor. That would be more accurate.

Regardless, this bug is definitely not 0 day given that it was disclosed to the vendor last month.


It doesn't matter what you "believe"

That's not what that word means. Zero-day refers to actively exploited bugs. Stop hijacking words just to overhype your research.


I didn't write the article, so I'm not sure what you mean by overhyping my research. But what you mentioned is referred to as "zero day exploited in the wild"; a zero day doesn't have to actually be actively exploited.


my brain mis-parsed as:

(sign in) with (apple zero day)

which is kind of appealing


I'm not sure that's a mis-parse. Anyone could sign in with the Apple zero day.


If Apple launched a product called Apple Zero Day - like haveibeenpwned maybe - Then the top search results for apple exploits would be an advertisement :)


For anyone else wondering, the correct parse seems to be (Sign in with Apple) (Zero Day)


I did that too and wondered if they were finally offering a real bug bounty platform…


They have a bug bounty program: https://developer.apple.com/security-bounty/

I actually think they have a good approach: rewarding major finds with good payouts and avoiding the flood of informational and low-level web app ‘bugs’.


I am well aware of the bug bounty program. I think it needs work.


I’m not a security researcher, but this guy got paid $100k. Seems to be working?


One example of a bug being fixed and a researcher being paid does not mean it works, generally.


Me too! For me it was because I've usually seen "zero day" written as "0day".


I think they were missing a colon, like one of those old-time jokes:

Sign in with Apple: zero day flaw


The write-up is not very clear in my opinion. The graph seems to show that there are 3 API calls (maybe there are more API calls in reality?).

And if I understand this correctly, the issue is in the first API call, where the server does not validate whether the requester owns the Email address in the request.

What confuses me is where the "decoded JWT’s payload" comes from. Is it coming from a different API call or is it somewhere in the response?


And the choice of a black arrow on top of an almost-black background... I am not a designer, but that's just killing my eyes here.


"A lot of developers have integrated Sign in with Apple since it is mandatory for applications that support other social logins" -- How pathetic Apple is to force their own service on developers!!


Why are you surprised? They force you to use the App Store. They force you to process payments through their systems. They force you to comply with many things. How is this any different?


Wow, that’s a really simple bug. Kudos to the OP for even trying that. Most people would just look elsewhere, thinking Apple of all companies would get such a basic thing right.


What do you mean simple? The result/exploit is simple, but what is the reason the bug is there? Surely the Apple code base is not that simple.


Am I understanding the article right: the endpoint would accept any email address and generate a valid JWT without verifying the caller owned the email address?

If so, what extra validation did Apple add to patch the bug?


$100,000 (!)

Props to Apple for raising the bar on bounties!


Feels low given the impact?


They’ve seemingly been fairly responsive for web-based issues.


With all those high-profile third parties using Apple ID, what would happen if somebody stole/deleted/damaged my data/assets on Dropbox/Spotify/Airbnb/...? Would I sue the provider who would sue Apple? But does Apple provide any guarantees to the relying parties? And if not and the only way is to depend on the reputation when choosing the ID providers you want to support, how would anyone want to support Apple ID after this? And could they not use it if Apple forces them to...?


The ToS of every service has a liability waiver.


Which doesn't hold in court in most of the civilized world.


Absolutely astonishing. The internal controls at Apple seem to be borderline non-existent.


The average IQ and experience of their software developers has dropped remarkably over the past decade, as they have expanded.

I've had multiple occasions of "Seriously, Apple hired person X? lol" over the past five years or so.


Frankly, that's true of any Silicon Valley giant at this point.


I always have a minute of nervousness while I read these security posts hoping that the bottom will say it's already been fixed with XYZ security team. Glad it's fixed w/ Apple already. The "they still haven't fixed it" or "still haven't responded" ones are scary.


I am hoping WWDC 2020 will bring some great news and announcements that let us forget all the mistakes they made in Catalina and incidents like this.

I am not sure if I am understanding the blog post correctly, because its simplicity is beyond ridiculous.


What a click-bait title. 0-day implies it was found already being exploited in the wild.

The author even says that Apple found no evidence of it being exploited.

By definition when this blog post was published it was not the 0th day.


Some people are commenting that this is overpaid, but I don't think so, even considering the INR value. The bug is quite critical considering how large the macOS and iOS ecosystem is.


To me this seems like poor protocol design that created an opportunity for an implementation error, and that opportunity was seized.

In the initial authorization request, rather than passing a string with an email address, the caller could pass a boolean `usePrivateRelay`: if true, generate a custom relay address for the third party; if false, use the email address on file.

With that one change, the implementer no longer has the opportunity to forget to validate the provided email address, and the vuln is impossible.
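A sketch of the two request shapes (field names are hypothetical, not Apple's actual API):

  # Current design: the caller supplies a string, so the server must
  # remember to check that the caller owns it.
  request = {"email": "victim@example.com"}   # forget the check -> forged JWT

  # Proposed design: the caller supplies only a boolean; the server
  # derives the address from the authenticated session itself.
  request = {"usePrivateRelay": True}   # nothing attacker-controlled to validate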


You misunderstand the bug: the exploit allows an attacker to generate an Apple-signed JWT with an email address of the attacker's choice.


There are a few other issues with how websites implemented it. For example, at work, appleid and a few other Apple domains are banned (they wanted to block iTunes streaming, etc.). When I tried to log in to Pocket (Read It Later) [Web Version], this blocking caused the whole login form to be hidden once the page finished loading, and I couldn't even log in with my username and password.


You didn’t make enough money.


It’s unclear to me exactly where the vulnerability is, given the author’s description in “Technical details”. Does this occur in the implicit flow as well as the code flow? Is the token request unauthenticated? That seems highly unlikely. Or does Sign In With Apple deviate from the OpenID specification in a way that I’m unfamiliar with?


The author found that the HTTP endpoint used to generate JWTs would accept any email address and respond with a valid JWT for that address.

He could literally send a POST request to that endpoint with an arbitrary email address and get back a valid JWT.

This is clearly explained under the "BUG" section.
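Something of this shape, where the URL and field names are illustrative rather than Apple's real API:

  import requests

  # The reported flaw, reconstructed: POST an arbitrary, attacker-chosen
  # email and receive a JWT validly signed for it.
  resp = requests.post(
      "https://idp.example/token",           # stand-in for the real endpoint
      json={"email": "victim@example.com"},  # never checked for ownership
  )
  token = resp.json()["id_token"]            # field name assumed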


That part I understand. It’s unclear to me which auth endpoint(s) and auth flow(s) are affected. Is it the token endpoint or the auth endpoint? Or is it somewhere in the login flow before the user agent is redirected to the token endpoint?


What level of incompetence will it take for the government to step in and create some laws around companies exposing users' private data because 'oops, we don't want to pay security experts what they're actually worth, even though we have billions sitting in bank accounts doing nothing'?


I'm still dreaming of a world where OpenID is the norm. Just think: if Apple forced all apps to use that instead, it would be a great move for privacy and security.

But no. Instead they make more proprietary shit without having the basic skills to do so. Then they force that shit on their users.


Does it rely on a service logging you in with the same email that you provide? Because normally services don't do that. They suggest attaching the new Apple account to the old account with that email, but allowing outright logging in would be very bad practice.


I find it crazy that Apple can force devs to support Apple ID if they support a competing service. The US has gone soft on monopoly abuse. People have got so used to it they don't notice. Gaping security holes are only one of the consequences.


(Unrelated to the Apple bug)

Is there any bug bounty program for small businesses/apps? I only found HackerOne, but it seems to be enterprise-only. Is there any recommended platform for small businesses to create their own public bounty program?


I’m thankful for all the smart, diligent people working hard to keep us all safe.


Again something with sign in / log in in Apple products? Didn't we have the ridiculous empty password thingy a while back already?

Who is implementing that stuff?


> For this vulnerability, I was paid $100,000 by Apple under their Apple Security Bounty program.

Fucking hell. Even after tax, that's a substantial pay-out.


That was my first thought too. No wonder he's a full-time bug bounty hunter.


Glad we have people willing to disclose these vulnerabilities rather than just selling them on the black market.


Isn't this not a "zero-day"? Zero-day refers to when the company has no notice of an exploit.


“I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid.”

What are they teaching them in computer school these days? How can you write a security function and not test it for these kinds of bugs? Unless all these accidental backdoors have a more nefarious purpose <shoosh>
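(For the curious, the verification step the author describes is mechanical; a sketch with the pyjwt library, where `token`, `apple_public_key`, and `my_client_id` are placeholders:)

  import jwt  # pyjwt

  # A valid signature only proves the IdP signed the token; it says nothing
  # about whether the email claim inside was ever checked for ownership.
  claims = jwt.decode(
      token,
      apple_public_key,        # e.g. fetched from the IdP's JWKS endpoint
      algorithms=["RS256"],
      audience=my_client_id,
  )
  print(claims["email"])       # attacker-chosen, yet the token "shows as valid"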


Wow, this bug is incredibly simple but severe. I’m wondering how Bhavuk Jain found it.


Honestly, I'm not surprised people didn't run into it during testing... you make a test email account and get a sign-in token for it, and only then realize: wait... how does Apple know I own that email??


Any word on what the fix was?


Assigning generated emails to the creator accounts and not allowing their re-use? What else, really...


Since this was an extremely simple exploit, I can't help but wonder if it was a purposeful one on Apple's part.

Apple has been spending a lot of money on a security-focused marketing campaign these past few years, and a well-publicized $100k payout is savvy marketing.


This is why it's good to run fuzzers against any public API (especially an auth API), to verify its behavior on novel inputs.

https://en.m.wikipedia.org/wiki/Fuzzing
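A toy sketch of what that could look like here (endpoint and fields hypothetical):

  import random, string, requests

  # Naive fuzz loop: post random email-shaped strings to the (hypothetical)
  # token endpoint and flag any success for an address we never verified.
  for _ in range(1000):
      addr = "".join(random.choices(string.ascii_lowercase, k=8)) + "@example.com"
      resp = requests.post("https://idp.example/token", json={"email": addr})
      assert resp.status_code != 200, f"token issued for unverified {addr}"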


In general I agree with you that it's good to run fuzzers against any endpoint, public or internal (you never know whether someone can wrangle data to go from public to internal somehow), but in this particular case you'd only find an issue if the fuzzer somehow randomly used the ID of another, already-created user and verified that it couldn't access it.

In that case, you'd catch it way before even implementing the fuzzer.

So in this case, I don't think a fuzzer would have helped. Some E2E tests written by humans should have caught this though.


There's no reason a fuzzer couldn't draw sample email addresses from a large pool of valid test email addresses to use as input. That would just require a fuzzer that lets you provide the sample population for a particular data type.
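Property-based testing libraries support exactly this; e.g. with Hypothesis (request_token here is a stand-in for the client under test):

  from hypothesis import given, strategies as st

  KNOWN_EMAILS = ["alice@example.com", "bob@example.com"]  # seeded test users

  @given(st.sampled_from(KNOWN_EMAILS))
  def test_cannot_mint_token_for_unowned_email(email):
      # request_token() is a stand-in: call the API as a user who does
      # NOT own `email`; the server must refuse to issue a token.
      resp = request_token(as_user="mallory", email=email)
      assert resp.status_code == 403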


My point still stands. If the one setting up the fuzzer is thinking about the conditions that A) you're using a valid email that B) already exists in the system and C) cannot be used to authenticate with another system, you can easily check the code for this directly. The fuzzer won't add anything over a simple integration/E2E test here.


Fair point! The question I have, then, is whether it's possible to create a tool that automatically detects bugs like this without needing to write an integration test for this specific case.


where is the ptacek rant about JWT?!


If the bug is as simple as everyone is saying, why hasn’t it been discovered until now?


To the downvoters, this was an honest question from someone who wants to understand the situation better.


Easiest $100k ever made?


And having 'Won $100,000 from Apple's bug bounty program' on your CV is enough to raise your salary by $100,000.


I would wear that distinction with pride! Kudos to him.


[flagged]


This software security issue in Sign In With Apple was unrelated to the security of Apple's hardware platform.


But for some reason I have never had to remove malware from my parents’ iOS or macOS devices.


[flagged]


Please read and follow the site guidelines. You broke at least two of them there.

https://news.ycombinator.com/newsguidelines.html


Literally every system in the world has flaws, no matter how secure. We just don't know about these bugs yet.


"Every system in the world has flaws" and "it's a serious problem that one of the world's most important software vendors, that markets itself as the most secure, keeps releasing products with flaws that would have been discovered in a very basic audit" are not incompatible statements..


Even then there will be more flaws. I don't think it is possible to build a 100% secure system in the modern age of 22 abstraction layers between the atom and the data center.

Perhaps you do not appreciate the staggering complexity that lies behind watching a cat video on your iPhone. There are almost uncountably many ways to break into the system. This is why every company has a bug bounty program - from Stripe (arguably a major software company in payment systems) to Microsoft, from Apple to GitLab. Why do you think they give out $1m for a serious bug? If they were not serious about security, that would be a big waste of money.

This kind of entitled attitude usually comes from people who've never developed a complex piece of software, such as an operating system.


WTF is a "zero-day?"


Why haven't they just implemented OAuth 2.0, like everyone else? They've tried to reinvent the wheel with their own implementation of three-legged user authentication that doesn't add anything to what OAuth does, and, surprise, they've exposed themselves to a critical vulnerability that could have been completely avoided.


> This bug could have resulted in a full account takeover of user accounts on that third party application irrespective of a victim having a valid Apple ID or not.

The headline makes me think the entire problem lies with Apple, when that’s not the case.


This seems very much like Apple’s bug, to the extent that they paid out a $100k bug bounty?


Really?

> ...affected third-party applications which were using it and didn’t implement their own additional security measures.


This allowed you to forge an attestation of user identity from Apple for any app that was set up to consume it. Apple is acting as an IdP for its consumer ecosystem. It's definitely their problem.

Third-party applications really have no recourse but to trust the signed JWT. That is just how OAuth2/OIDC works.

User impersonation against an IdP is a serious security issue. $100k is cheap.

The bug was basically on the IdP's "consent screen": instead of using the email from the actively logged-in account, it allowed the attacker to POST any email they wanted.

Obviously not having the bug would be great. Apple could do "more" and layer additional protections on top of OAuth, like a proof-of-key extension (DPoP) on the flow: https://tools.ietf.org/html/draft-fett-oauth-dpop-04

But if you have a bug like this, where you can edit your claims arbitrarily inside the IdP, extra security layers kinda don't matter.
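In sketch form, the missing server-side check is a single ownership test on that consent endpoint (all names hypothetical):

  def issue_token(session, requested_email):
      # The check that was missing: the email going into the JWT must
      # belong to the account that is actually logged in.
      if requested_email not in session.user.verified_emails:
          raise PermissionError("email not owned by authenticated user")
      # sign_jwt is a stand-in for the IdP's signing routine.
      return sign_jwt({"sub": session.user.id, "email": requested_email})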


> This allowed you to forge an attestation of user identity from Apple for any app that was set up to consume it. Apple is acting as an IdP for its consumer ecosystem. It's definitely their problem.

The relying party would have to incorrectly rely on the "email" claim, which is not guaranteed to be sent, stable, or unique, rather than the "sub" claim as documented.

This shouldn't be an account-takeover problem unless your site did that, or got one of a few other things wrong. However, I'd be reluctant to blame the site in those cases - a lot of security issues (including this one on Apple's side) come from two components not fully understanding the contract between them.
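Concretely, keying on "sub" looks something like this (accounts is a stand-in for the site's user store):

  def find_or_create_account(claims):
      # Correct: key the account on the stable, issuer-scoped "sub" claim.
      account = accounts.get(issuer="apple", subject=claims["sub"])
      if account is None:
          account = accounts.create(issuer="apple", subject=claims["sub"])
      # Wrong (what turns this bug into account takeover): looking the
      # account up by claims["email"], which here was attacker-controlled.
      return account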


Is everyone in this thread only going to read the first two paragraphs of the article and skip the rest of it?

> I found I could request JWTs for any Email ID from Apple and when the signature of these tokens was verified using Apple’s public key, they showed as valid. This means an attacker could forge a JWT by linking any Email ID to it and gaining access to the victim’s account.


To be honest, I think the author is misrepresenting this, just as they falsely said that this was a "zero-day".


That there were ways of mitigating it (I'd assume verifying email addresses out of band?) doesn't mean it's not Apple's problem when their authentication system can be tricked into confirming false identities, given that its entire purpose is confirming identities.


Apple is an email authority in this case, and as a third party you have to rely on their security. Same as with an SSL certificate authority.


This one rests squarely on Apple, as it was their auth service that contained the bug.

While an application could potentially further verify the received token (not that I know exactly how in this case), that verification is exactly what an authentication service is supposed to provide; hence the responsibility absolutely rests with Apple, who provides the service.

