Hacker News
How I Found a Vulnerability to Hack iCloud Accounts and How Apple Reacted to It (thezerohack.com)
447 points by alexcos on June 19, 2021 | 141 comments



> If you do not have a paid Apple Developer Account, please sign up at <our site>, pay the membership fee and send us the associated Developer ID.

You can’t make this stuff up! It’s completely and utterly ridiculous that even when you do get a bounty, they’ll take a cut from it. Leave aside the fact that some stingy hands found a way to devalue a person’s contributions to their platforms by offering a tiny fraction as a reward.

I didn’t understand the later parts of this post well, but the correspondence frequency and the way this has been handled are a mark of shame for all the information security folks working at Apple.

P.S.: I intentionally put <our site> instead of the actual link. That site doesn’t deserve to be linked in this context.


> Once all steps are complete, we will reimburse you the cost of creating this account.

Seems they'll refund the paid account, still a weird thing to do.


They are saying, in effect, you must sign/agree to our 81-page developer agreement to receive the reward.


Lots of bug bounties are really just hush money, that you have to sign an NDA to get.

Always just publish your research. You can optionally offer it privately to the affected party in advance, but don't agree to any TOSes to do free work.

Full disclosure is responsible, too.


Yep, you should really give up significant income from companies that do responsible vulnerability disclosure in the name of a random HN commenter's values.


At no point did I say you should give up income.


"Always just publish your research."

In most bug bounty programs I've seen (including Apple's and Facebook's) payouts are contingent on not publishing the research without consent.


I assume lots of bug hunters (especially those from third world countries or those currently unemployed) depend on the bounty money to support their livelihoods.


That’s a bit like hitting the slots to support your family. Not only do you have slim chances to find anything that pays out a worthwhile sum, even if you do find such a bug they might come back with a “sorry, already reported”. If they get back to you, that is.

It’s not something to rely on at all.


This is why I think a third party bug bounty middleman service is inevitable. They will be better equipped to exact appropriate remuneration and develop relationships.

Companies should be trying really hard to avoid this happening by offering better rewards with less hoops to jump through.


Agree. It is a business opportunity. It will have to be a US based company as only those will have enough funding to both fight the legal fights and lobby for legal protection.

For the first few years the company will be considered a level just above common criminals. After a while, they will be considered an essential consumer protection service.


Any corporation is going to make you sign something to receive the cash. The terms would not normally be as strong as an NDA though, otherwise we wouldn't see any bounty reports.


> Once all steps are complete, we will reimburse you the cost of creating this account.

That literally sounds like a Nigerian email scam.


It is. But the subscription money is not the worst. You also have to agree to the terms of the developer account to open the account. Which means it changes the terms of your relationship with Apple before you even get a penny.


Maybe it’s their way of validating the identity of the person making the claim.


They have a mechanism to pay external developers, and they want to use that instead of creating a vulnerability-specific mechanism.


Everything about their platform is miserable. I'm so glad mobile Linux is starting to become usable and we finally have a real alternative.


I see this comment every year, for the 15+ years I’ve been active in the community, and yet I don’t see the promised hordes of Linux fanboys rioting in the streets wanting to burn down Redmond and 1 Infinite Loop.

Oh right, Linux on Desktop lost. Stop trying to make Linux on Desktop a thing.


Linux on the desktop is fine. 80% of Windows software just runs, and the native ecosystem kinda embarrasses Microsoft's attempts at making Windows feel like a cohesive experience.

Of course, that's just my N+1 anecdata. I've been using Linux on my main PC for two years now without much issue, but ymmv.


I have tried using Linux for the desktop and it just sucks. I just need something that works, not something where I need to chase some xyz problem after each upgrade.

I still chase these issues for server dist upgrades, but at least I don't have to do it for my desktop.

Dependency management utterly sucks in Linux. I can still run Windows 98 binaries on Win10.


Running Win98 binaries is not a bragging point. I read archived diskmags through Wine, and all I need to do is double-click the .exe file to get it to boot up.

> I still chase these issues for server dist upgrades

Yeah, I don't doubt it. Full OS upgrades are always dangerous, as proven by the Windows 8 -> Windows 10 install fiasco or the Catalina wipes. LTS distros tend to get ~5-6 years of support though, so it's not like you're going to be forced to reinstall for another few years.

Linux, Windows and MacOS are all different flavors of the same shitshow. I can assure you that package/dependency management is not one of Linux's shortcomings, relative to its competitors.


>Dependency management utterly sucks in Linux.

What the hell? That's one of the biggest things desktop Linux got really right! Sure older binaries don't work but you shouldn't be flinging binaries around anyway.


As a Windows user (it's my daily non-development driver), unless that percentage hits 98%+ AND stays there, I don't think many people are going to convert.


I was with you 3 years ago, but Microsoft pushed a major update that caused random BSODs on my motherboard's chipset. I switched over to Linux full time and haven't looked back, though to my knowledge the BSOD issue is still prevalent in Lenovo's Haswell prebuilts.


>Linux on Desktop lost.

All it had to do to "win" was be available to those who wanted it, which was the case. Mobile Linux is no different and in that sense it's definitely winning now.


Linux is used as the primary OS for many developers, it has a market share comparable to OSX and Windows there. PopOS has made desktop Linux intuitive even for people who don't care what's running on their machine.

With Apple and Microsoft effectively abandoning their desktop OSes it's very likely that Linux will become the dominant desktop OS even among non-tech-literate users.


I think your hand slipped and replaced "I hope" with "it's very likely". Whether or not PopOS is intuitive is a separate question from whether or not Linux will dominate on the desktop.

Apple and Microsoft "abandoning" their desktop OSes is a dubious claim at best but in a magical world where they did, their "non-tech-literate users" wouldn't flock to Linux, they'd be happy they don't have to restart their computer for updates as often anymore. Non-techie people aren't going to move to a new OS except when they get a new computer. Even when they get a new computer they're just going to use the pre-installed OS. This is a fantasy.

These things have inertia. People who don't know computers aren't arbitrarily just going to decide to switch to a new OS where they have to relearn everything and their software doesn't work and there are moderately more driver incompatibilities that cause errors that confuse them. It's just not going to happen.


> they’ll take a cut from it.

Did you just stop reading the post at the line you quoted, and came straight here to moan? They refund it. It's how they verify your identity and banking information.


Hey, could you please review the site guidelines and stick to the rules when commenting on HN? I'm afraid you've broken them badly here, and also with https://news.ycombinator.com/item?id=27519724.

This particular comment would be just fine without the swipe. We're trying for a different sort of internet here, if possible.


Doubt it. This smells of lawyers. It’s how they get you to sign their agreement IMO.


They refund it at their choosing. It's a total joke. You shouldn't need to provide banking information to get compensation....


I'm just wondering, what would you prefer to be the method? Isn't providing banking info the least they need? To send the money?


Issue a check? My whole point is you shouldn't have to pay money to "hopefully" receive money.


He/she just wants $18,000 in cash.


Okay. That would not even be possible in most of the EU. That's why I wondered. Some EU countries have limits on the maximum amount of a cash transaction that can happen legally, mostly between 3,000 and 10,000 euros, depending on the country.


I wonder if this will push some people to sell exploits on the black market instead.


Profit (creatively shielded from tax) above all. (Rotten) Apple.


What makes you believe Apple is unique in their avoidance of paying taxes?


Where did they say that they believe Apple is unique in their avoidance of paying taxes?


Are you defending Apple by using the Whataboutism fallacy?


No. I’m saying you’re hating the fish that swim in a polluted stream. Apple, and Amazon, and Exxon-Mobil, and Goldman-Sachs, etc. all swim in those same poisonous waters. The stream is the problem.


No it's saying you shouldn't hate the polluting fish in the polluted stream just because other fish are bad too.


> Are you defending Apple by using the Whataboutism fallacy?

Pointing out that Apple follows the law, like every other company, isn’t whataboutism. In any event, Apple pays its taxes in America. The manoeuvres are on its foreign income.


You literally just described whataboutism.

'Apple follows the law like every other'

You are saying 'but Apple isn't the only one, what about...'

When the companies get to make the laws, they don't also then get to hide behind them as a shield.


I agree that distorting focus is often a propaganda tool and diversionary tactic, but it goes both ways. Sometimes the category of problem really is larger and more widespread than the single cited instance, and it’s necessary and reasonable to see the forest rather than just the tree, particularly when the best solution applies to the forest as a whole; e.g. to close the widely exploited loopholes.


In the second half of the post he suggests that remotely determining an iDevice passcode was possible with his rate limit bypass. Essentially, a salted version of the device's passcode is uploaded to the Apple server, and he could then bypass the rate limit to brute-force it.

Isn't this a backdoor that would enable passcode bypass like was requested for the San Bernardino and Pensacola shooters phones?

This vulnerability is a massive deal. With the passcode determined there's nothing stopping bad actors from surreptitiously accessing your data.

After all this a $2,180,000,000,000 market cap company offers a reward of $18,000. What a disgrace!


He is assuming that, because all the other endpoints had buggy rate-limits, this one would too, and they also fixed it.

This is a bad assumption to make, because the device passcode recovery flow is going through an iCloud Keychain HSM cluster, which is a completely different implementation from everything else (which are just web services). In fact, it is well-documented that Apple cannot update the underlying code of their HSM clusters, as they destroy the management cards after initial provisioning. So they can't have actually fixed this bug if it existed prior to his report. That code was surely audited much more carefully, and depends on a much smaller technology stack, than all the web service stuff, and certainly handles rate-limiting within the HSM itself.

So I have no reason to believe that this flow was vulnerable to the rate limit bypass race condition like the others.

However, we have a different problem now.

I called it the iCloud Keychain HSM cluster because, as documented, that whole thing is used for iCloud Keychain escrow (i.e. password store in the cloud). That's an opt-in feature. More info here:

https://support.apple.com/guide/security/escrow-security-for...

I did not know this, but apparently if your account has 2FA, the code used for this is indeed your device passcode, which smells dodgy to me:

https://support.apple.com/en-am/guide/security/secdeb202947/...

But still, the context is iCloud Keychain.

But now Apple are claiming this flow, which does not have the problem OP discovered, is used for all Apple accounts that have logged in from a passcode-protected iOS device. This implies that they are re-using this system, originally designed for the very narrow use case of (and trade-offs that necessarily come with) iCloud Keychain, as a general account recovery mechanism for all Apple accounts. That does, in fact, mean that they have (the ability to brute-force) the device passcode of everyone who has ever logged in their device.

That's bad. The HSM cluster stuff makes this hard, but it is a huge change in attack surface. It means that your device data security now not only depends on on-device software and hardware design (and Apple are famously good at this), but also the security of an HSM cluster at Apple HQ. It means that if you manage to break into a given HSM cluster, you can then brute force the passcodes for all users managed by it. And it means that if the HSMs have a vulnerability, that is a massive liability. The HSMs are third-party, and I honestly trust HSM manufacturers much less than Apple themselves when it comes to building secure systems.

So I would very much like to know what's going on here, whether that flow really does work on all accounts regardless of whether you use iCloud Keychain or not, and why none of this is documented in the Platform Security Guide. Does the hard 10-attempt limit used for iCloud Keychain escrow recovery also apply here, or can you continue trying PIN codes forever subject only to rate-limiting? Is this the same codebase or a different one? There are many questions here, and Apple's vague answers and lack of documentation for this make it sound like something very fishy is going on here.


I agree with your first point and disagree with the second. :)

I think it's highly likely that Apple was being honest and that the HSM service was not vulnerable to this attack. That's consistent (as you say) with this being a separate, highly-audited implementation, and frankly consistent with what Apple said--and to save, what, $200k, they have no real reason to lie to a researcher here.

To your second point, while this is indeed a big change in attack surface, I am not sure it's as problematic as you say. Doesn't the iCloud backup basically contain (for most users) all of the device contents--photos, messages, etc--that an attacker would want? Conversely, users want to be able to restore their iCloud backups from a new i-device if they lose their existing one, ideally without having to know more than the lockscreen PIN.

Given that, the two systems--the cloud system and the i-device--are storing mostly-identical data, offering mostly the same security guarantees (hardware-backed key derivation from a weak PIN, rate limiting), and the only issue is that this is just a second hardware security module that's separate from the one in the i-device.

For users who have turned off cloud backup, this might be a bad tradeoff. (Maybe turning off cloud backup turns off the HSM/PIN syncing?) But for most users, the gain in usability seems likely to far outweigh the hypothetical additional risk.


iCloud backups aren't a requirement. You can have an iPhone with iCloud disabled, and privacy-conscious users might choose that approach; additionally, those backups don't necessarily contain all device data.

But if you want to download apps at all from the App Store you need to sign in, and if that alone gives Apple the ability to verify your device PIN even without iCloud, that's a problem.


Hmm. I think for such users this may be surprising. At the same time, you don’t really have any reason to trust the cloud HSM more or less than the Secure Enclave, right?

Certainly it does increase attack surface, but if Apple said “now I-devices ship with 2 HSMs”, we’d be like, ok, shrug. No?

The fact that this is “remote” is sort of immaterial, I think. You’re trusting Apple’s (bespoke or acquired) stack the same either way, and as far as we can tell the security properties of both local and remote HSMs are the same.


The issue is that if you break one HSM, you get the ability to bruteforce thousands (millions?) of users' PINs, without their knowledge. You could bruteforce someone's PIN ahead of time, then acquire their phone knowing you can get at the data with zero risk. Getting the phone first then figuring out how to break into it is a lot trickier.

I do in fact trust the SEP more than I trust cloud HSM, because the SEP is an Apple design, and the HSMs they use, as far as I know, are third-party.


That's fair. I think I agree with that characterization.

I think if this excluded users who turned off iCloud sync, I'd have no qualms about it, however; the tradeoffs seem ideal for giving users a secure recovery mechanism. But users who have turned off iCloud may not want this functionality, I agree.


Out of interest, what’s your reasoning for trusting HSM manufacturers less than Apple?

HSM manufacturers are essentially selling security, so any security issue could destroy their business, whereas Apple would likely survive.

I have seen pentests being done on an HSM and the steps they go to are quite impressive. Far further than I would ever have expected.


I don't know for sure that HSMs can't be trusted, but:

1. They're all closed-source, super-expensive devices that security researchers don't look at in any numbers.

2. We know getting security right is extremely difficult; products from big, sophisticated, motivated companies have security problems revealed by careful public scrutiny. A product that hasn't received such scrutiny seems unlikely to be better than that.

3. The first and biggest customer for obscure security hardware like TPMs and Smart Cards is the government/military. In my country, government/military tech efforts have a reputation for being delivered late and over budget; often ending in failure; and not being particularly secure. Are we to believe defence contractors managed to do a competent IT project when they couldn't do that before or after?

4. We know, from examples like Dual EC DRBG and Crypto AG, that governments will happily put in a backdoor when given the chance. If a giant defence contractor like Thales was asked by their number 1 customer to put in a backdoor, do you think they'd say no?


This, and also:

- Infineon and ROCA. We know the code audits the industry does are superficial and do not catch the worst bugs. That code would've been a big red flag for real cryptographers, and it would've been quickly found had it been open source.

- Government requirements. Stuff like FIPS certification decreases security by increasing bureaucracy and complexity. See vulnerabilities affecting only the YubiKey FIPS for an example. These certifications hold the industry back by mandating compliance with large suites of algorithms, forbidding newer, better cryptography, and stuff like that.

- The general culture of that industry. They sell security, and they are all about audits. Those audits are about ticking boxes. They do not measure good design, overall defense in depth, or anything like that. They are bullet point lists of security features and specific attack models. Interesting attacks use novel approaches, and those audits are completely worthless at determining whether a system is likely to be designed in a way to be robust against new attacks or not.


I think HSMs are selling security (rather than security being an additional feature) and therefore you may be surprised at the amount of testing and scrutiny being done.

As I mentioned, I saw a pentest happening that included physical, network, and software testing; pretty much anything that could be done over months.

Issues like the Debian OpenSSL issues show that something being open source does not mean it's getting the scrutiny it needs, and while open source software is easier to apply that scrutiny to, I think the relationship is far more complicated than that.

If a defence contractor like Thales was asked to put a backdoor in, do you think that wouldn't be found if their device was tested for months on end by external testers paid for, and motivated by, finding vulnerabilities?

I'm not saying it doesn't happen, but I do think they deserve a little more credit than you may be giving them.


> If a defence contractor like Thales was asked to put a backdoor in, do you think that wouldn't be found [..] by external testers

There are many ways to design a back door that would pass such security audits, yes.

One option is to simply give the auditors a device without the back door - or a firmware listing without it.

You can also create bugs that are undetectable by black box testing - such as generating keys with insufficient entropy.

Another is to copy subtle bugs from other products, that sat in plain sight for years.

And of course the testers will all be under NDA so if they do find your back door, they’ve already promised to take part in your cover-up.


Yeah, the “Your password is encrypted and cannot be read by Apple” claim is a bold-faced lie. This can definitely be used to crack a passcode. Log out of iCloud if possible.

I made a mini series on self hosting my cloud services here: https://www.naut.ca/blog/tag/shs/


I'm not sure it's really fair to say it's a bold-faced lie. It's been a while since I looked at SRP, but it looks like the server side only requires storage of Username, Salt, Verifier in its database, according to http://srp.stanford.edu/design.html

Unless I'm missing something, this is really an artifact of allowing a user to use a short PIN on the device, and of that short PIN somehow being allowed as part of the password recovery flow, which therefore needs properties that prevent brute force.

Unless I'm missing something, choosing an alphanumeric PIN of sufficient length would avoid this and be difficult to brute force. So it would seem the guarantees are weaker than expected due to this, but I doubt it's a bold-faced lie.

Unless I'm missing something of course.


The point is SRP is no different, in this context, from storing a standard password hash. You can brute force the passcode if you have access to the verification material on the server.

Apple's device security model relies on rate- and attempt-limiting unlock attempts, so that people can in fact use short numeric passcodes. Their on-device model is carefully designed to make it very hard to bypass this.

The problem is now they have extended that model and attack surface to their HSM clusters. That's 1) not Apple hardware (I trust Apple hardware more than I trust the third-party HSMs they use), 2) not (only) Apple code (again I have less trust in HSM frameworks than Apple's), 3) shared for many users, so break one HSM cluster and you get to bruteforce a lot of passcodes, 4) strangely not documented in Apple's Platform Security Guide, which is very suspicious.
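
To make the first point concrete, here is a toy sketch (illustrative parameters and hash, not Apple's actual SRP group or key derivation) of why server-side SRP verification material derived from a short numeric passcode can be brute-forced offline by whoever holds it:

    # Toy sketch: an SRP-style verifier v = g^x mod N with x = H(salt || passcode)
    # is a deterministic function of the passcode, so a short PIN can be recovered
    # offline by anyone holding (salt, v). Parameters here are illustrative only.
    import hashlib

    N = 2**127 - 1          # toy modulus; real SRP uses a large safe prime
    g = 3

    def verifier(salt: bytes, passcode: str) -> int:
        x = int.from_bytes(hashlib.sha256(salt + passcode.encode()).digest(), "big")
        return pow(g, x, N)

    salt = b"\x01" * 16
    v_leaked = verifier(salt, "4821")            # the server-side material

    for pin in range(10_000):                    # every 4-digit passcode
        if verifier(salt, f"{pin:04d}") == v_leaked:
            print("recovered passcode:", f"{pin:04d}")
            break

The only thing protecting a 4- or 6-digit passcode in that situation is whatever rate-limiting sits in front of the verification, which is exactly the point at issue here.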


I was expecting iPhones to generate a public/private key pair and upload the public part to the cloud during provisioning. That way you can securely prove you own the device, but there’s no need to send the password or anything derived from it to Apple.


Yea, I think this missed expectation is valid, I certainly wouldn't have expected the pin code to be used as an authenticator on an internet facing API.


I wish they allowed more technically proficient users to disable password recovery and manage iCloud backups off cloud.


Why are the security departments of every company so unfriendly? I feel like every blog post about a disclosed vulnerability has had some form or another of:

- no replies
- delayed replies
- vague replies
- playing down the vulnerability
- reducing the bounty

It’s like they still treat a white hat hacker as a risk, instead of cooperating with them. I don’t get these corporations. The white hat hacker is in this case your best friend. They already proved their ethics by reporting it, and they already know the details because they found it. There is literally no reason to try to keep the white hat hacker in the dark, not update them, etc. The white hat hacker could have exploited the vulnerability already!


I found and reported a security issue to Microsoft. They responded nearly right away and it was a real person. I was soon talking directly to the right team to explain it to. I provided the assets required to replicate and they jumped right on it and fixed it. They even told me what KB* it was resolved in via a follow-up email. I didn’t want any kind of monetary reward - just happy to have it fixed. I regularly report security issues in open source projects too. Now, I also once tried to report a fairly serious issue that impacted iOS and MacOS X (at the time) and you’d have thought (naively) they’d have been super interested and helpful as Microsoft were. Wrong. In fact their first response basically meant I never ended up getting past their first auto reply.


That's a superiority complex. It can be seen especially well in Russia. I've read stories that companies there even start legal action against white-hat hackers who submit vulnerabilities to them.


I’d say it is just incompetence.

The group of people who run these corps are old enough not to have grown up with the internet, and some of them just can’t understand the dangers of this new world, even if you tried explaining it to them.

They are missing the fundamentals.


Not only in Russia. I once accidentally discovered a vulnerability in a rather popular job search site (EU). I reported it to them only to be harassed by their legal department afterwards. No good deed shall go unpunished.


No bad deed should go unpublished.


This sounds like it happens a lot everywhere outside large American companies. The reasoning probably goes like “why were you even trying to break in, that’s illegal.”


>why were you even trying to break in

Because you're offering money if a way in is found. I feel like that's most of their hesitance, really, they probably just don't want to pay the bounties.


To be fair, there is also some whining involved on the "white hat hacker" side. Triaging reports is not an easy task.

The issue I have often witnessed is a higher-up wanting to play down a vulnerability, or even make it so nobody hears about it, because they fear it will impact the stock price. You should not forget that there is a financial impact...


Bug bounty programs tend to get utterly swamped with low-quality reports and entitled researchers.

It can be hard to separate signal from noise.


I submitted a memory corruption bug in glibc realloc() to Red Hat's security email contact, and their reply was quick and friendly.


Even Google Security was really friendly, even as I reported something that seemed like an issue but turned out to be my own fault


Because they're not the only person the security team is dealing with?

Because they need to verify the claim and see what it really affects? If there are other repercussions?

Because you don't want to tell more than you need? (a security researcher should know that)

Vulnerability disclosure is the closest thing to a protection racket that is actually legal. So it's natural that people will be on the edge. Sure, it beats the alternative.


Confirmation bias? If the disclosure doesn't go as expected, that's an interesting event, which is likely to draw more attention than if everything was all right.


Security departments are there to protect against threats to the company. Bad publicity is a threat for companies. So never expect some company official to admit that there is a glaring hole in their security, until they are very, very sure that they have fixed it. Fixing a hole like this can take a lot of time because you need a lot of testing. Apple is not going to tell the rest of the world what they are doing exactly because of possible bad publicity. When they have fixed the problem there are very good reasons for Apple to downplay its importance, which implies that they should pay some money but not too much.


The signal-to-noise ratio is very poor, especially for large companies running bounty programs. In addition to honest and diligent researchers, there are also scam artists, script kiddies, and soccer moms sending in false reports saying “my ten-year-old found this bug in FaceTime, pay up!”


Maybe because legal needs to be involved.


They get the vuln in all cases if you are talking to them: either for free if you post it to f-d, or for some reduced rate (and hushed up) if you agree to the NDA to get the bounty.

The people selling to rogue states and TAO aren't talking to the vendor in the first place.

Bug bounty programs are, for the most part, bullshit.


How sophisticated the attack/exploit may be is not the point here. The salient point is that he demonstrated complete iCloud account takeover, and Apple lists that as a $100k bounty reward, but are only offering him $18k. Please correct me if there is something I'm missing.


The sophistication is relevant because he proves that the vulnerability he originally reported could take over any icloud account but he wasn't able to do so himself as it was patched between the time he first reported it and 8 months later when he tries it again. Apple then seems to refuse to acknowledge this and offers only 18K vs. 350K


He claims the vulnerability as originally reported could take over any iCloud account. Apple claimed otherwise. We do not have hard evidence from either side.

However, given the implementation involved, I think Apple's claims are more likely, as I detail here; OP is assuming the stack used for the passcode recovery is likely vulnerable because the others were, while we know it is a completely different validation technology and, given how it works, I would expect it not to be vulnerable (unlike the web service stuff): https://news.ycombinator.com/item?id=27567730

My take on this is:

1) Apple are probably not lying when they say this wouldn't have worked on most accounts.

2) The author is likely wrong in his assumption that the passcode flow was vulnerable like the others were.

3) $18k is still way too low for an account takeover exploit that only affected a subset of accounts.

4) Apple are not being open about how this system works, and if I'm not mistaken, this is a new system/flow.

5) The author's discoveries aside, Apple need to document how this works, because as far as I can tell they are massively increasing the attack surface for the data security of iOS users who log in to their Apple accounts on-device, using a new, undocumented mechanism/use case.


You seem to have a lot of knowledge on this, so apologies if I am misunderstanding, but aren't you still overlooking the fact that for iCloud accounts that hadn't been used on Apple devices (even if that is a small subset of accounts), he was able to reset the password by concurrently brute-forcing the OTP endpoint?

Isn't that alone sufficient to demonstrate complete iCloud account takeover?

Agreed that this should not be taken as proof that the other reset flow was not vulnerable, but to me it seems like two separate issues.


It is two separate issues; as I said, I think $18k is way too low for a subset of accounts takeover. Personally, I'd have awarded the full amount.


In each disputed area you suggest it’s “likely” Apple is right.

In my experience, security engineers, even Apple security engineers, have the same very human kind of “can’t see my own typos” bias as the rest of us.

In my experience, fresh eyes looking from a different perspective are more likely to be right. (Part of why pen testing and security researchers are a thing.)


I watched the Apple presentation on the iCloud Keychain implementation. They explicitly mentioned concurrency and having a consensus algorithm that forbids conflicting mutations on an escrow record.

I've written web apps, and I've written embedded security code. It's a lot easier to screw up and have a race condition in rate limiting code in a web stack than in a carefully designed HSM consensus algorithm (especially since the latter kind of depends on this being handled properly for data correctness, not just defending against attacks).


I think you are misunderstanding what he has achieved. There is a secondary attack that he theorizes was possible and patched by Apple before he demonstrated an ability to exploit it. I agree that he should not receive any bounty reward for this (theoretical) attack.

However, the first half of the article focuses on him successfully being able to reset the password on any iCloud account that hadn't been used to log in to an Apple device.

Being able to remotely change the password of an iCloud account should earn him the full $100,000 reward, even if it is only on some subset of iCloud accounts.


It’s not that implausible that passcode hashing was rate limited.


$100k is the maximum reward, not the reward.


We see, time and time again, password recovery systems getting exploited.

Aside from Apple's shitty response to the brute force vulnerabilities discovered here, I'm also annoyed that Apple isn't nearly as paranoid as they ought to be about the security of what is likely to be the #1 hacker target in their system.

Instead of using a 6-digit 2-factor key, they could easily use a 12-character alphanumeric key. That's (20 + 10) ** 12 / (10 ** 6) = 500 billion times harder to brute force. And honestly, is having to type 12 characters such a burden for the exceptional case of a password reset? I don't think so.
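
For what it's worth, the arithmetic checks out, taking the 30-symbol alphabet assumed above:

    # Keyspace ratio: 12 characters from a 30-symbol alphabet vs. a 6-digit code
    print(30**12 / 10**6)   # 531441000000.0, i.e. roughly 500 billion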

Building secure systems is hard, I get that. I've made dumb mistakes myself. But Apple's iCloud contains people's locations, their photos, where they live, their email, notes and other secrets, and iCloud also circumvents all Apple's on-device encryption. It's fundamentally a system that sacrifices security for convenience, and it really sucks that all the real and serious security efforts made by other teams at Apple are negated by iCloud.

There are many people, myself included, who use password managers and who will never ever lose their full disk encryption keys, passwords, or recovery keys. I don't want backdoors. I don't want forgot-my-password systems. I want to opt out of all of it.


A 6-digit code is perfectly fine if you only give one try before requiring a different code?


Not really.

If you don't rotate the six-digit code, the probability an attacker who tries sequential codes gets the correct code in 1M attempts is 1.

But if you do rotate the code, the Bayesian probability that an attacker who tries random codes gets the correct code in 1M attempts is still about 60%, if I did my math right (and of course it asymptotically approaches 1 with more attempts).
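
A quick sanity check of that figure (assuming a uniformly random six-digit code that is regenerated after every guess):

    # Probability that 1,000,000 independent guesses hit a code that is
    # re-randomized after every attempt: approaches 1 - 1/e.
    p = 1 - (1 - 1e-6) ** 1_000_000
    print(round(p, 3))      # 0.632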


who wouldn't do random?

But either way it would be insecure if you did sequential... even if you had 2000 characters for your password.


It's a truism that secure things are secure. But engineers make mistakes and that's why you need layers of defense. And when you use recovery codes that are in the realm that can be brute forced you automatically have a problem when rate limiting or other anti-brute force measures fail. Which is why using stronger 2factor codes should be the default, especially for super high impact things like password recovery.


> And honestly, is having to type 12 characters such a burden for the exceptional case of a password reset? I don't think so.

If those "engineers" need minimum of 26 characters 1-time use passwords that can only be used one time to feel secure, I don't trust those engineers (unless they allow me to copy and paste it).

A one-time-use 6 digit password that can only be tried once is pretty damn secure if it is random.


Apple is right: if you log in with an iPhone or iOS device, it is upgraded to not use SMS verification anymore.

While the author of the article has found something, it's not anywhere near as serious as they think. The second part of the article, with the on-device codes vs. SMS, where 29/30 requests didn't work, was most likely that way before they found the vulnerability.

It is upsetting not to get the $100k, but the post comes off a bit as lashing out over that.


I agree. This is a sorry situation to see, and neither party seems entirely correct here. $18k seems like lowballing for a vulnerability that does actually work on a subset of iCloud accounts and provides a method to bypass 2FA. At the same time, the person lashing out at Apple over its handling seems quite unprofessional. In a perfect world an issue like this would be solved with better communication by both parties.


To me this looks like a case of someone noticing that this dude was based in India, doing a cost-of-living/Apple US-India salary comparison, coming up with a ratio of ~1:5 (for $100K) or ~1:14 (for $250K), and deciding that he would be happy with $18K.


Cost-of-living indexation applied to the salary of a knowledge worker is flawed economic thinking that is being perpetuated by some.

Firstly, knowledge workers should be paid market competitive rates – and the definition of market in a digital economy is global.

Secondly, the cost-of-living index for a software engineer in India vs the US is not that different - the cost of electronics, housing, clothing, accessories, vacation/travel etc. are all the same.

In some cases, things are more expensive due to global trade economics – for example cars/bikes, fuel, travel, luxury goods etc are way more expensive in India than US.

Food and housekeeping were assumed to be cheaper – but that was based on the flawed logic that someone is cooking food at home for you for free (an unemployed family member) and that you are exploiting some poor person for housekeeping without caring about their healthcare or their children's education (these are basically un-costed externalities that keep poor families poor).

In reality, today's generation of software engineers have to cook their own food (costs time which is learning opportunity cost which is nothing but money) or hire professional help (costs money) or eat catered food (which is possible due to app based delivery services in major cities like Bangalore, costs money) every day.

Besides, no matter where you are living, only a small part of your paycheck goes towards non-discretionary expenses like basic food and basic shelter.

A large part of your paycheck should be going towards future savings/investments and discretionary spending like leisure/travel and enhancing your quality of life through better nutrition/healthcare, continued education etc. None of these things cost less in India than US. Expecting Indian engineers to do this any less than US engineers is just another form of discrimination.


> Food and housekeeping was assumed to be cheaper

Who assumed that?

You're comparing a situation where there's an unpaid family member, with a situation where the only person cooking is the information worker. In reality, there are a multitude of arrangements between those extremes, invalidating your argument.

Even then, health insurance costs differ between areas. Not to mention that the cost of every option you listed apart from using own time to cook is dependent on the local circumstances.

Similarly, travel and housing are very sensitive to geographic location, unless you want to travel across the world.

I don't think that "flawed economic thinking" follows from your second argument.


> the definition of market in a digital economy is global.

You do realize that global average GDP per capita is $11k? If you are in the US or Western Europe then the switch to "global economy" would probably push your salary down to $30k or less.


> the definition of market in a digital economy is global.

I strongly disagree. If you're a large tech conglomerate, and your center of mass is in the US, then the timezone difference (California -> New Delhi is 12.5hr) largely eliminates viability of real-time collaboration (without imposing significant burdens on the remote worker) and imposes significant communication delays if conducted asynchronously (e.g. over email, where your RTT is a full business day).

> cost of electronics, housing, clothing, accessories, vacation/travel etc are all same.

Mumbai seems to be broadly accepted as the most expensive city in India.

[0] says that if you want to buy a 1-bed apartment, it'll run you ~80 lakh - 1.5 crore. This translates roughly to $100k - $200k, in the most expensive city in the country.

For reference, in Atlanta, at the start of 2019 (so pre-pandemic shifts), the median 1-bed condo was just over $200k, per [1]. If we limit to top 200 metros, that's ~85th percentile, where the median price in the broader metro is comparable price-wise to a flat in the most expensive parts of Mumbai.

But okay, maybe instead it makes more sense to rent in Mumbai, at ~$300-500 per month. This is roughly half the price of renting in Oklahoma City, the cheapest metro surveyed by [2] (where according to [1], that same 1-bed would sell for $60k).

Are salaries in India (Mumbai in particular) depressed relative to comparable areas in the US? Likely! Looking purely at cost of purchasing housing, I'd expect salaries to be comparable to, say, Chicago, but that's nowhere near the case. Or if looking at rents, half of what a US-based engineer with comparable experience makes in a cheaper labor market (e.g. a new grad at Google in Chicago might receive ~$150k in total compensation).

Here's the thing, though: the primary driver of what companies are willing to pay you is not actually your estimated cost-of-living expenses (even if that's what they call it), but your local labor market. Minimum wage in Mumbai for skilled labor for college graduates is ~$18k (USD) / year. Minimum wage in Chicago is $14/hour; at 40 hours/week, 52 weeks/year, that's ~$30k / year which you could earn stocking shelves at Walmart or flipping burgers at McDonald's, as a high school dropout. Regardless, if you're good enough that $BIGTECH thinks you're worth the high salary, then you're good enough for them to want to relocate you. That bar's just much higher when they also need to sponsor a work visa (what with H1B quotas and all).

[0] https://housing.com/news/cost-of-living-in-mumbai/

[1] https://www.zillow.com/research/data/

[2] https://www.businessinsider.com/one-bedroom-apartment-cost-l...

> Besides, no matter where you are living, only a small part of your paycheck goes towards non-discretionary expenses like basic food and basic shelter.

Cheapest median rents in the Bay Area are about $2k / month for a 1-bed, although pre-pandemic you were almost definitely compensating with transportation costs / time spent commuting. If you work at a startup in SF and make a modest $100k/year, that's 25% of your pre-tax income (a third of your post-tax income) going to housing alone. Add on cost of food, utilities, commuting costs (let's underestimate each of these at $100 / month), and now you're looking at nearly 40% of your post-tax income. I don't consider that a small part of the paycheck.

(Okay, maybe you make $150k pre-tax instead because you're an engineer. Now it's "only" 30% of your post-tax income on these non-discretionary expenses, just to support yourself.)


This assumes a person doesn't travel. Often going on holiday to Europe will be more expensive for a person in India than for one in the US. This kind of thinking at Apple has racist connotations.


Do companies want people to ignore responsible disclosure and/or sell these vulnerabilities on grey/black markets?

I suspect they don't care in the end. The privacy/security stories are more there for marketing. End consumers won't know if the technicalities actually hold up in practice, so there's little incentive to run a tight and honest bounty program.

Disgusting behavior by Apple.


Once you become too big to fail, an ability to ignore issues is one of the perks.


I don’t think it is good for Apple’s image if they have another iCloud leak disaster.


Oh, definitely, if another Fappening happens Apple's privacy story turns into a complete joke, which is why Apple's incompetence with regards to this bug bounty is particularly baffling.


I still don't understand what the author means with his exploits (also against Instagram[0]) involving a race condition.

A race condition means that something unexpected happens when you do something concurrently. Not "it gives the same result as when done one by one, but we're doing it faster" (which is what the author appears to be doing). It has to be different from the result any sequential operation could achieve.

They used a few thousand IPs to hammer an endpoint, staying below the per-IP rate limit. Did it matter that this was done during the same time from all the IPs? If yes, it's not clear from the blog post how/why it mattered. If no (i.e. the result is the same when first completing all requests from the first IP, then the second, etc.), then it is not a race condition - just concurrency.

[0]: https://thezerohack.com/hack-any-instagram


From my understanding, there is a hard limit on how many attempts are allowed for entering the code for a specific account, regardless of the IP address. You can bypass that limit by sending all the attempts concurrently at once. The multiple IP addresses were used to bypass a different limit (a limit on concurrent connections).
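
For what it's worth, here is a rough sketch of what "sending all the attempts concurrently" can look like from the client side. The endpoint URL and payload are invented for illustration; the only point is that many guesses are in flight before any per-account counter gets updated:

    # Hypothetical sketch: fire a batch of OTP guesses at the same time so they
    # all arrive before the server's per-account attempt counter catches up.
    # The URL and payload are placeholders, not the real iCloud endpoint.
    import concurrent.futures
    import requests

    def try_code(code: str) -> int:
        resp = requests.post(
            "https://example.com/validate-otp",
            json={"account": "victim@example.com", "code": code},
            timeout=10,
        )
        return resp.status_code

    codes = [f"{i:06d}" for i in range(1000)]          # one batch of guesses
    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(try_code, codes))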


A slightly more detailed answer than the other two you got:

It's likely a well-designed password (or similar) validation endpoint will both limit attempts per IP and per user, to avoid exactly the attack you describe.

This limit probably isn't permanent (though I think the design of Apple's HSM may be different here; too many attempts may lock or delete user data entirely?). Rather, it would be something like "allow 5 attempts per hour per user."

So, first, even if the attacker only exploits concurrency to speed things up--which implies the limits are only per IP--they can conduct attacks which are otherwise infeasible. (E.g. with 10k IPs at 5 attempts per hour per IP, they can brute-force a six-digit PIN in about ten hours on average, as opposed to more than a decade from a single IP.)

But second, what I think the attacker is describing is actually worse: he's saying that the quota "counter" is updated with a read-modify-write pattern that's not safe for concurrent HTTP requests, so that you might have something like:

    [request 0] read counter value = 0
    [request 1] read counter value = 0
    [request 0] set counter value = 1
    [request 1] set counter value = 1

In this interleaving, two attempts go through but the counter only ends up at 1; with enough concurrency the quota is effectively never enforced.

It's easy to imagine people making this mistake when they store the counter in something that does not support transactional updates.
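
To make that failure mode concrete, here is a minimal sketch (obviously not Apple's code; all names are made up) of a per-account attempt counter that loses updates under concurrent requests, next to an atomic version:

    import threading

    attempts = {}               # account -> attempt count
    lock = threading.Lock()

    def record_attempt_racy(account: str) -> int:
        count = attempts.get(account, 0)     # read
        attempts[account] = count + 1        # write: two concurrent requests that
        return count + 1                     # read the same value overwrite each
                                             # other, so the quota undercounts

    def record_attempt_atomic(account: str) -> int:
        with lock:                           # or an atomic increment in the datastore
            attempts[account] = attempts.get(account, 0) + 1
            return attempts[account]

In a real service the counter typically lives in something like Redis or a database, where the equivalent fix is an atomic increment (e.g. Redis INCR) or a transactional update, rather than an in-process lock.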


> a well-designed password (or similar) validation endpoint will both limit attempts per IP and per user

I hope you'll never have to reset your password then, as I'll happily send out those 5 requests every hour to lock you out ;-)

This is also a kind of attack.


The rate limit per IP can be lower than per user to make this more difficult; you can also offer other options (captchas, less brute-forceable auth factors).

Not sure what your point is about “resetting” passwords, though. This applies to any validation flow, i.e. just logging in.


The race condition he was referring to might be that if you make requests fast enough, you can get more of them through than the request limit allows.


While it feels very HN to hate any company that didn't do security "right", I think what Apple says makes total sense here. The author claims that, under his assumptions, his exploit would have worked against a majority of iCloud accounts, but he never demonstrated that. You can't claim that you found a vulnerability without actually demonstrating it.

Key takeaway from Apple:

> They concluded that the only way to brute force the passcode is through brute forcing the Apple device which is not possible due to the local system rate limits.

The author did not understand that sentence accurately:

> There is very bleak chance for this endpoint to be not vulnerable to race hazard before my report because all the other endpoints I tested was vulnerable – SMS code validation, email code validation, two factor authentication, password validation was all vulnerable.

How I interpret that sentence is that, while other forms of verification are done on the server and thus subject to the vulnerability, the passcode verification is done on the device. When you send a passcode from another device, it is sent to the server and then routed to the device storing the passcode to perform the verification. Apple's servers do not store the hash throughout the process, and no form of brute force would have worked against the server. Instead, they are routed to the device storing the passcode, say iPhone, and the iPhone's HSM performs the verification. It's the HSM doing the rate limit here, and thus it's not subject to the vulnerability.


This isn't directly related to the article, but does anyone know of any good resources or best practices on how to report a vulnerability?

A few days ago I discovered a pretty major vulnerability on a certain website, but security isn't the focus of my day job and I wasn't sure where to begin and what to keep in mind. The author of this article had some problems with the disclosure process; maybe there are best practices that could avoid these.

I found the OWASP cheat sheet [0] really useful, but other than that, I didn't find too many other relevant resources.

The vulnerability I reported has now been fixed, but I'm still pondering whether to publish the details or if it would just stir up unnecessary trouble. So it would be good to have resources that will help inform my decision.

I think a lot of people who want to report vulnerabilities probably feel like they don't know what they're doing, and they probably don't feel very well supported through the disclosure process. At least, that's my experience.

[0] https://cheatsheetseries.owasp.org/cheatsheets/Vulnerability...


I would think any general contact form that merely opens the conversation would be reasonable (mailto:hello@example.com?subject=how+to+contact+your+security+team) as would checking the major bounty websites for a listing -- not that you would be shopping for the bounty, but because that's where a receptive audience would already be listening for such reports

As for whether to publish a fixed vuln, I would guess that boils down to whether you value the blog traffic and any commentary enough to wade into that. In my mental model, so long as your research was your own, then you generated that content and have every right to talk about it, perhaps even inspiring other non-traditional security researchers to try their hand, too


Just 6 blog posts prior, the author was writing about "How To Create A Blog On Bluehost In 3 Simple Steps".[1] While admittedly that was 3 years ago, still quite an impressive feat of leveling up!

[1] https://thezerohack.com/create-blog-bluehost


Seems like he was already hacking around on Facebook in 2015 [1]

[1] https://thezerohack.com/how-i-hacked-your-facebook-photos


$18k this time. $12.5k last time. Seems like this is his range


He got 30k from Instagram for the exact same concurrent bruteforce attack on their password reset flow.


Good finds :). Most of the 2-factor auth or password reset flows I came across while consulting had bugs. One of the more fun findings was that an authenticated, encrypted username was used for password resets. Another part of the application used the same encryption key and acted as an encryption oracle. Copy the ciphertext for the target username into the password reset link, and voila.
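
For readers who haven't seen this class of bug, here is a toy sketch of the encryption-oracle mistake being described. The library and names are illustrative, not the actual client code from that engagement:

    # Two features share one encryption key, so one acts as an oracle for the other.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # single key reused across features
    f = Fernet(key)

    def make_reset_token(username: str) -> bytes:
        return f.encrypt(username.encode())  # used in the password reset link

    def export_record(username: str) -> bytes:
        return f.encrypt(username.encode())  # unrelated feature, same key

    # Attacker asks the unrelated feature to encrypt "victim", then pastes the
    # ciphertext into the reset link; it decrypts and validates just fine.
    stolen = export_record("victim")
    assert f.decrypt(stolen) == b"victim"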


I mean honestly if money is what you're after, you should go to the dark market directly. You don't owe these corporations shit. And their consistent haggling with people who responsibly disclose vulns is proof of that.


Who's to say that the specific dark market isn't run/infiltrated by the FBI or other LEAs?


It seems that the author found a vulnerability in the iCloud password reset that could have potentially allowed you not only to gain access to an iCloud account but also to the passcode of a device. The reason why I think Apple decided not to give him a combined $350,000 bounty is that, from my understanding, he didn’t actually realise the severity of what he found initially, so he didn’t exploit it and provide a proof of concept; his bug bounty claim was therefore limited to what he found initially, and then Apple patched it (not a coincidence, really) before anyone could do anything more. As a result he now wants the full bounty, but Apple has decided to settle on a seemingly arbitrary number as the bounty. It’s easy to see why. Apple doesn’t want the bad PR from the fact that some random enthusiast found a way to compromise both the iCloud account and the passcode of an iPhone without even having the target’s physical device (insert Hollywood movie scene), and the fact that Apple with all their might may be vulnerable to something like this. On the other hand, the author is pissed that he did not fully exploit it in the first place and claim the full bounty by showing a proof of concept, and tried to be the good guy.


> Apple doesn’t want the bad PR from the fact that some random enthusiast found a way to compromise both iCloud and passcode of an iPhone

It is always problematic to do free work for big corporations. Corporations have an incentive to create "competitions" and similar "challenges" where many people participate doing free work, most of them find nothing, and the corporation can still under-pay the few that find something.

Can you even imagine the cost for Apple/Amazon/Google/... if they had to find all these problems by themselves? Can you see the amount of free labor that they get?

I find this free work justified for open source, like Linux, as everybody profits from it. It is a contribution to society. Fixing big corporations' problems in your spare time only causes security experts' salaries to go down, as you are doing the job for free.


Can you even imagine the cost to Apple/Amazon/Google if the white hat community decides these companies have no ethics or integrity, so why not just go black hat instead?

I have no idea what the dark market rate is for hacking a high profile iCloud account, but I'd be very surprised if it's less than $18k.


I wonder how many people are being driven by this behaviour to sell to the highest bidder next time they find something.


It's dangerous to get this kind of publicity. What happens the next time a security researcher finds a way to bypass Apple? They'll find this post and ask themselves "who is going to pay me more, Apple or the NSA?".


This is how you turn someone with good intentions into someone who'll choose to get paid and fuck the rest next time.

Well done Apple.


Do these bug bounty programs usually take a year from submitting the report to being approved for a bounty?


Not every program. I have seen Apple bug bounty reports from others completed in a month or so. But I don't know what took them so long in my case.


>But I don't know what took so long for them in my case.

Patching of the vulnerability probably, so they could weasel out of paying the full bounty.


I wonder how much he had to pay to use those 28k IP addresses.


Any idea how I can even go about acquiring this many IPs?


So long as your method is able to travel through HTTP, https://duckduckgo.com/?q=residential+proxy&ia=web are quite prolific. I have heard more than one time that those exit points are why a lot of VPNs are "free of charge"

I also recognize that it might take some additional effort to ensure subsequent requests exit from different IPs, but given the number of vendors in that space, I would guess it's not ludicrous, either


All Apple have done here is ensure that if an account takeover or information disclosure vulnerability is found in the future, nobody will trust that they will get paid the expected bug bounty.

Congrats, Apple. You just helped increase the chance that researchers sell very bad exploits to state-sanctioned attackers, and you won’t ever know about it.


Looked at another way: Bad guys now know this guy has talent, they also know Apple ripped him off, what’s to stop them from making… a better offer?

Apple probably thinks they’ve protected themselves, but if this guy “turns”, the entire industry is at risk due to Apple’s thrift.


I would take the 18k for a brute force attack dude… that’s more than fair. People tend to already know when you can brute force stuff.

> Rate limiting would be performed in the Apple server itself or in HSM (hardware security module). Either way, the rate limit logic should be programmed as such to prevent race hazard. There is very bleak chance for this endpoint to be not vulnerable to race hazard before my report because all the other endpoints I tested was vulnerable

The HSMs they’re using at Apple aren’t patchable… that’s kinda the point. Sorry.


The HSM is not handling the HTTP requests directly. If you send improper request data, like malformed XML, you will see errors from the iCloud server endpoint. The endpoint that sends data to the HSM could be patched to validate concurrent requests. Of course I may be missing something, but we will never know the truth until Apple confirms it.



