Yikes. Especially looking at the diff of the original problematic fix, it seems like they slapped a quick patch on there and called it a day, instead of investigating to find the underlying architectural issue. Doesn't really inspire a lot of confidence that the resolution for unc0ver is any more thought-through. I wonder if they've identified the root cause? That'd be the really interesting piece to me.
Apple takes the approach of throwing humans instead of automation at a problem quite frequently:
> The press release mentions RMSI, an India-based, geospatial data firm that creates vegetation and 3D building datasets. And the office’s large headcount (now near 5,000) [used to create Apple Maps]
The lack of automated testing is something Apple is working on fixing, but they're a ways away from having anything substantial. The terrible iOS 13 release quite significantly bumped up the internal priority of stability and testing. iOS 14 is likely to be far less buggy than iOS 13 because of this culture change.
It all boils down to poor state management in a single algorithm.
The algorithm allocates a kernel object, then sends off a subroutine to do some work.
(The subroutine happens to run in another thread, but that's not really relevant to the bug.) As part of its job, the subroutine is supposed to free the object after its work is done, but only if condition A is true.
If A is false, the subroutine won’t free the memory, and it’s implied that the main routine is supposed to free the memory instead.
Now the issue is that there's no common code that checks condition A. Instead, the main routine and the subroutine have slightly different ideas about whether condition A is true or not. The condition's logic is pretty simple, so it's understandable that the kernel developer decided to write the same condition in two different places and in two different forms (instead of e.g. factoring it out into a macro).
Still, they managed to get it wrong.
The result is that in one particular case, the subroutine thinks A is true. So it frees the object. When the main routine gets back control, it thinks A is false (due to the duplicated, slightly wrong logic), and frees the object, too.
There's only a small time window between those two frees. But the window is large enough that a userspace thread, if it tries often enough, can force its own object into the place where the kernel object used to be, just in time before the double free happens.
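To make the pattern concrete, here's a minimal sketch in C. To be clear, this is not the actual XNU code: the struct, the F_ASYNC flag, and the exact shape of "condition A" are all invented for illustration.

    /* Hypothetical sketch of the pattern described above -- NOT the
     * real kernel code. "Condition A" is written twice, in slightly
     * different forms, and the two copies disagree for one input. */
    #include <stdlib.h>

    #define F_ASYNC 0x1

    struct kobj {
        int flags;
        int mode;
    };

    /* Subroutine: does some work, then frees the object -- but only
     * if its version of condition A is true. */
    static void subroutine(struct kobj *o)
    {
        /* ... do the actual work ... */
        if (o->flags & F_ASYNC)      /* the subroutine's idea of A */
            free(o);
    }

    static void main_routine(void)
    {
        struct kobj *o = malloc(sizeof(*o));
        if (o == NULL)
            return;
        o->flags = F_ASYNC;
        o->mode  = 2;

        subroutine(o);  /* runs on another thread in the real code */

        /* The main routine's duplicated idea of A. It also consults
         * 'mode', so for flags == F_ASYNC and mode == 2 it wrongly
         * concludes the subroutine did NOT free the object. (Even
         * reading o->flags here is already a use-after-free in that
         * case -- this is the window a racing userspace thread can
         * win to reallocate the slot.) */
        if (!((o->flags & F_ASYNC) && o->mode != 2))
            free(o);    /* double free in that one case */
    }

    int main(void)
    {
        main_routine();
        return 0;
    }

The fix is as simple as the bug: compute A once (or in one shared helper) and hand ownership off explicitly, so the two sides can't disagree.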
The statement is true, but it is not a root cause. It's a high-level description of what happened.
> Still, I'm very happy that Apple patched this issue in a timely manner once the exploit became public.
Sh- should we be happy Apple fixed this so quickly? unc0ver allows consumers to get more out of their Apple devices, and Apple's fix isn't really optional (unless you disable auto-updates and tap "Later" on every update notification). Is this exploit even an issue? Apple's probably not going to let an app exploiting this zero-day into its App Store, and sideloading is difficult; it's very unlikely someone malicious is going to trick people into installing malware that uses this exploit. It sounds to me like Apple is purposefully limiting consumer freedom by actively trying to prevent jailbreaking.
Jailbreaking cuts into their profit a small amount because the community is small.
The benefits are very much worth it, though. Most jailbreakers have had iOS 13 features since iOS 11/12. They have iOS 14 features now. Then there are other features that may never be released but that people find invaluable.
For example:
-A per-app, per-website firewall (block tracking/ads)
-Disable apps' ability to spy on your clipboard
-Block apps from accessing things you don't want them to, while still letting them launch
-Themes, so many options: remove your status bar or put new things there
-Detailed Wi-Fi and phone information
-Download old versions of apps because the company broke something
-Detailed phone/memory/CPU info
Those are just a few off the top of my head.
Disclaimer: I continue to use an iPhone primarily because I'm happy and fine with the walled garden.
Not to fanboy, but half of those are things that Android did from the get-go, and the rest have since been added or are generally easy to do.
To what end? My 5-year-old midrange phone (Snapdragon 801) still loads everything instantly. Is there actually any benefit to 'top of the line' mobile CPUs, except for mobile gamers?
iPads and iPhones are amazing hardware in every other respect, but Apple really needs to chill with the ultra-thin fetish.
(same goes for their PC hardware, but the hardware is not particularly amazing there.)
That was an issue with one phone, once, and it was a problem with the battery, not Android
> Not getting security updates after the first year
That's actually solved by rooting since you can update from any source instead of just signed packages
> Boot loop
I guess that could be a problem but I've never had that issue and I've rooted all but one phone that I've had
There are two parts to Android patching. Part one is the kernel/Android OS itself; yes, this gets patched with a rooted/custom ROM.
Part two is the hardware drivers themselves: the modem driver, etc. None of that is patched with a custom ROM. You still have to use the same binary blobs/drivers that came with the latest official phone firmware.
When security updates stop for your phone, all the updates for the underlying hardware drivers stop too. So yeah, you can slap some band-aids on it, but you're not really up to date even though you might be on a later/more recent version of Android.
If my house burns down, I personally won't care if it's android in general, the model, or the battery in the android phone which destroyed everything I own.
> That's actually solved by rooting since you can update from any source instead of just signed packages
Installing software downloaded from xda-developers is what I like to call malware as a service (tm).
Like the person said, it was literally one Samsung model that had issues, and it could just as well have been an iOS device. Looking at how many recalls and repair programs they have for design flaws, I don't think anyone should paint Apple as the better side here.
Just because something doesn't have any known issues doesn't mean it's not wise to avoid it when it flat-out smells.
Are they going to plant malware which steals your banking details? Probably less likely than a random binary package you found on some forum.
I haven't gotten as much mileage out of my Android phones compared to my 4S, but the 4S cost about 3 times as much as the Android phones I usually buy, and 3 lower-end-ish Android phones easily serve me for 10 years with no issues.
"Apple denied wrongdoing and settled the nationwide case to avoid the burdens and costs of litigation, court papers show." - https://www.reuters.com/article/us-apple-iphones-settlement-...
They haven't been "caught" doing that; they have been accused of that. Why is it the stupid conspiracy theory that wins the popularity contest, instead of the much more annoying true story: Apple cheaped out on batteries which couldn't provide enough current to run the phone as they aged? Or, just as factual: Apple slowed down the phone to keep it working longer on the same hardware, thus sparing people from having to buy new phones.
They settled out of court. They must have thought the case against them was pretty strong.
If that's not an admission of guilt...
Are there any lower-end phones that get security updates for 3 years?
> not worry about it.
I'm sorry, but this feels like a lie. I'm not trying to troll here, but most Android IT-level users I know have already flashed a custom ROM and actually buy their devices based on whether or not LineageOS supports them.
Almost all Android devices are non-Google devices, which leaves their OEM state bloated with custom sync crap nobody even wants to use but has no choice about.
Xiaomi, Motorola, Lenovo, Huawei, HTC ... all force their own shitty, half-working synchronization platforms onto their users' phones. And I bet this happens with major GDPR violations.
As an Android "user" I do not recommend Android for people who do not want to "worry about it". And product-wise, I think that's a huge quality issue.
Fixing bugs used to attack people means fixing bugs used for jailbreaks. There isn't some magical mechanism by which a jailbreak exploit isn't exploitable by anyone else.
The current model fails to protect people anyway while providing an extremely strong incentive for the community to publish software that undermines the “security” of the device.
How do you propose getting “informed” consent from an audience who doesn’t care and willingly expose everything about themselves and everyone they know to find out which Star Wars character or 80s pop song they are most like? Genuine question, as this doesn’t seem the least bit a solved problem anywhere.
Solutions in this area generally have a few characteristics. The first is that the "secure" case is genuinely useful for 95+% of people, to the point that they might not even know there are other, more permissive "modes". The second is putting surmountable but significant barriers in the way of disabling these features, to prevent casual/unintentional deactivation; strange key combinations that lead to scary text and wipe the device seem to be fairly effective at keeping out people who cannot give informed consent. And a third is allowing a user-specified root of trust: for example, one can imagine an iPhone that is every bit as secure as any other iPhone today because I have enabled all the security features, but that attests against my keys instead of Apple's. There's a lot of interesting work being done in this area. One approach I personally like is Chromebooks, which serve the dual purpose of being secure, "locked-down" devices for general consumer use while also being useful for development work, and there we're seeing interesting solutions such as using KVM to run an isolated Linux, developer mode, write-protect screws, …
Nobody said anything about clumsy people. As one of many examples, look at all the guides that tell folks to disable SIP without explaining the risk, when they often don't even need to disable SIP; the app should just be fixed properly.
There are exceptions of course, and good reasons to disable it, so I'm glad Apple has the option, but I'd venture to say that 85% of the time it's done by a person who isn't really giving "informed consent".
(Focusing on user consent obscures the actual problem; people often consent to running malware. What matters is whether the software to be run is useful and harmless.)
Ah, but not in an informed way–users don't typically run software they know to be harmful/useless :P (And no, telling them that it is harmful apparently isn't enough to inform them…) But I agree with the first part.
I believe all these begin with browser or existing-app based exploits.
None of them seem to rely on tricking the user into installing a new app. That would be too suspicious for the user, and it would entail the attacker uploading their exploit code to Apple and giving Apple a full list of the users they exploited...
Are you sure about this?
I'm far removed from the app store development world, but a cursory glance at the description and the original LightSpeed bug seems to indicate this is a problem in the kernel interface, and as such I assume it's callable by any application??
Sorry, I could be missing something, just curious why this couldn't occur in the app store.
> I wanted to find the vulnerability used in unc0ver and report it to Apple quickly in order to demonstrate that obfuscating an exploit does little to prevent the bug from winding up in the hands of bad actors.
Of course if he was this talented, surely he would routinely diff new kernel versions and realize the old bug had been reintroduced before having to rediscover it in a jailbreak?
He abused the jailbreak to cause a crash. A talented researcher would try that before diffing kernel binaries.
If a single security researcher can de-obfuscate it in under a day, then a nation state with huge funding can too. Maybe not in a day, but eventually.
Currently, sharing security issues with Apple is a guarantee that the tooling you use to get access to your device will stop working; there's definitely a bad incentive not to report security flaws at the moment.
I take your point, which is that jailbreaking is good if what you want is to run random unapproved code on your machine. But you didn't seriously engage with the comment you rebutted, because it is also true that jailbreak prevention prevents persistent kernel compromise --- is in fact a predicate for preventing persistent kernel compromise --- which is a thing that really does happen; in fact, it's far more relevant to the overwhelming majority of Apple users than running unapproved code is.
To have any idea whether an app is sharing your data, you need to be jailbroken; to have any idea of what is being sent from your device, you need to be jailbroken; to force stricter control on apps, you also need to be jailbroken. I mean, you get the point. Any action you could take regarding security requires you to be jailbroken first.
Some of us do that for this exact reason. I wish there was a way for me to just pick software to give root to though, this is way less secure.
As a side note, it's disappointing to see so much unfounded criticism here in the comments. Apple was going to find and fix this bug quickly, regardless of the author's efforts. In this case we get a peek into the inner workings of the exploit discovery process that would otherwise remain secret. The author and Apple both clearly noted that unc0ver was the source of the exploit, and the author made no attempts to hide that fact. Calling the author of this blog post "lazy" or an "informant" is out of touch and uncalled for.
> By 7 PM, I had identified the vulnerability and informed Apple
I don't know why this rubbed me the wrong way. Like, it feels "lazy" (for lack of a better word) to disassemble an exploit and run off to tell the vendor. If anything, the exploit writer should get the credit. I don't know.
They did: https://support.apple.com/en-us/HT211214
I'm guessing that's a policy/requirement of Project Zero as, presumably, the P0 folks are making "enough" already.
For locals, why bother? The optimizer will probably discard the writes, and worrying about stack addresses being reused is a waste of mental space and clutters the code.
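For illustration (a hypothetical example of mine, not code from anywhere in this thread), this is the kind of write the optimizer drops:

    #include <string.h>

    void handle_secret(void)
    {
        char key[32];
        /* ... derive and use key ... */

        /* key is never read again, so this is a dead store; at -O2 a
         * compiler may legally delete it entirely. */
        memset(key, 0, sizeof(key));
    }

That's exactly why things like C11's memset_s and explicit_bzero exist, for the rare cases where scrubbing a local actually matters.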
I was curious, so I poked around the Project Zero bug tracker to try to find ground truth about their bug reporting: https://bugs.chromium.org/p/project-zero/issues/list

All counts are rough numbers. For all issues, including closed:
product=Android returns 81 results
product=iOS returns 58
vendor=Apple returns 380
vendor=Google returns 145 (bugs in Samsung's Android kernel, etc. are tracked separately)
vendor=Linux returns 54
To be fair, a huge number of things make this not an even comparison, including the underlying bug rate, different products and downstream Android vendors being tracked separately. Also, # bugs found != which ones they choose to write about.
That's a team of ~10 security researchers over many years...
Considering how many are being discovered each day/month/year, chances are that there are at least hundreds undiscovered...
If it only takes one to ruin your life, and a good security researcher can find one in a few weeks, or months at most, the barrier to someone evil is really really low...
This doesn't change the fact that someone evil will still probably find one.
The issues found will strongly depend on who happens to be on the Google Project Zero team at the moment.
My post was to counter folks thinking P0 is a Google hit job, which seems to come up frequently on HN.
> My goal in trying to identify the bug used by unc0ver was to demonstrate that obfuscation does not block attackers from quickly weaponizing the exploited vulnerability.
And in fact, I will argue that this looks like it worked great: yes, someone--and of course, likely many people working in shadowy areas of organized crime, arms dealers, and government contractors--figured it out in hours, and they could have been malicious and used it to attack others. But the real question is then how many such attackers you enable and what their goals are. If you publish an exploit as open source code along with the tool (which some people have done in the past :/), you allow almost any idiot "end" developer to become an attacker: millions of people at low effort instead of thousands or hopefully even only hundreds (when combined with incentives, not just ability).
If you publish a closed source binary with obfuscation--one which is restricted to a limited usage profile (like, if nothing else, it isn't in the right UI form to "trick" someone into triggering it, or what it ostensibly "does" is too blatantly noticeable)--you limit the number of people who both have the time and incentives to work out the vulnerability and then rebuild a stable exploit for it (which is hard) down to a small number, almost none of whom (including the attackers) are then incentivized to publish a blog post (or certainly code) until at least months after it gets fixed (as was the case here).
And so, as someone who had been sitting in the core of this community--where everyone is wearing a grey hat, the vendors are the "bad guys", and "responsible disclosure" is being complicit in a dystopia--and dealing with these ethical challenges for a decade, my personal opinion is "please never ever drop a zero day on the world without it being a closed source obfuscated binary" unless you want to drop the barrier to entry so low that you have creepy software engineers quickly using the exploit against their ex-spouse as opposed to "merely" advanced attackers using the vulnerability for corporate or government espionage.
Obviously you have a better understanding of the iOS jailbreak scene than I ever will, but I still have to say I disagree with this ethical viewpoint. Personally, I'd rather run an open source exploit chain than obfuscated binaries from parties I do not know, which are difficult to be sure are safe. Thankfully in the case of unc0ver that is not an issue anymore, but in the past it has been an issue for longer time periods. OTOH, if there is really a moral dilemma in releasing 0days as open source specifically because of the small-time abusers and not nation-state adversaries, I don't understand how this moral quandary doesn't mean you can never ethically release an iBoot/more generally any bootrom exploit, for example.
I'm genuinely curious how many abusive people are motivated enough to come up with a creepy use for a tethered jailbreak. I know it's possible, but short of rolling your own stalkerware, it really doesn't seem too straightforward?
Google doesn't micromanage employees by the half hour like many companies...
Ooof. Talk about running in circles. Either this was someone who is swamped with work and spaced out, or a new programmer who wasn't familiar with the original. Oddly, I feel bad for both of them!
> Thus, this is another case of a reintroduced bug that could have been identified by simple regression tests.
So maybe writing and reading useful commit logs is not such a bad idea after all :)
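For what it's worth, a regression test for this class of bug can be as blunt as a stress test that hammers the affected syscall from several threads and passes if the kernel survives. A rough sketch, with the caveat that this is my own hypothetical harness (the bug lives in the lio_listio/LIO_NOWAIT path, per the original LightSpeed write-up, and a serious test would run under kernel memory-safety tooling rather than just waiting for a panic):

    /* Hypothetical stress-test sketch for the lio_listio double-free
     * race. "Passing" here just means the kernel survives the run. */
    #include <aio.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS   4
    #define ITERATIONS 10000

    static int fd;

    static void *hammer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            /* Heap-allocate (and deliberately leak) the control block
             * and buffer so in-flight requests never touch freed user
             * memory; the race we're probing is in the kernel. */
            struct aiocb *cb = calloc(1, sizeof(*cb));
            char *buf = malloc(32);
            if (cb == NULL || buf == NULL)
                break;
            cb->aio_fildes = fd;
            cb->aio_buf = buf;
            cb->aio_nbytes = 32;
            cb->aio_lio_opcode = LIO_READ;
            struct aiocb *list[1] = { cb };
            /* LIO_NOWAIT exercises the asynchronous completion path
             * where the two sides disagreed about who frees the
             * context. */
            (void)lio_listio(LIO_NOWAIT, list, 1, NULL);
        }
        return NULL;
    }

    int main(void)
    {
        fd = open("/dev/zero", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, hammer, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);

        puts("kernel survived");  /* no panic during the run */
        return 0;
    }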
Reminds me of this quote: "Those who don't know their history are doomed to repeat it."
I often wonder what goes through the minds of those whose work helps companies exert more control over their customers. Maybe some of them are not so "obedient" after all...