Cloning into 'tpwn'...
remote: Counting objects: 16, done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 16 (delta 3), reused 16 (delta 3), pack-reused 0
Unpacking objects: 100% (16/16), done.
Checking connectivity... done.
$ cd tpwn
gcc *.m -o tpwn -framework IOKit -framework Foundation -m32 -Wl,-pagezero_size,0 -O3
leaked kaslr slide, @ 0x0000000008e00000
Edit: for those of you wondering, no, I didn't just run this willy-nilly. I read the code thoroughly and determined there were no side-effects aside from just the PoC dropping to a root shell.
And wtf @qwertyoruiop this is how you "responsibly" release zero day?!
I just upgraded to 10.10.5 yesterday!
You're welcome to point out that you prefer that vulns be disclosed to vendors before public release, but to chastise someone for not subscribing to your preferred methodology seems a bit off base
No - chastising someone for performing an act you consider to be harmful to the public is totally legitimate.
This is a real 0day.
So you're going to tell me that this is a non-issue because it's "unexploitable"?
curl <some_url> | bash
gcc *.m -o tpwn -framework IOKit -framework Foundation -m32 -Wl,-pagezero_size,0 -O3
In file included from /usr/include/dispatch/dispatch.h:51:0,
/usr/include/dispatch/object.h:143:15: error: expected identifier or '(' before '^' token
typedef void (^dispatch_block_t)(void);
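For context, that error comes from compiling Apple's blocks extension (the `^` in `dispatch_block_t`) with FSF GCC, which doesn't support blocks; Apple's own toolchain aliases `gcc` to clang, which does. A likely fix, assuming a Homebrew-installed GCC is shadowing the system compiler:

```shell
# Invoke clang explicitly instead of a Homebrew-installed FSF GCC;
# the blocks extension (^) is supported by clang but not by GCC.
clang *.m -o tpwn -framework IOKit -framework Foundation \
      -m32 -Wl,-pagezero_size,0 -O3
```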
If you're feeling jumpy about binary packages, just set `HOMEBREW_BUILD_FROM_SOURCE="1"` in your shell profile.
Nobody's forcing binaries onto you.
clang version 3.6.1 (tags/RELEASE_361/final)
Thread model: posix
<-- can confirm it works out-of-the-box on OSX 10.10.4
It's encouraging that neither of these two major bugs worked in El Capitan. I'm not claiming it's going to be some sort of panacea; I'm sure it's just a matter of time before it gets exploited too. But it's still encouraging.
But start the mac hate train regardless - if facts don't count :)
> Apple appears to be in no rush to fix the first one, I wouldn't bet my money on this vulnerability getting a fix any time soon, either ...
As it was clearly stated, there is a fix. Whether or not they'll release a 10.10 patch remains to be shown, and "no rush" is speculation.
I'll never understand the HN crowd, but I guess providing additional information to clear up a false statement, while correcting OP's assumptions, is against the rules.
10.10.3, which includes that fix, was released 3 days ago. I'll never understand the HN crowd, whining about the HN crowd when you didn't bother to google before writing your rant.
I didn't know there was a 10.10.3 patch - I never said there wasn't. OP said there wasn't, I said it remains to be shown. You're more than welcome to correct me on that.
I don't understand what my patch sentence has to do with whining - or how that is whining about the HN crowd?
It's still unclear what was unwanted about my original comment, so I can't really correct my behavior (which was what I wanted all along).
Granted, it's been several years now, but the parent did say "has a poor record", and a 3 year patch delay fits that description.
More recently, there was the 4-day gap between patching "goto fail" on iOS vs. OS X. While not really the same point as the parent comment, it reflects poorly on their commitment to rapidly fixing things.
I can see why you'd drop a 0-day if you were somehow ignored or they were stalling for years, but dropping it without even trying is just irresponsible. There's a lot of room for improvement in Apple's handling of vulnerabilities, especially with regards to response time and proactive work, but I don't see how deliberately waiting until 10.10.5 and then releasing a 0-day is helping anyone.
The largest target for local privesc is the people who get hit by different kinds of malware, usually as a payload in sleazy spyware installers. The malware authors now have a few profitable weeks before Apple patches it (here's hoping they're quick!), and only the elite few who know how to deploy an unsigned kext (which still isn't available) will be able to protect themselves.
Still not seeing the upside to this.
There are people in this thread more secure today than they were yesterday, and they owe it to this "irresponsible disclosure." Do they have less of a right to security than the "unelite"? We don't know whether this is already being exploited by someone else, and as you said, it could be weeks before an official patch. This release both lights a fire under Apple and helps a few people patch early or at least be extra-careful about what they execute. That's an "upside"... I guess it's down to one's own values and possibly omniscience to conclusively determine whether the downsides outweigh that.
A more responsible version of this might be to release the source of a kext that patches the issue concurrently with confirmation from Apple. Apple got a few hours' head-start, some people can patch early, malware authors will have to spend some time reverse-engineering a complete exploit.
Malware already preys on those least capable of defending themselves, so an unsigned third-party kext or a performance-degrading boot option does nothing to protect them. We have no indication that this was being exploited by anyone else; if it were, that would be newsworthy in itself.
I like your idea of releasing an unofficial patch instead of exploit code though. I still think that you should follow the established responsible disclosure process, but it would at least show some interest in helping users. Oh, and don't be a dick and release it on a Saturday afternoon.
You don't need omniscience to determine whether the downsides outweigh the potential benefits to a tiny number of jumpy elite who would have to be constantly following and applying patches.
Notably those patches couldn't be applied blindly - so all of those 'elite' in parallel would have to fully understand the exploit and patches lest these become just another attack vector.
There is clearly no justification for this. This isn't just irresponsible. It's a straight up attack on users.
I'm all for responsible disclosure. But I think we need to clarify good and bad here. Responsible disclosure is better than just announcing findings to the world, but telling people about what you've discovered is not bad.
That's why it's responsible to tell the people who are responsible for mitigating the problem, in sufficient time for them to propagate a fix, before publicizing the exploit in the knowledge that you are giving it to bad actors.
The fact that the vulnerability existed before and may have even been being exploited does absolutely nothing to change this.
If it was already being exploited but isn't publicly known, then irresponsible disclosure simply makes the problem worse by increasing its availability to more bad actors who weren't previously in the know.
Telling the world via a fully working exploit causes a ton of collateral damage, and the author made no effort at all to reduce the impact. Waiting for 10.10.5 and releasing it on a Saturday afternoon makes it seem like the point was to cause as big a mess as possible.
And no, it's not like throwing a rock through somebody's window. The information may be used by other people to cause damage, but the mere act of releasing it is not by itself damaging. Let's put blame where it belongs: on the people actually using exploits for bad purposes. If you want to encourage responsible disclosure, don't lead with bad analogies about what happens when you announce a vulnerability to the world, because it just reduces your credibility.
If qwertyoruiop had instead come up with plans for making a suitcase nuke from household ingredients, or for breeding an Ebola analog using a home beer making kit, your argument would imply that you think that posting them on the Internet would not be bad.
It's really difficult to have a serious discussion about computer security vulnerabilities when people keep comparing it to throwing rocks through windows or weapons of mass destruction.
And yes, it's relevant, because the severity of a problem can and does influence how problematic various approaches are.
This is a local root exploit. Those are common and not generally problematic. The barrier to escalating from a normal user to root is at best the absolute last line of defense, and often completely irrelevant. It's a problem that should be fixed, don't get me wrong, but the severity is about 2 out of 10.
The fact that I think it's not a bad thing to release information like this has no bearing on what I would think about releasing information on building a suitcase nuke from household ingredients.
Could we try to keep the conversation grounded, here?
But that isn't what you've been saying - rather, you've been making the general argument that releasing information is not damaging or bad, and that we should only hold the people who exploit vulnerabilities responsible - not those who disclose them. Multiple people have argued against you on this.
Now you have switched your position to 'It's not bad to disclose vulnerabilities unless they are severe'. This seems much more reasonable, and came as the consequence of you being presented with what you are calling 'ridiculous' analogies.
To me this seems like a serious discussion done right.
Just for the record: I did inform Apple beforehand. Not so much before, but before.
I do not consider this to be their fault in any way as someone in this thread seems to be implying. Again, I had my reasons to drop such a thing publicly. I've had this for months, and I did not intend on disclosing at all. Proof of my "for months" assertion: https://www.youtube.com/watch?v=8arPid8GtFk
> As a bare minimum Apple needs a few hours to analyze the bug
Again, for the record: Apple has full details of the underlying bug. They won't even need to check my github at all.
I mean, you'd have to be omniscient to make the optimal choice with absolute certainty. What if somebody with the most sensitive data you can imagine gets infected and the data exfiltrated while waiting for Apple to patch this, but would have been safe after applying one of the suggested fixes from the exploiter? If you were omniscient you could tally up the total harm (or whatever metric you want to use) from various choices and choose the best one.
I recognize the mainstream-accepted calculus is more along the lines you are arguing for, and of course it makes sense to use probabilities and percentages since nobody's omniscient. But, given we don't know who else had this exploit, I still see some value in rapidly securing a few people rather than letting them remain vulnerable.
You say 'given that we don't know who else had this exploit' there is value in rapidly securing a few people rather than letting them remain vulnerable.
The question here is has the public availability of the exploit secured more users from realistic attack than it has exposed?
The fact that the vulnerability wasn't publicly known before is evidence that it wasn't in widespread use. Not conclusive evidence, but evidence nonetheless. Not knowing who had the exploit before doesn't mean we have no information on which to base the decision, and certainly doesn't mean we should adopt a policy based on the idea that the exploit is currently in widespread use.
> The question here is has the public availability of the exploit secured more users from realistic attack than it has exposed?
It probably has exposed more. But, it could depend on whether some OS X servers were patched, or company-managed OS X workstations, or virus-scanner definitions updated, which could simultaneously protect many users.
> certainly doesn't mean we should adopt a policy based on the idea that the exploit is currently in widespread use.
In the current climate, I'm not so sure. Maybe not "widespread" use since, as you say, it doesn't seem to be publicly known. But, if it were only in narrow use and the discloser has determined this was the fastest way to warn people, do those narrow victims deserve security less than everyone else? Some of the discloser's statements make me wonder if he does know of other exploiters. You can question whether he knows enough or has the right to make that call, but if he disclosed responsibly, you'd be trusting Apple in the same way. Are they inherently more capable of making a good decision just because they're a corp?
P.S. It's interesting that you seem to rely on "the community" to notice whether this was being exploited by malware, but are mad at somebody from "the community" reporting it without it having been exploited by malware. Of course I understand the reason this may not be the best way for it to be reported, but maybe we should be glad this was found and reported at all, for free, by someone in "the community." It almost seems entitled, to expect someone else to do you a favor for free, and for you to also dictate the terms.
There is nothing 'entitled' about the opinion that it's wrong to distribute information about malware publicly where it can be used by bad actors, without first giving vendors a chance to distribute a fix.
IMO it's naive to think nobody else knew about these relatively simple exploits, so I don't blame the reporter too badly for deciding they didn't want to wait weeks for Apple to fix it when they could fix it themselves in hours. It's useless for Apple to be "in a position to distribute a patch more rapidly and more widely than anyone else" if they don't actually distribute a patch rapidly. I guess we'll have to wait and see how rapid they are this time.
I guess we'll have to agree to disagree. I'm at least glad to read around a bit that it's a controversial topic, so we're both in good company in our respective opinions.
The point about people not wanting to wait for Apple is valid in that certain people can protect themselves ahead of apple distributing a fix.
However, it clearly doesn't change the calculus, since the number of people who can protect themselves is minuscule compared to the number of people made vulnerable. Even if it takes a month, that is going to distribute the patch far faster than this.
There is nothing controversial about this. It's a matter of statistics.
As for controversy, maybe I overstated that, but your flat counterstatement with no elaboration or support isn't going to change anyone's mind.
You are using it to make the situation seem less clear than it is rather than responding to a clearly articulated critique of your position.
If you differ on any of these topics why not say what you believe?
As an example of how "responsible disclosure" can fail, read https://en.wikipedia.org/wiki/Shellshock_(software_bug) . It was embargoed until a patch was ready, but the patch itself invited exploitation attempts and further scrutiny which revealed additional vulnerabilities. It was quite a mess, but IMO the only "best practices"-based way to avoid it would have been to never have introduced the vulnerability in the first place. Elsewhere in this thread, I cited an instance of Apple taking three years to fix a vulnerability responsibly disclosed. Would you say it was better to let that vulnerability sit for three years than to disclose it immediately so that it would get fixed within a few months? Obviously none of this proves we should jump to instant full disclosure, I just mean the existing approaches all have issues, so there is room for personal opinion and judgment. I don't even feel strongly about second-guessing this particular instance, because I'm betting the discloser knows more than we do. (And if you don't trust him, refer to my previous comments -- why trust Apple, when they have a record of being fairly slow?)
If I'm obfuscating by saying we can't know, you're making it deceptively simple by claiming you do know with broad statements like "the number of people who can protect themselves is miniscule compared to the number of people made vulnerable" (even though I've argued third parties can help secure unknowledgable users, if the issue is publicly disclosed before an Apple patch), "It's a matter of statistics", "the fact that the vulnerability wasn't publicly known" (how do you define "public" in a way that is both meaningful to your position and can be exhaustively searched to prove the "fact" that this wasn't known?).
It will have a hefty performance penalty, but if you value security over performance, it'll also protect you against a lot of (even 0day!) exploits.
The flag essentially prevents the kernel from accessing userland memory unless special routines are used. Since the bug is a NULL pointer dereference (which requires a read of userland memory in order to be exploited), exploitation becomes impossible. Due to this flag, however, your kernel will have to context switch every time a system call is made, which does have a noticeable performance impact. I will be releasing a KEXT to fix the bug soon.
And, coming from a Grub/ubuntu perspective, when you say "boot args", I think of the boot loader, which for Grub is configured with config files (text files) or else at boot-time, via the Grub menu. I know OSX has a single-user mode, but don't know of a way to edit boot args prior to completing the boot sequence.
Please don't take this the wrong way. I'm glad to see the original fix you gave, so much so that I want to know more about it: what provides the capability, and how would one find the specific options that mitigate such a vulnerability?
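For reference, OS X kernel boot arguments live in NVRAM rather than in a Grub-style config file. A sketch of how the -no-shared-cr3 flag discussed above could be set (this is standard `nvram` usage, but note the assignment replaces the entire boot-args variable, so fold in any flags you already have):

```shell
# Inspect current kernel boot arguments (prints nothing if unset)
nvram boot-args

# Set the mitigation flag; requires sudo and takes effect on reboot.
# NOTE: this overwrites the whole boot-args variable, so append to
# any existing value rather than clobbering it.
sudo nvram boot-args="-no-shared-cr3"

# To remove it again later:
sudo nvram -d boot-args
```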
Some relevant excerpts from Mac OS X and iOS Internals: To the Apple's Core by Jonathan Levin:
From page 133: In 64-bit mode, there is such a huge amount of memory available anyway that it makes sense to follow the model used in other operating systems, namely to map the kernel’s address space into each and every process. This is a departure from the traditional OS X model, which had the kernel in its own address space, but it makes for much faster user/kernel transition (by sharing CR3, the control register containing the page tables).
From page 266:
Still, unlike Windows or Linux, OS X applications in 32-bit (Intel) used to enjoy
a largely unfettered address space with virtually no kernel reservation — that is, the kernel had its own address space. Apple has conformed, however, and in 64-bit mode OS X behaves more like its monolithic peers: the kernel/user address spaces are shared, unless otherwise stated (by setting the -no-shared-cr3 boot argument on Intel architectures). The same holds true in iOS, wherein XNU currently reserves the top 2 GB of the 4 GB address space (prior to iOS version 4 the separation was 3 GB user/1 GB kernel).
 A brief description: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_In...
Doesn't Linux perform this "context switch at every syscall" ? How does it get away with the performance penalty?
Linux also supports the SMAP feature on modern Intel CPUs which allows the kernel to set things up so that all accesses to usermode memory from kernel mode must be explicitly annotated.
Non-shared-cr3 Macs (and IIRC some versions of PaX) also change `%cr3`, which means user-space and kernel-space have completely different address spaces (rather than a shared kernel-space and per-process user-space). This is much more expensive.
On recent kernels, the vsyscalls are actually the slowest way of all to ask for the time or the cpu number. They're only supported at all as a fallback, and the fallback is very slow, because it tries to mitigate exploit risks due to having code at a fixed address.
The added penalty is a switch to usermode to read userland data, then a switch back to the kernel to continue on... it's just additional context switches for reading userland memory.
I know essentially nothing about Darwin, but "no shared CR3" presumably means that the kernel will switch CR3 to make user memory inaccessible when running in kernel mode. This is approximately what grsecurity's UDEREF feature does.
On Linux, on Broadwell or newer, there's a similar HW mitigation called SMAP. Darwin might use it, too.
Linux also doesn't allow unprivileged programs to map very low addresses, making NULL pointer dereferences much harder to exploit.
(2) I'm trying to work through your ROP. Can you explain a bit more? Thanks.
2) When IOServiceRelease is called, the function at vtable+0x20 is called. The vtable pointer is controlled; at +0x20 I place a stack pivot, which sets RSP = RAX and pops 3 times. At 0x20 I place a POP RAX; RET gadget to let the chain begin after 0x28. The payload then locates the credentials structure, sets the UID to 0 by bzero()ing it, cleans up the memory corruption, decreases the task count for the current user and increases the task count for root. It then unlocks locks held by IOAudioEngine to prevent your audio from freezing up, and then returns to the userland context.
I wonder if this is related to the behavior where my iMac wakes up every minute starting every morning at 2AM. This is so obnoxious that I now turn my iMac off at night instead of putting it to sleep.
You will have to disable automatic updates for that not to occur.
tpwn has been tested from 10.9 to 10.10.5, but of course, your mileage may vary. The kASLR leak part is not 100% reliable, unlike the actual code execution, which is. I'd be interested in panic logs to sort the issue out, if you could share them.
~/code/tpwn % id -u
~/code/tpwn % ./tpwn
leaked kaslr slide, @ 0x0000000005600000
~/code/tpwn # id -u
leaked kaslr slide, @ 0x000000000f800000
bash-3.2$ id -u
I assume that if it doesn't work on 10.11, then rootless being enabled or disabled shouldn't make a difference. You still have a root user either way, it's just that if rootless is enabled, then the root user wouldn't be able to modify certain system directories, which could mitigate the consequences of such an attack if it did work.
It relies on two distinct bugs: an info-leak to obtain a pointer to an allocation in the kalloc.1024 zone, and a memory corruption primitive (deriving from a NULL pointer dereference in IOKit) allowing me to OR 0x10 anywhere in kernel memory.
To break kASLR I corrupt the size of a vm_map_copy struct, which allows me to read the allocation adjacent to the struct, which is a C++ object. The first 8 bytes of said C++ object are a pointer to its vtable, which resides in __TEXT of some kernel extension. Since I can calculate the unslid address from userland without any issue, subtracting the calculated address from the leaked one gives you the kASLR slide.
Just to clarify: The code execution part has 100% reliability rate. The kASLR leaking part does have some chance in it, however empirical evidence indicates that the failure rate is extremely low.
If OS X swapped XNU for a MINIX 3-style full microkernel, then exploiting IOKit as a least-privileged process would only pwn IOKit itself, limited to IOKit's ACLs. Still bad, but it likely wouldn't have the rights to exec a root shell.
XNU kexts have way too much authority, and all the syscalls they each tack on expand the attack surface to the total codebase of all Apple and third-party kexts. Because once you've found and symbolicated the not-really-hidden call table, you're pretty much able to do whatever you want. And with a memory-mutating kext bug ...
Expecting a finished product from a new project would be unreasonable. Minix 3 is early on, and it isn't the only full microkernel out there. They will probably find an approach to reduce context switches if that's a mature optimization to make. An Android-like mobile/embedded platform would make a sensible research-to-real use case, minus Java.
10.10.3 was actually the first version it was tested on.