OS X 10.10.5 kernel local privilege escalation (github.com)
278 points by tyilo on Aug 16, 2015 | 134 comments

$ git clone https://github.com/kpwn/tpwn.git

Cloning into 'tpwn'...

remote: Counting objects: 16, done.

remote: Compressing objects: 100% (11/11), done.

remote: Total 16 (delta 3), reused 16 (delta 3), pack-reused 0

Unpacking objects: 100% (16/16), done.

Checking connectivity... done.

$ cd tpwn

$ make

gcc *.m -o tpwn -framework IOKit -framework Foundation -m32 -Wl,-pagezero_size,0 -O3

strip tpwn

$ ./tpwn

leaked kaslr slide, @ 0x0000000008e00000

sh-3.2# whoami

root

Shit's real.

Edit: for those of you wondering, no, I didn't just run this willy-nilly. I read the code thoroughly and determined there were no side-effects aside from just the PoC dropping to a root shell.

I did the same, didn't get the root shell but instead got a standard OS X kernel panic screen and system reboot twice before it came back to life.

And wtf @qwertyoruiop this is how you "responsibly" release zero day?!

I just upgraded to 10.10.5 yesterday!

You have quotes around "responsibly". Is that a quote from somewhere in this disclosure?

You're welcome to point out that you prefer that vulns be disclosed to vendors before public release, but to chastise someone for not subscribing to your preferred methodology seems a bit off base.

> chastise someone for not subscribing to your preferred methodology seems a bit off base

No - chastising someone for performing an act you consider to be harmful to the public is totally legitimate.

Nice to see a 100% working widely exploitable 0day without any caveats that make it not real-world applicable. Local, so you need a non-admin account on the box which is kinda hard.

I don't understand what you're implying. This is a 0day that could be exploited from any number of outside channels.

You can also use it to build malware that would normally be defeated by an unprivileged account right?

A Java applet could exploit this. With a little work, a Flash load in Safari (or really any browser) could probably exploit it.

This is a real 0day.

I can't reply to your post lower in the thread, but you're saying a Java applet and a Flash app can exploit this? It's a C program that requires a local account on the box. You would need to break out of the Java sandbox into local unprivileged shell access before you could exploit this via Java. Same with Flash: you would need a Flash exploit that breaks you out of the sandbox before you can exploit this. In other words, you'd need two major vulnerabilities to do what you want to do. That's why I posted a reply saying 'tell us how'. It's important to understand that to exploit this you need a local unprivileged account on the box.

Java applet and Flash 0days exist in the wild whether they're published or not. I am confident that 0days for them exist and are actively used right now.

So you're going to tell me that this is a non-issue because it's "unexploitable"?

it's a major piece of a multi-step own, no doubt. Get write and execute from browser, email client or office doc vulns, and shit's on. Rootkit installed and bot ready to install more payload components.

I think you spelled 'exploitable' wrong, since that's what I actually said when passing on my compliments to the author above. I feel like I'm arguing with a dining room table.

easily exploitable in the mode of many project "installers" these days, which is something like:

  curl <some_url> | bash

Add malware to valued homebrew tool, profit.

Go ahead and give us an example.

What user does your web browser run as? Do you use chrome? Any Chrome extensions use the NDK?

Shell script in a git repo?

Phew, didn't work for me out of the box:

$ make

  gcc *.m -o tpwn -framework IOKit -framework Foundation -m32 -Wl,-pagezero_size,0 -O3
  In file included from /usr/include/dispatch/dispatch.h:51:0,
                   from /System/Library/Frameworks/IOKit.framework/Headers/IOKitLib.h:56,
                   from import.h:13,
                   from lsym.h:5,
                   from lsym.m:1:
  /usr/include/dispatch/object.h:143:15: error: expected identifier or '(' before '^' token
   typedef void (^dispatch_block_t)(void);

Just to clarify: that just means you couldn't compile it. It doesn't imply your system is or isn't vulnerable.

Sure, but it's somehow satisfying to know that it's broken for those who don't make the effort to fix it. ;) (It happened to me because I'm using a different compiler than the exploit needs.)

Malware comes precompiled and ready to run, it doesn't depend on you having a compiler installed.

True that! I'm looking at you, homebrew ..

Not sure why you're looking at Homebrew.

If you're feeling jumpy about binary packages, just set `HOMEBREW_BUILD_FROM_SOURCE="1"` in your shell profile.

Nobody's forcing binaries onto you.
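For the record, opting out is a one-line shell-profile setting (a sketch: `HOMEBREW_BUILD_FROM_SOURCE` is Homebrew's documented switch; which profile file your shell reads is up to you):

```shell
# In ~/.bash_profile or ~/.zshrc: make `brew install` compile formulae
# locally instead of fetching prebuilt binary bottles.
export HOMEBREW_BUILD_FROM_SOURCE=1
```

You still have to trust the formula and the upstream tarball, of course; this only takes the prebuilt binary out of the chain.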

True, but I don't have all the time in the world to read the sources, either ..

Then you should not install them on a machine with confidential data at all, source or binary. It's that simple!

use `clang`, not `gcc`.

  clang version 3.6.1 (tags/RELEASE_361/final)
  Target: x86_64-apple-darwin14.4.0
  Thread model: posix

<-- can confirm it works out-of-the-box on OSX 10.10.4

Cool, thanks for pointing that out. I was pretty glib about my attempt to check it out ..

Anyone who is worried about privilege escalation on OS X should be aware that Apple ships sudo with tty_tickets disabled. This means that sudo authentication is not bound to the TTY in which the authentication occurred, and so using sudo for anything is tantamount to giving root to all of your processes.

UPDATE: https://news.ycombinator.com/item?id=10069706
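The corresponding sudoers hardening is a two-line change (a sketch: `tty_tickets` and `timestamp_timeout` are standard sudo options, and the file should only ever be edited through visudo):

```
# edit with: sudo visudo
Defaults tty_tickets         # cache credentials per TTY, not per user
Defaults timestamp_timeout=0 # optional: ask for the password on every sudo
```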

So we currently have 2 local privilege escalation exploits [1] available for Mac OSX. Apple appears to be in no rush to fix the first one, I wouldn't bet my money on this vulnerability getting a fix any time soon, either ...

[1] http://bit.ly/1MrsdID

They've released a fix for the DYLD bug, and it didn't affect El Capitan anyway. Point of interest, but I had manually changed my sudoers file when I was running Yosemite. I'm now running the El Capitan beta. The sudoers file had been happily unchanged up until the most recent update, when it was reset... I don't know if it's related to concerns over the DYLD bug, but I'm guessing it is.

It's encouraging that neither of these two major bugs worked in El Capitan. Not claiming that it's going to be some sort of panacea; I'm sure it's just a matter of time before it gets exploited too. But it's still encouraging.

The DYLD_PRINT_TO_FILE exploit was fixed in 10.10.5.

It was not fixed. There are currently no plans to fix it in 10.10.


That article is two weeks old. 10.10.5 was released 3 days ago, and the fix is mentioned in the release notes (under dyld): https://support.apple.com/en-us/HT205031

One of the vulns in this exploit is fixed, rendering the exploit "useless" in 10.11.

But start the mac hate train regardless - if facts don't count :)

Why the massive downvotes when contributing to the issue he raises?

> Apple appears to be in no rush to fix the first one, I wouldn't bet my money on this vulnerability getting a fix any time soon, either ...

As it was clearly stated, there is a fix. Whether or not they'll release a 10.10 patch remains to be shown, and "no rush" is speculation.

I'll never understand the HN crowd, but I guess providing additional information to clear up a false statement, while correcting OP's assumptions, is against the rules.

>Whether or not they'll release a 10.10 patch remains to be shown

10.10.5, which includes that fix, was released 3 days ago. I'll never understand the HN crowd, whining about the HN crowd while you didn't bother googling before writing your rant.

I'm sorry, where's the rant?

I didn't know there was a 10.10.5 patch - I never said there wasn't. OP said there wasn't, I said it remains to be shown. You're more than welcome to correct me on that.

I don't understand what my patch sentence has to do with whining - or how that is whining about the HN crowd?

It's still unclear what was unwanted about my original comment, so I can't really correct my behavior (which was what I wanted all along).

Just curious: when did you disclose this to Apple? I'm impressed by your skill in finding this, but I'm not sure it's a good idea to make it this easy for people to weaponize.

Apple traditionally has a poor record of responding to delayed disclosures. They burned that bridge a long time ago.

No, they did not. Provide some evidence, please. They've responded to hundreds of security issues and have credited people in security updates. About the only incident I can recall is the Google one, when Google automatically disclosed vulns after Apple asked them for more time since it affected a very deep kernel issue.

My vague recollections of past vulns (and admitted dislike for Apple) lead me to believe the parent comment, so I did some Googling. "A prominent security researcher warned Apple about this dangerous vulnerability in mid-2008, yet the company waited more than 1,200 days to fix the flaw."[1] But hey, they did credit him in the release notes!

Granted, it's been several years now, but the parent did say "has a poor record", and a 3 year patch delay fits that description.

More recently, there was the 4-day gap[2] between patching "goto fail" on iOS vs. OS X. While not really the same point as the parent comment, it reflects poorly on their commitment to rapidly fixing things.

[1] http://krebsonsecurity.com/2011/11/apple-took-3-years-to-fix... [2] http://www.cnet.com/news/apple-finally-fixes-gotofail-os-x-s...

Apple fixes hundreds of vulnerabilities every year, and while they do drop the ball on one or two, those are exceptions, not typical. Reports to product-security@apple.com are responded to by a human within a few hours, and issues are typically patched in the next point release or the next after.

I can see why you'd drop a 0-day if you were somehow ignored or they were stalling for years, but dropping it without even trying is just irresponsible. There's a lot of room for improvement in Apple's handling of vulnerabilities, especially with regards to response time and proactive work, but I don't see how deliberately waiting until 10.10.5 and then releasing a 0-day is helping anyone.

I'm not sure if I would ever fully support this kind of disclosure, but maybe we should consider that both this person and Stefan Esser (DYLD_PRINT_TO_FILE) concurrently released fixes/mitigations, meaning vigilant users are secure faster than they would be if it was kept secret until Apple patched it.

Are you talking about his comment about -no_shared_cr3, which causes a noticeable performance degradation, and how he will be releasing a kext soon - neither of which is mentioned in the github readme, but buried in HN comments? A small portion of users might be considered vigilant, but they're not psychic.

The largest target for local privesc are the people who get hit by different kinds of malware, usually as a payload in sleazy spyware installers. The malware authors now have a few profitable weeks before Apple patches it (here's hoping they're quick!) and only the elite few who know how to deploy an unsigned kext (that still isn't available) will be able to protect themselves.

Still not seeing the upside to this.

I'm counting on the press and community to distribute the information about the no_shared_cr3 workaround and any kext that comes out, just as Apple's fix is most effective when everyone's being told they should be sure it gets applied.

There are people in this thread more secure today than they were yesterday, and they owe it to this "irresponsible disclosure." Do they have less of a right to security than the "unelite"? We don't know whether this is already being exploited by someone else, and as you said, it could be weeks before an official patch. This release both lights a fire under Apple and helps a few people patch early or at least be extra-careful about what they execute. That's an "upside"... I guess it's down to one's own values and possibly omniscience to conclusively determine whether the downsides outweigh that.

A more responsible version of this might be to release the source of a kext that patches the issue concurrently with confirmation from Apple. Apple got a few hours' head-start, some people can patch early, malware authors will have to spend some time reverse-engineering a complete exploit.

As a bare minimum Apple needs a few hours to analyze the bug, and if the fix is straightforward and doesn't cause any regressions, the QA process can begin. Getting it out to the general public in less than a week is extremely unlikely, and that's provided that they deem it critical enough for an emergency patch (my guess is no). Otherwise we're going to be vulnerable until 10.10.6 and security update 2015-007.

Malware already preys on those least capable of defending themselves, so an unsigned 3rd party kext or a performance degrading boot option does nothing to protect them. We have no indication that this was being exploited by anyone else, if they were that would be newsworthy in itself.

I like your idea of releasing an unofficial patch instead of exploit code though. I still think that you should follow the established responsible disclosure process, but it would at least show some interest in helping users. Oh, and don't be a dick and release it on a Saturday afternoon.

This argument is a general argument for irresponsible disclosure of 0-days.

You don't need omniscience to determine whether the downsides outweigh the potential benefits to a tiny number of jumpy elite who would have to be constantly following and applying patches.

Notably those patches couldn't be applied blindly - so all of those 'elite' in parallel would have to fully understand the exploit and patches lest these become just another attack vector.

There is clearly no justification for this. This isn't just irresponsible. It's a straight up attack on users.

Malware that uses this exploit to cause damage is a straight up attack on users. Merely revealing a vulnerability isn't an attack. The vulnerability was there all along, and there's no guarantee that this person is the only one who could find it, or even the first.

I'm all for responsible disclosure. But I think we need to clarify good and bad here. Responsible disclosure is better than just announcing findings to the world, but telling people about what you've discovered is not bad.

Telling people about what you have discovered is bad if you can reasonably tell that those people are bad actors who will use the information you are giving them to do harm.

That's why it's responsible to tell the people who can be responsible for mitigating the problem in sufficient time for them to propagate a fix before publicizing the exploit in the knowledge that you are giving it to bad actors.

The fact that the vulnerability existed before and may have even been being exploited does absolutely nothing to change this.

If it was already being exploited but isn't publicly known, then irresponsible disclosure simply makes the problem worse by increasing its availability to more bad actors who weren't previously in the know.

There's writing a nicely worded letter, and sending it in a rose scented envelope. And then there's scribbling a note, tying it to a rock, and throwing it through the window.

Telling the world via a fully working exploit causes a ton of collateral damage, and the author made no effort at all to reduce the impact. Waiting for 10.10.5 and releasing it on a Saturday afternoon makes it seem like the point was to cause as big a mess as possible.

Maybe Saturday afternoon just happened to be when he finished.

And no, it's not like throwing a rock through somebody's window. The information may be used by other people to cause damage, but the mere act of releasing it is not by itself damaging. Let's put blame where it belongs: on the people actually using exploits for bad purposes. If you want to encourage responsible disclosure, don't lead with bad analogies about what happens when you announce a vulnerability to the world, because it just reduces your credibility.

for the record: i had no idea yesterday was saturday at the time I dropped the code.

Nothing says "hardcore hacker at work" quite like forgetting what day of the week it is.

Nothing says 'irresponsible' like not thinking at all about the impact of releasing information you know people will use for harm. Publishing to github comes after the hacking.

If qwertyoruiop had instead come up with plans for making a suitcase nuke from household ingredients, or for breeding an Ebola analog using a home beer making kit, your argument would imply that you think that posting them on the Internet would not be bad.

I disagree.

And again with the ridiculous analogies.

It's really difficult to have a serious discussion about computer security vulnerabilities when people keep comparing it to throwing rocks through windows or weapons of mass destruction.

And yes, it's relevant, because the severity of a problem can and does influence how problematic various approaches are.

This is a local root exploit. Those are common and not generally problematic. The barrier to escalating from a normal user to root is at best the absolute last line of defense, and often completely irrelevant. It's a problem that should be fixed, don't get me wrong, but the severity is about 2 out of 10.

The fact that I think it's not a bad thing to release information like this has no bearing on what I would think about releasing information on building a suitcase nuke from household ingredients.

Could we try to keep the conversation grounded, here?

If you had been saying before now that this wasn't a severe bug and so we shouldn't be too concerned about disclosure, then you wouldn't have been presented with these strong counterarguments.

But that isn't what you've been saying - rather, you've been making the general argument that releasing information is not damaging or bad, and that we should only hold the people who exploit vulnerabilities responsible - not those who disclose them. Multiple people have argued against you on this.

Now you have switched your position to 'It's not bad to disclose vulnerabilities unless they are severe'. This seems much more reasonable, and came as the consequence of you being presented with what you are calling 'ridiculous' analogies.

To me this seems like a serious discussion done right.

> and the author made no effort at all to reduce the impact


Thank you. Any particular reason you decided to lead with the exploit and not this patch?

I did not have the patch ready when the exploit was published, that's the only reason why. I had my reasons to publish the exploit in public yesterday, but all I can say is "no comment".

Just for the record: I did inform Apple beforehand. Not so much before, but before.

I do not consider this to be their fault in any way as someone in this thread seems to be implying. Again, I had my reasons to drop such a thing publicly. I've had this for months, and I did not intend on disclosing at all. Proof of my "for months" assertion: https://www.youtube.com/watch?v=8arPid8GtFk

> As a bare minimum Apple needs a few hours to analyze the bug

Again, for the record: Apple has full details of the underlying bug. They won't even need to check my github at all.

That restores some of my faith in humanity, thank you. We'll be looking at the kext today at work, but due to Apple's kext signing requirements I don't know how feasible it is to roll it out.

I have asked on Twitter if anyone could sign it for me. For some reason neither of the two people who tried to do so were able to sign it. No idea why. kexts were signed but they kept getting rejected for some reason.

You need a developer ID with kext signing ability, not a regular developer ID:


The video mentions iOS being vulnerable (around 1:00), but the exploit doesn't mention it. How vulnerable is iOS?

iOS is vulnerable too as far as the vulnerability is concerned. It is not directly exploitable on iOS, however having a NULL task_t still does give you some abilities, even if not (directly?) SVC code exec.

> You don't need omniscience to determine whether the downsides outweigh the potential benefits

I mean you'd have to be omniscient to make the optimal choice with absolute certainty. What if somebody with the most sensitive data you can imagine gets infected and the data exfiltrated if they have to wait for Apple to patch this, but is safe after applying one of the suggested fixes from the exploiter? If you were omniscient you could tally up the total harm (or whatever metric you want to use) from various choices and choose the best one.

I recognize the mainstream-accepted calculus is more along the lines you are arguing for, and of course it makes sense to use probabilities and percentages since nobody's omniscient. But, given we don't know who else had this exploit, I still see some value in rapidly securing a few people rather than letting them remain vulnerable.

Your argument that you'd have to be omniscient to make the optimal choice with absolute certainty is true of all human decision making at all times, and therefore doesn't change the argument at all.

You say 'given that we don't know who else had this exploit' there is value in rapidly securing a few people rather than letting them remain vulnerable.

The question here is: has the public availability of the exploit secured more users from realistic attack than it has exposed?

The fact that the vulnerability wasn't publicly known before is evidence that it wasn't in widespread use. Not conclusive evidence, but evidence nonetheless. Not knowing who had the exploit before doesn't mean we have no information on which to base the decision, and certainly doesn't mean we should adopt a policy based on the idea that the exploit is currently in widespread use.

You are right that my argument is sort of philosophical and not practical.

> The question here is has the public availability of the exploit secured more users from realistic attack than it has exposed?

It probably has exposed more. But, it could depend on whether some OS X servers were patched, or company-managed OS X workstations, or virus-scanner definitions updated, which could simultaneously protect many users.

> certainly doesn't mean we should adopt a policy based on the idea that the exploit is currently in widespread use.

In the current climate, I'm not so sure. Maybe not "widespread" use since, as you say, it doesn't seem to be publicly known. But, if it were only in narrow use and the discloser has determined this was the fastest way to warn people, do those narrow victims deserve security less than everyone else? Some of the discloser's statements make me wonder if he does know of other exploiters. You can question whether he knows enough or has the right to make that call, but if he disclosed responsibly, you'd be trusting Apple in the same way. Are they inherently more capable of making a good decision just because they're a corp?

P.S. It's interesting that you seem to rely on "the community" to notice whether this was being exploited by malware, but are mad at somebody from "the community" reporting it without it having been exploited by malware. Of course I understand the reason this may not be the best way for it to be reported, but maybe we should be glad this was found and reported at all, for free, by someone in "the community." It almost seems entitled, to expect someone else to do you a favor for free, and for you to also dictate the terms.

Warning people isn't the issue. Apple is without question in a position to distribute a patch more rapidly and more widely than anyone else. It has nothing to do with trusting them because they are a 'corp'.

There is nothing 'entitled' about the opinion that it's wrong to distribute information about malware publicly where it can be used by bad actors, without first giving vendors a chance to distribute a fix.

Technically in this case, Apple was given advance notice: https://news.ycombinator.com/item?id=10070799 . I'm guessing we'd all agree it was not a reasonable amount of time to actually fix it... but then who gets to decide what that is? If I'm counting right, it took over a month for Apple to patch DYLD_PRINT_TO_FILE, which was disclosed in a similar way.

IMO it's naive to think nobody else knew about these relatively simple exploits, so I don't blame the reporter too badly for deciding they didn't want to wait weeks for Apple to fix it when they could fix it themselves in hours. It's useless for Apple to be "in a position to distribute a patch more rapidly and more widely than anyone else" if they don't actually distribute a patch rapidly. I guess we'll have to wait and see how rapid they are this time.

I guess we'll have to agree to disagree. I'm at least glad to read around a bit that it's a controversial topic, so we're both in good company in our respective opinions.

You keep saying things like 'who is to decide?'. If you really believed that, you would cease posting your opinions here.

The point about people not wanting to wait for Apple is valid in that certain people can protect themselves ahead of apple distributing a fix.

However it clearly doesn't change the calculus since the number of people who can protect themselves is miniscule compared to the number of people made vulnerable. Even if it takes a month, that is going to distribute the patch far faster than this.

There is nothing controversial about this. It's a matter of statistics.

When I say "who is to decide?", I mean, as we have discussed, there is no way to be absolutely sure of the best course given the unknowns, so that reasonable people can differ, and they will, based on differing ideologies on things like risk management, duties to the public, and personal agency.

As for controversy, maybe I overstated that, but your flat counterstatement with no elaboration or support isn't going to change anyone's mind.

You keep saying 'there is no way to be absolutely sure' - but this is a truism that applies to all of human decision making.

You are using it to make the situation seem less clear than it is rather than responding to a clearly articulated critique of your position.

If you differ on any of these topics why not say what you believe?

It does apply to all of human decision making, which is why we have different political parties, different schools of thought within a scientific field, different types of government, etc. My point is that full disclosure vs. secrecy (and various points in between) is an instance of that type of dilemma, so only a closed-minded person would insist their particular choice is the only position that is always right.

As an example of how "responsible disclosure" can fail, read https://en.wikipedia.org/wiki/Shellshock_(software_bug) . It was embargoed until a patch was ready, but the patch itself invited exploitation attempts and further scrutiny which revealed additional vulnerabilities. It was quite a mess, but IMO the only "best practices"-based way to avoid it would have been to never have introduced the vulnerability in the first place. Elsewhere in this thread, I cited an instance of Apple taking three years to fix a vulnerability responsibly disclosed. Would you say it was better to let that vulnerability sit for three years than to disclose it immediately so that it would get fixed within a few months? Obviously none of this proves we should jump to instant full disclosure, I just mean the existing approaches all have issues, so there is room for personal opinion and judgment. I don't even feel strongly about second-guessing this particular instance, because I'm betting the discloser knows more than we do. (And if you don't trust him, refer to my previous comments -- why trust Apple, when they have a record of being fairly slow?)

If I'm obfuscating by saying we can't know, you're making it deceptively simple by claiming you do know, with broad statements like "the number of people who can protect themselves is miniscule compared to the number of people made vulnerable" (even though I've argued third parties can help secure unknowledgeable users, if the issue is publicly disclosed before an Apple patch), "It's a matter of statistics", and "the fact that the vulnerability wasn't publicly known" (how do you define "public" in a way that is both meaningful to your position and can be exhaustively searched to prove the "fact" that this wasn't known?).

Apple trash talking and making up "facts" will not get you far here on Hacker News. This isn't reddit or 4chan, you need to think your comments through.

Any way to protect a machine until Apple publishes an update?

add -no_shared_cr3 to your boot-args.

it will have a hefty performance penalty, but if you value security over performance, it'll also protect you against a lot of (even 0day!) exploits.

Could you provide some context for that? What does that flag do? How do you even set boot args for OS X? I have little context on the OS X boot process, and would like to understand this better.

'sudo nvram boot-args=-no_shared_cr3' will do the trick.

The flag essentially prevents the kernel from accessing userland memory unless special routines are used. Since the bug is a NULL pointer dereference (which requires a read of userland memory in order to be exploited), exploitation becomes impossible. Due to this flag, however, your kernel will have to context switch every time a system call is done, which does have a noticeable performance impact. I will be releasing a KEXT to fix the bug soon.

Well, I was more looking for an explanation of what "no shared CR3" means. What is CR3, how do I know to go to that option as a way to disable this exploit.

And, coming from a Grub/ubuntu perspective, when you say "boot args", I think of the boot loader, which for Grub is configured with config files (text files) or else at boot-time, via the Grub menu. I know OSX has a single-user mode, but don't know of a way to edit boot args prior to completing the boot sequence.

Please don't take this wrong. I'm glad to see the original fix you gave, so much so that I want to know more about it. What provides the capability, how to know the specific options that mitigate such a vulnerability.

CR3: https://en.wikipedia.org/wiki/Control_register#CR3

Some relevant excerpts from Mac OS X and iOS Internals: To the Apple's Core by Jonathan Levin:

From page 133: In 64-bit mode, there is such a huge amount of memory available anyway that it makes sense to follow the model used in other operating systems, namely to map the kernel’s address space into each and every process. This is a departure from the traditional OS X model, which had the kernel in its own address space, but it makes for much faster user/kernel transition (by sharing CR3, the control register containing the page tables).

From page 266: Still, unlike Windows or Linux, OS X applications in 32-bit (Intel) used to enjoy a largely unfettered address space with virtually no kernel reservation — that is, the kernel had its own address space. Apple has conformed, however, and in 64-bit mode OS X behaves more like its monolithic peers: the kernel/user address spaces are shared, unless otherwise stated (by setting the -no-shared-cr3 boot argument on Intel architectures). The same holds true in iOS, wherein XNU currently reserves the top 2 GB of the 4 GB address space (prior to iOS version 4 the separation was 3 GB user/1 GB kernel).

On x86, cr3 is a pointer to the page table. (The page table is a mapping set up by the kernel; it maps virtual to physical addresses, or in some cases lets the kernel trap memory accesses.) Once you change it, memory access becomes temporarily slower, afaik, because the TLB (effectively the CPU's cache of the page table) is discarded. So changing it more frequently can be a bad thing.

On the second part, Macs boot with UEFI, and the boot process is configured via a small number of variables stored in writable firmware memory [1]. Apple provides a command-line tool, nvram(8) [2], which can either print the current contents of the variables (nvram -p), or request a change to one. Changes are queued and written out at the next clean shutdown or reboot.

[1] A brief description: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_In...

[2] https://developer.apple.com/library/mac/documentation/Darwin...

Intriguing. Thanks for sharing.

Doesn't Linux perform this "context switch at every syscall" ? How does it get away with the performance penalty?

No, Linux x86-64 doesn't change %cr3 on syscalls. It mitigates this kind of bug (kernel NULL pointer dereference) in a different way - by not allowing userspace processes to map memory at NULL.

Linux also supports the SMAP feature on modern Intel CPUs which allows the kernel to set things up so that all accesses to usermode memory from kernel mode must be explicitly annotated.

All operating systems with separate user and kernel modes have a privilege-level round-trip on every syscall (typically `sysenter`/`sysexit`, on older systems the classic `int $0x80`/`iret`). This is just a controlled jump that changes the privilege level, and is what is bypassed by vsyscall.

Non-shared-cr3 Macs (and IIRC some versions of PaX) also change `%cr3`, which means user-space and kernel-space have completely different address spaces (rather than a shared kernel-space and per-process user-space). This is much more expensive.

On Linux, if you have a look at /proc/<pid>/maps, you'll see a 'vsyscall' section mapped into every program. That section has code stubs for syscalls. Some simple ones like gettimeofday() (not sure there are many others) just return the current time, which is stored somewhere in that area. For other syscalls, the stubs use the best method to enter the kernel (sysenter vs. int 0x80) available on your specific processor.

There were only ever "vsyscall" entries for three syscalls: time, gettimeofday, and getcpu.

On recent kernels, the vsyscalls are actually the slowest way of all to ask for the time or the cpu number. They're only supported at all as a fallback, and the fallback is very slow, because it tries to mitigate exploit risks due to having code at a fixed address.


OSX should be doing the context switch also.

The added penalty is a switch to user mode to read userland data, then a switch back to kernel to continue on... it's just additional context switches for reading userland memory.

CR3 is the x86 register that points to the root page table. When an OS switches between processes, it generally changes CR3. On the other hand, when an OS switches from user mode to kernel mode, it usually leaves CR3 alone.

I know essentially nothing about Darwin, but "no shared CR3" presumably means that the kernel will switch CR3 to make user memory inaccessible when running in kernel mode. This is approximately what grsecurity's UDEREF feature does.

On Linux, on Broadwell or newer, there's a similar HW mitigation called SMAP. Darwin might use it, too.

Linux also doesn't allow unprivileged programs to map very low addresses, making NULL pointer dereferences much harder to exploit.

I did `sudo nvram boot-args="-no_shared_cr3"`, and I think I see the (seemingly minor, at this point) performance hit. There's an explanatory comment in http://opensource.apple.com/source/xnu/xnu-2782.1.97/osfmk/x... that seems to explain the feature, though not in detail.

What do you mean by "minor performance hit"? Could you provide some data?

Unfortunately this 'fix' seems to break VirtualBox.

Thanks for the tip, applied.

(1) Can I also ask how you found this? Were you fuzzing IOKit?

(2) I'm trying to work through your ROP. Can you explain a bit more? Thanks.

1) I cannot really discuss specifics, but this particular bug would have been hard to find via a traditional IOKit fuzz, since it requires an invalid 'task' port passed over to IOServiceOpen. Most fuzzers use mach_task_self for that, and fuzz method calls/traps/properties/etc.

2) When IOServiceRelease is called, vtable+0x20 is called. The vtable pointer is controlled; at +0x20 I place a stack pivot, which sets RSP = RAX and pops 3 times. At 0x20 I place a POP RAX;RET gadget to let the chain begin after 0x28. The payload then locates the credentials structure, sets UID to 0 by bzero()ing it, cleans up the memory corruption, decreases the task count for the current user and increases the task count for root. It then unlocks locks held by IOAudioEngine to prevent your audio from freezing up, and then returns to the userland context.

for the record: "At 0x20 I place a POP RAX;RET gadget" should be "At 0x18 I place a POP RAX;RET gadget".

Interesting. This prompted me to look at my Mac and it's running 10.10.3, I never got a prompt to update to 10.10.4 or 10.10.5, but when I open App Store it tells me there's an upgrade to 10.10.5. I guess Apple managed to break the automatic update mechanism in 10.10.3.

I wonder if this is related to the behavior where my iMac wakes up every minute starting every morning at 2AM. This is so obnoxious that I now turn my iMac off at night instead of putting it to sleep.

Do you happen to have Adobe software installed? Perhaps something from Creative Cloud. It usually checks for updates at 2am; that might be what's waking your Mac from sleep.

You will have to disable automatic updates for that not to occur.

Did you try turning off Power Nap? Might have broken for you.

I'm running 10.10.4, and it just crashed my Mac -- the "A problem has occurred" screen -- followed by a forced restart.

If I run echo '' | ./tpwn in a loop on 10.10.4, I get a kernel panic about 0.5% of the time, so you might have just gotten really unlucky.

Yep, same result here on 10.10.4 heh.

Interesting. I am on 10.10.4 myself, and that's the OS I tested it on.

tpwn has been tested from 10.9 to 10.10.5, but of course, your mileage may vary. the KASLR leak part is not 100% reliable, unlike the actual code execution which is. I'd be interested in panic logs to sort the issue out, if you could share.

You are not vulnerable since you have SMAP!

what's SMAP? can't find info anywhere. edit: ok, found, cpu security feature. so it's possible this issue is mitigated on newer CPUs?

well, this bug is a null pointer dereference. smap is like -no_shared_cr3, but without the performance loss.

Does anyone know if 10.9.5 is vulnerable?

Yes, it is.


Okay, this is really weird... after rooting, and pressing ^D or typing exit, I stay root

    ~/code/tpwn % id -u
    ~/code/tpwn % ./tpwn
    leaked kaslr slide, @ 0x0000000005600000
    sh-3.2# exit
    ~/code/tpwn # id -u
Edit: and it crashes iTerm2 after the last `id -u`. Managed to get a screenshot of what I'm talking about: http://i.imgur.com/foWgTBN.png

This does not happen for me.

  bash-3.2$ ./tpwn
  leaked kaslr slide, @ 0x000000000f800000
  sh-3.2# exit
  bash-3.2$ id -u

It replaces your user shell with a root shell. You're now "root" in that terminal.

And here I was pressing "update later tonight." Thanks for the heads up!

Unless you manually update to 10.11 you are still vulnerable

Does it work on 10.11 with "rootless" mode disabled?

I just tested on 10.11 with rootless being disabled, and it prints out "not vulnerable".

I assume that if it doesn't work on 10.11, then rootless being enabled or disabled shouldn't make a difference. You still have a root user either way, it's just that if rootless is enabled, then the root user wouldn't be able to modify certain system directories, which could mitigate the consequences of such an attack if it did work.

In the README it states the vulnerability is not present in 10.11.

At least 10.11 isn't vulnerable

For what it's worth I believe he also has a 0day for bypassing rootless. Check his Twitter.

What is his twitter account?

The author also has an interesting and relevant series of blog posts: http://blog.qwertyoruiop.com/?p=38

So for anyone who hasn't tried it but is wondering about it - it works on 10.10.4 and 10.10.5, running the tpwn binary does drop you to a root shell. Looks like a weakness in the address randomization in OS X

I haven't looked into this vulnerability, but how could it be a weakness in address randomization? Isn't address randomization supposed to be a mitigation, to make it more difficult to exploit other vulnerabilities?

There is no weakness in address randomization I relied on for exploitation.

It relies on two distinct bugs, an info-leak to obtain a pointer to an allocation in the kalloc.1024 zone and a memory corruption primitive (deriving from a NULL pointer dfr. in IOKit) allowing me to OR 0x10 anywhere in kernel memory.

To break kASLR I corrupt the size of a vm_map_copy struct, which allows me to read the adjacent allocation to the struct, which is a C++ object. First 8 bytes of said C++ object is a pointer to the vtable, which resides in __TEXT of some kernel extension. Since I can calculate the unslid address from userland without any issue, by subtracting what gets leaked with what gets calculated you get to know the kASLR slide.

Just to clarify: The code execution part has 100% reliability rate. The kASLR leaking part does have some chance in it, however empirical evidence indicates that the failure rate is extremely low.

So you can bail out cleanly and go around for another run if the infoleak fails, yeah? Or do bad things happen up in kernelspace?

If the heap info leak fails, I bail out cleanly. If the kASLR leak fails, it is usually because instead of hitting a vm_map_copy (the intended structure I need to corrupt), something completely unrelated is hit instead. If it happens to hit something different than expected, that's undefined behaviour usually ending up in a panic.

Mad props.

If OS X were s/xnu/minix 3-style, full microkernel/, exploiting IOKit as a least-priv'd process would only get IOKit pwned, limited to IOKit's ACLs. Still bad, but it likely wouldn't have rights to exec a root shell.

XNU kexts have way too much authority, and all the syscalls they each tack on compound the attack surface to the total codebase of all Apple and third-party kexts. Because once you've found and symbolicated the not-really-hidden call table, you're pretty much able to do whatever. And with a mutating mem kext bug ...

while I agree with you on the security benefits of a full microkernel, to be entirely honest, if you had access to just IOKit you could easily use a network card or a hard drive controller to get a physical memory write-what-where, which in turn would allow you to gain access to anything. Plus there are the microkernel performance issues of e.g. having to context switch on interrupts.

That things may be broken is no argument against defense-in-depth and least privilege. By having smaller codebases and smaller system components, the attack surface is far, far smaller than say Linux. IOKit is shit as is, and a full microkernel would break it up into processes based on areas of responsibility. Also, that shows the hardware needs better bus- and command-level security to prevent such attacks. (Don't even get me started on closed firmware blobs or unverifiability of commercial cores.)

Expecting a finished product of a new project would be unreasonable. Minix 3 is early on and not the only full microkernel out there. They will probably find an approach to reduce context switches if it's a mature optimization to make. An Android-like mobile/embedded platform would make a sensible research -> real use-case, minus Java.

What about 10.10.3?

> tpwn has been tested from 10.9 to 10.10.5, but of course, your mileage may vary.

10.10.3 was actually the first version it was tested on.

Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | Legal | Apply to YC | Contact