An X86 Design Flaw Allowing Universal Privilege Escalation (blackhat.com)
167 points by thomnottom on June 5, 2015 | 66 comments



"...to demonstrate how to jump malicious code from the paltry ring 0 into the deepest, darkest realms of the processor."

So you already have to be in ring 0. Pretty clickbaity title.


VT-x allows guest operating systems to run in Ring 0. If this exploit works then a guest OS can exploit the hypervisor.


Maybe. VT hypervisors still mediate stuff done in VM kernels. The attack could still work on native OS kernels and still not be effective on VMs.


Good call, didn't think about VT-x.


"Privilege Escalation" implies you start from an unprivileged state does it not? That sentence might just be poorly structured.


It means you go from some level of privilege (which could be none at all) to more privilege. For instance, "a WordPress author can get a shell on the server" would be a privilege escalation, even though being able to write blog posts is quite privileged on its own.

That said, I'm not particularly convinced that (unvirtualized) ring 0 to SMM is really a violation of a security policy... it's not like SMM really tries to confine what ring 0 can do, it just wants things like interrupt priority.


Since SMM allows for all kinds of things that ring 0 can't do (one example: writing to flash even if write protections are enabled, unless a second set of write protections, added in later chip generations, is _also_ enabled), arbitrary code execution in SMM is a bad thing.

Also, ring 0 code can't later remove code installed into SMM earlier (which may still be triggered through events or timers).
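
To make the flash example concrete: on many Intel PCHs the relevant write-protect state lives in a BIOS control register (BIOS_CNTL). A minimal decode sketch, assuming the bit layout publicly documented for recent Intel chipsets; the offsets vary by generation, so treat them as assumptions:

    # Sketch: decode a BIOS_CNTL-style byte into the write-protect bits the
    # parent comment alludes to. The bit positions below are assumed from
    # public Intel PCH documentation and differ between chipset generations.
    def decode_bios_cntl(value: int) -> dict:
        return {
            # BIOS Write Enable: must be set before flash writes are accepted.
            "BIOSWE": bool(value & 0x01),
            # BIOS Lock Enable: once set, attempts to set BIOSWE raise an SMI,
            # letting SMM code veto the write (unless the attacker is already SMM).
            "BLE": bool(value & 0x02),
            # SMM BIOS Write Protect: the "second set of write protections"
            # added in later generations; writes are allowed only from SMM.
            "SMM_BWP": bool(value & 0x20),
        }

    if __name__ == "__main__":
        print(decode_bios_cntl(0x02))   # BLE set, SMM_BWP clear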


If you go from having your own ring 0 (as a guest in a hypervisor) to messing around with everyone's ring 0, it's very much a privilege escalation.


Not intended as click bait. I saw it on twitter and was wondering just how serious a flaw people thought this could be. Or if the presenter might just be blowing smoke.


Well, it still says "universal", but in fact it's only universal if you are root/can insmod/etc.

It reminds me of the unlimited data plans that are limited after 5 GB.

Not sure what happened to language :)


From a security perspective, this:

"universal if you are root/can insmod/etc."

.. only needs to be true once, and from that point on the hardware no longer belongs to the owner.

So, if you could for example get your secretNSA$hit installed on the Linux box that is used to test hardware at the PC assembly/manufacturing plant, before it's sent off to be 'securely configured' by the sysadmin/ops as supposedly fresh equipment.

Lots of ways that can happen, of course it's theatrical to consider it, but little in security these days is without drama, it seems.

Truly, not being able to trust the microcode in my CPU is a worry, but it always has been. There are no guarantees that there aren't already CPU embeds that are configured to ship data to some quantum-bearing government spy satellite, and thus we're all fools for thinking we have any kind of security on this theatrical stage at all ..


Even the non-microcode parts you are forced to trust, but then again the same goes for the rest of the hardware, not just the CPU.


I read it as meaning "escalating to give you universal privilege".


Intel CPUs support rings 0-3. But...

http://en.wikipedia.org/wiki/Protection_ring

Multics supported 8+ rings, OpenVMS 3 rings, OS/2 3 rings, Windows NT and Unix only 2 rings. Some hypervisors, like Xen, AFAIK use ring 1 (Intel VT-x, "Vanderpool").

We need better ring support in modern operating systems and hypervisors (VM hosts).

"The x86-processors have four different modes divided into four different rings. Programs that run in Ring 0 can do anything with the system, and code that runs in Ring 3 should be able to fail at any time without impact to the rest of the computer system. Ring 1 and Ring 2 are rarely used, but could be configured with different levels of access."


Actually we should abandon the ring concept altogether for more promising security models.

https://www.destroyallsoftware.com/talks/the-birth-and-death...


The suggested solution is to have process isolation implemented in software - namely asm.js-enabled JavaScript virtual machines embedded in a Linux kernel - which saves you from needing hardware isolation, reducing overhead. Gary calls this idea "METAL".

I found few resources about the project, but there is a discussion on Reddit: http://www.reddit.com/r/compsci/comments/25w7vt/javascript_b...

And we had the discussion on HN too: https://news.ycombinator.com/item?id=7605687

Nevertheless an interesting topic, one that doesn't deserve the downvote my parent got.


Actually hypervisors (VT-x) run in Ring -1: http://en.wikipedia.org/wiki/Protection_ring#Hypervisor_mode


That's actually not really how it works[0].

It's more that a ring 0 can manipulate an alternate context, but only for contexts that it can create. So guests can themselves be hypervisors for contexts that they themselves create. There's nothing stopping a guest from executing virtualization instructions[1].

That being said there is a sort of ring -1 called SMM mode.

[0] And this is why you shouldn't rely on wiki. Particularly since it looks like the deletionists are winning their war. Its source is InformationWeek in this case...

[1] Well, that's not totally true. When setting up the context for a guest, you can signal that you, as the hypervisor, want to get trapped on lots of different types of privileged operations. I.e. you can optionally, explicitly disallow the guest from using the virtualization extensions. But it's opt-in, not policy set by AMD/Intel.
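
A quick way to see that last point from inside a guest: if the hypervisor chose not to expose the virtualization extensions, the corresponding CPU feature flag simply doesn't show up. A minimal sketch, assuming a Linux guest where /proc/cpuinfo reflects CPUID as presented by the hypervisor:

    # Sketch: report whether this (possibly virtualized) Linux machine exposes
    # hardware virtualization extensions. Inside a guest, a missing flag
    # usually means the hypervisor chose not to pass the feature through.
    def virtualization_flags(path: str = "/proc/cpuinfo") -> set:
        flags = set()
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
                    break
        # "vmx" is Intel VT-x, "svm" is AMD-V.
        return flags & {"vmx", "svm"}

    if __name__ == "__main__":
        found = virtualization_flags()
        print("virtualization extensions exposed:", found or "none")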


For better or worse, AMD dropped rings in x86-64.


> So you already have to be in ring 0.

Yes, but the article implies that you can e.g. do bad things in a bootloader and then have the OS run while the exploit stays resident and undetected by the OS.

This definitely has previously undocumented exploit potential if you can force the target to e.g. boot from a malicious USB device.


>if you can force the target to e.g. boot from a malicious USB device.

If you can control the boot media, aren't you already past the point where you need further exploits to control the machine?


Yes - but that's a bit beside the point. The situation is a lot worse if you can remain undetected after tampering with the boot media or firmware. This can be a lot more damaging than a ring-0 exploit.


I remember a post a few years ago about running code by triggering page faults in the MMU. Basically they created a one-instruction computer (which can be Turing complete) out of the MMU, hence allowing processing to occur in the MMU that is invisible to the CPU and everything else. I wonder if it's related.

http://en.wikipedia.org/wiki/One_instruction_set_computer
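
For anyone unfamiliar with the one-instruction idea, the classic example is SUBLEQ ("subtract and branch if less than or equal to zero"), which is Turing complete all by itself. A toy interpreter as a generic illustration of the concept, not the MMU/page-fault machine discussed above:

    # Toy SUBLEQ interpreter: the machine's single instruction is
    # "mem[b] -= mem[a]; if mem[b] <= 0: jump to c", yet it is Turing complete.
    # Purely illustrative; the trapcc work builds a comparable machine out of
    # x86 fault handling rather than out of software like this.
    def run_subleq(mem, pc=0, max_steps=10_000):
        steps = 0
        while 0 <= pc < len(mem) and steps < max_steps:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
            steps += 1
        return mem

    if __name__ == "__main__":
        # One instruction at address 0: clear cell 9 (7 - 7 = 0), then halt
        # by branching to -1, which is outside the program.
        program = [9, 9, -1, 0, 0, 0, 0, 0, 0, 7]
        print(run_subleq(program))  # cell 9 ends up 0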


(author here) https://github.com/jbangert/trapcc

We did get a few hypervisor crashes and the Intel architecture has all sorts of subtle behaviour that is often not modelled properly. It would be good to see someone build on my work.


The worst case is a VM escape for hardware-accelerated virtualization (VT-x/AMD-V). So does this exploit work under virtualized ring 0? That would be a disaster for many cloud providers and for virtualization in general.

Maybe this is something about a controlled change of execution flow in SMM?


My guess is that if you have a universal hardware hypervisor escape, you write the abstract for that talk much differently.


Earlier work from the ITL/Qubes team, including SMM attacks: http://invisiblethingslab.com/itl/Resources.html


I wonder if this is SMM or something even darker and deeper.


SMM was my first thought too, although "gone unnoticed for 20 years" (1995) suggests something newer; SMM was introduced in 1990 with the 386SL, and this suggests P6-level.

Microcode seems about right - microcode updates were introduced with the P6. But they're also signed (at least Intel ucode is - not sure about AMD) according to previous research... either way, this is going to be interesting.


If it is microcode, it wouldn't be a complete disaster. Microcode updates are volatile and can ship with the BIOS. Even when updated by Linux, it happens early, out of the initrd.
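
For anyone curious what their own machine loaded: on Linux the currently applied revision shows up in /proc/cpuinfo. A small sketch, assuming the usual "microcode" field emitted by x86 kernels:

    # Sketch: report the microcode revision currently loaded on each logical
    # CPU, as exposed by the Linux kernel. Since updates are volatile, this is
    # whatever the BIOS and/or the early initrd loader applied on this boot.
    from collections import Counter

    def microcode_revisions(path: str = "/proc/cpuinfo") -> Counter:
        revisions = Counter()
        with open(path) as f:
            for line in f:
                if line.startswith("microcode"):
                    revisions[line.split(":", 1)[1].strip()] += 1
        return revisions

    if __name__ == "__main__":
        for rev, cpus in microcode_revisions().items():
            print(f"revision {rev} on {cpus} logical CPU(s)")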


Signed how, though? A crypto scheme that Intel came up with 20 years ago is probably not secure, and if they haven't updated it since then, well...


http://inertiawar.com/microcode/ suggests a variant of SHA1 or SHA2 with 2048-bit RSA. No doubt it's changed since the P6, probably with different private keys for each CPU model/family, but the public key must be present in the hardware in order to verify, so theoretically it could be extracted...

Edit: public key. There might be a test mode which bypasses this or something.


No, the private key doesn't have to be present. They sign with the private key, the CPU verifies with the public key.
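
That asymmetry in miniature: signing uses the private exponent, verification only the public values, so the silicon never has to hold a secret. A toy textbook-RSA sketch with tiny made-up numbers (no padding, no real signature scheme, purely to show the shape of the check):

    # Textbook RSA with toy parameters, only to illustrate why a CPU can verify
    # a microcode signature while holding nothing secret: signing needs d (kept
    # by the vendor), verification needs only (n, e) baked into the hardware.
    # Real schemes use 2048-bit moduli plus proper hashing and padding.
    import hashlib

    n, e, d = 3233, 17, 2753   # toy key: n = 61 * 53, d = e^-1 mod phi(n)

    def toy_hash(blob: bytes) -> int:
        return int.from_bytes(hashlib.sha256(blob).digest(), "big") % n

    def sign(blob: bytes) -> int:              # vendor side: private exponent d
        return pow(toy_hash(blob), d, n)

    def verify(blob: bytes, sig: int) -> bool: # "CPU" side: only n and e needed
        return pow(sig, e, n) == toy_hash(blob)

    if __name__ == "__main__":
        update = b"fake microcode blob"
        sig = sign(update)
        print(verify(update, sig))             # True
        print(verify(b"tampered blob", sig))   # False (barring a toy-modulus collision)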


"directed against a uniquely vulnerable string of code running on every single system"

That seems to fit with the idea of microcode, but it's pretty vague. What else would be running on every system from the P6?


I think you're right - SMM it is. We know the NSA has a bunch of SMM rootkits; I bet someone found the vector the NSA uses.


"Memory sinkhole" seems be a pun on the "memory hole", which suggests it might be MMU related?


If you read "but 40 years of x86 evolution have left a labyrinth of forgotten backdoors into the ultra-privileged modes", then I'm more inclined to agree, though I'm also wondering about the V86 monitor after looking at http://www.scs.stanford.edu/05au-cs240c/lab/i386/s15_03.htm


"deepest, darkest realms of the processor..." is it the ME? https://recon.cx/2014/slides/Recon%202014%20Skochinsky.pdf


ME is in the chipset and it's much newer than 20 years.


It's kind of scary that there's a micro-JVM on my chipset that allows new code to be loaded in to such a privileged system dynamically.


Makes me think of this guy who managed to install Linux on his hard drive's controller: https://spritesmods.com/?art=hddhack&page=7



Is this about the IA-32 debugger? There's a Phrack article that describes something similar: http://phrack.org/issues/65/8.html


Strangely, when I clicked the link my MacBook Air kernel-panicked. This is the first time it's happened in maybe 2 years... I thought the website was actually using an exploit to kill every PC's kernel.

After a reboot all went fine, though.


This sounds very significant to me.

Basically, this vulnerability affects multi-tenant systems: if you have a VM, you can exploit this vulnerability to take control of somebody else's VM.

This type of vulnerability, however rare, could be an argument for running systems with different security classifications on different physical hardware (different physical virtualisation clusters).


Wow. OK, so given that this is a hardware flaw and not a software flaw, is this kind of disclosure reasonable? In the case of software vulnerabilities, researchers typically inform vendors and allow them time to distribute a patch. Is there anything analogous here for hardware? If yes, how does Intel (or any hardware vendor) get ahead of this by distributing a fix?


> Ok so given that this is a hardware flaw and not a software flaw is this kind of disclosure reasonable?

Yes. Always, always yes. Even if it's unfixable, ignorance of there even being a problem is not better.

> Is there anything analogous here for hardware?

Yes, actually. BIOS updates can include microcode updates and workarounds for hardware issues. Similarly if it takes a specific pattern to happen (which this almost certainly does as that's how these things work), that's going to be trivial for everyone's antivirus scanners to catch.


If it's an unfixable problem then there's an obligation to disclose. Else only the 'bad guys' know.


But doesn't this assume that the 'bad guys' have knowledge of the exploit? This feels like a faulty assumption. However, after disclosure the bad guys will definitely have knowledge.

Also after disclosure what is the right course of action? We certainly can't require the replacement of all vulnerable hardware.


Let's break it down for examination. Before disclosure, the bad guys may know and the good guys certainly do not know. After disclosure, both bad guys know and good guys know. Good guys who know can take protective steps.

Who gains the most?


What is more privileged than ring 0? Microcode?


A hypervisor runs below ring 0 and system management mode runs below that.


... sort of.

"Ring 0" is a historical abstraction from the 80286 protected mode model. There was a two bit field associated with segment and gate descriptors that enforced privilege separation, so you couldn't load segment registers with data at a higher priviledge level, and were disallowed from making traps into higher levels except as specifically allowed (we call those "syscalls" today).

None of this stuff is used anymore. We have the kernel in ring zero and we have everything else.

A hypervisor is abstracting the whole CPU, so the guests have their own rings, etc... SMM is likewise outside the ring model.

And of course we have all sorts of other privilege abstractions in modern hardware: IOMMUs exist for this purpose of course (though with a different threat model), as does memory mapping handled by microcontrollers on the fabric of modern SoCs. The NX bit doesn't fit into "rings" but is clearly related technology, etc...

Basically we need to stop talking about 286 protected mode except when that's really what we mean. Frankly I have no idea what this attack means by "ring 0", but I'm guessing like everyone else this is an exploit in SMM code.


Rings are an extremely convenient abstraction when it comes to talking about operations that trap (so, basically, the whole foundation that security is built on). It's perfectly reasonable to keep talking about it even if the origins are no longer used (or usable at all, in long mode). It even makes sense with the NX bit, thanks to the control register that lets ring 0 bypass write protection.

To be clear, the only thing I'm disagreeing with here is your last paragraph.


> None of this stuff is used anymore. We have the kernel in ring zero and we have everything else.

Not using two of the four rings does not really mean nothing is used anymore. The reason only two are commonly used is probably to a large extent due to portability to processors with only two rings, and maybe also architectural simplicity.


Even x86 barely supports rings 1 and 2. The modern (386+) paging system only recognizes two privilege levels, and the fast privilege change instructions (SYSCALL, SYSRET, etc) are only useful when switching between rings (really "CPL") 0 and 3.

If you're programming a 286, then you can go whole hog with 4 rings.


In essence, the original concept of a fixed number of rings that not only can, but have to, be distinguished by the code running in them is why there have to be hackish more-privileged-than-ring-0 modes like SMM.

Probably the architecturally cleanest solution involves having only two modes: in one, privileged operations affect the hardware directly; in the other, all such operations trap in a way that can be emulated. An interesting variation on this concept is having only a simple trap and interrupt dispatcher running in the privileged mode, with the OS kernel run in user mode as seen by the hardware (Alpha does essentially this, but the privileged code depends both on the hardware and the operating system, so you don't get the benefits of easy virtualization).
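
A toy rendering of that "everything privileged traps and gets emulated" split, on a made-up two-instruction machine (nothing here corresponds to real PALcode or to any actual ISA):

    # Toy trap-and-emulate monitor: the "guest" runs deprivileged, and every
    # privileged operation raises a trap that a tiny monitor services on its
    # behalf. Entirely hypothetical machine, just to show the dispatch shape.
    class PrivilegedTrap(Exception):
        def __init__(self, op, arg):
            self.op, self.arg = op, arg

    def guest_execute(instr):
        op, arg = instr
        if op == "add":                    # unprivileged: runs directly
            return arg + 1
        raise PrivilegedTrap(op, arg)      # anything else traps to the monitor

    def monitor_run(program):
        device_log = []
        for instr in program:
            try:
                guest_execute(instr)
            except PrivilegedTrap as trap:
                # Emulate the privileged operation instead of letting the
                # guest touch "hardware" state itself.
                if trap.op == "out":
                    device_log.append(trap.arg)
                elif trap.op == "halt":
                    break
        return device_log

    if __name__ == "__main__":
        print(monitor_run([("add", 1), ("out", "hello"), ("halt", None)]))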


Sure. My point was more that the overwhelming majority of the privilege separation implemented in modern SoCs has absolutely nothing to do with a 33-year-old "ring" model, and we should stop using that term. It hurts more than it helps.


Where can a person read about the various modes and levels of abstraction without delving into x86 whitepapers and spec sheets?


The Intel Software Developer's Manual is surprisingly readable. Look for "CPL" (which stands for Current Privilege Level).


SMM mode


Am I missing something here? It's just a couple of sentences of some BS about negative rings.


It's an exploit that'll be presented at the upcoming Blackhat conference.


Thanks, it makes much more sense now.


So guys, did I understand correctly? They found a flaw in x86 so that a guest can exploit the host? (Maybe more restricted than what I said, but the main idea is correct?)


Regardless of the content, white text on black background is VERY BAD for reading. Please make this text more readable.



