The DrK Attack: De-randomizing Kernel ASLR (github.com/sslab-gatech)
159 points by tsgates on Oct 26, 2016 | hide | past | favorite | 30 comments

Providing user control over page faults and using that for a security exploit reminds me of the classic UNIX tale of password checking. A version of UNIX had a privileged mechanism that would check a password (provided by pointer); it did so character-by-character. It also had a way for userspace processes to handle page faults themselves. So, put a password buffer across two pages, with the page boundary after the first character, and vary the first character until you get a page fault: the fault means the checker got past the first character onto the second page, so your guess for it was right. Repeat for each character of the password, turning an exponential search into a linear one...
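A toy simulation of the attack's logic, in case it helps. The "page fault" oracle is simulated here by having the checker report how many buffer positions it touched; a real attacker gets the same signal by straddling the guess across a page boundary. The secret, alphabet, and function names are all illustrative, not from any real system.

```python
import string

SECRET = "swordfish"               # illustrative secret
ALPHABET = string.ascii_lowercase

def check_password(guess):
    """Privileged char-by-char check; returns (ok, positions_touched).

    positions_touched stands in for the page-fault signal: if the
    boundary sits after index k, a fault fires iff the checker touches
    index k+1, i.e. iff everything up to index k matched.
    """
    touched = 0
    for i, s in enumerate(SECRET):
        touched += 1                          # the checker reads guess[i]
        if i >= len(guess) or guess[i] != s:
            return False, touched
    return True, touched

def crack():
    known = ""
    while True:
        for c in ALPHABET:
            # Pad the buffer past the "page boundary" at index len(known).
            ok, touched = check_password(known + c + "?" * len(SECRET))
            if ok:
                return known + c
            if touched > len(known) + 1:      # "fault": crossed the boundary
                known += c
                break
        else:
            return known                      # next char not in our alphabet
```

A full brute force would need up to 26^9 guesses here; the fault oracle cuts it to at most 26 * 9, one character at a time.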

The system mentioned is TENEX (not *nix) according to http://research.microsoft.com/en-us/um/people/blampson/33-Hi... (a great paper from '83 that was posted here recently; search for "password" to find the exact reference there)

Thanks for the correction. The version of the story I saw came from a systems programming book, which attributed it vaguely to some past UNIX system of yore. Glad to have a more accurate reference.

It's nice because it allows a slow and controlled timing attack, but you don't need a page fault for that: you can simply measure elapsed time with high-resolution timers, perf, or whatever you have on hand :-)
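Right, the same early-exit check leaks through pure timing, no fault handler needed. A toy sketch of that variant (the secret is illustrative, and the busy loop deliberately exaggerates per-character cost; a real check leaks far less per character, so real attacks need statistics over many more samples):

```python
import statistics
import time

SECRET = "hunter2"   # illustrative

def slow_check(guess):
    # Char-by-char check; the inner busy loop exaggerates the
    # per-character cost so the timing signal is visible in a toy demo.
    for g, s in zip(guess, SECRET):
        for _ in range(1000):
            pass
        if g != s:
            return False
    return len(guess) == len(SECRET)

def median_ns(guess, trials=200):
    # Median over many trials smooths out scheduler and timer noise.
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter_ns()
        slow_check(guess)
        samples.append(time.perf_counter_ns() - t0)
    return statistics.median(samples)

t_wrong  = median_ns("x" + "a" * 6)    # rejected at character 1
t_prefix = median_ns("hun" + "a" * 4)  # rejected at character 4
```

t_prefix reliably exceeds t_wrong, so extending the guess one character at a time, always keeping the slowest candidate, recovers the secret just like the page-fault version.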

True, a privileged process having data-dependent execution time for secure data can lead to a variety of side-channel attacks.

For more complex cases than password validation, it would help if it were easier to write side-channel-resistant algorithms in something other than assembly. Even a C compiler, let alone something higher-level, can introduce side-channel leaks through optimization.
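For the simple comparison case, the standard trick is to accumulate a difference instead of exiting early. A minimal sketch in Python terms (in real code you'd use the stdlib's hmac.compare_digest, which implements this in C):

```python
def naive_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time reveals how many leading bytes matched.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Accumulate all differences with XOR/OR; for equal-length inputs
    # the loop always runs to the end, so timing doesn't depend on
    # where (or whether) the inputs differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

And as the parent says, writing the same pattern in C doesn't end the story: an optimizer is free to reintroduce the early exit, so you end up inspecting the generated code or reaching for compiler barriers.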

That wasn't UNIX. It was TOPS-20 on the DEC 36-bit machines.

So yet another KASLR bypass.

Reminds me of[1]:

> Consider this our "I told you so" that we hope you'll remember in the coming years as KASLR is "broken" time and again. Then again, in this offensive-driven industry, that's where the money is, isn't it?

[1] https://forums.grsecurity.net/viewtopic.php?f=7&t=3367&sid=e...

That article is correct. Address-space randomization wasn't a fix. It was a way to avoid fixing problems, by claiming that the ability of attackers to execute hostile code in some other process's address space wasn't a problem.

The problem is giant kernels that change constantly. It doesn't have to be that way. Look at L4 and QNX.

With sufficient code bloat, all bugs are deep.

"With sufficient code bloat, all bugs are deep."

That's an interesting statement. Might have to think on it. While we're on secure kernels, did you ever have any experience with the ASOS system, or hear about it from someone who did? It was the only Ada OS aimed at the A1 class that actually got delivered, as far as I can tell from the papers. Then the info disappears into a black hole. It would be interesting to find out how well it worked or didn't, as putting a modern version of that in a container or on top of seL4 might be worth trying.

Note: MaRTE OS is an Ada-based, open-source RTOS still in development and use. Not security-focused, though. Muen is just a separation kernel. A secure one could draw on them a bit.

Not ASOS, no. I worked on KSOS-11, an attempt to cram a secure OS written in Modula into a PDP-11. It ran, but the 64K address space was too much of a limitation.

As for "with sufficient code bloat, all bugs are deep," that's my answer to Linus's Law, "given enough eyeballs, all bugs are shallow". Time has proven that wrong: the number of bugs in the Linux kernel keeps increasing as the kernel becomes more bloated. 16 million lines of code and still growing![1]

This is why microkernels are the way to go. Good microkernels run about 10,000 lines of code. It's possible with 10,000 lines of code to have a steadily decreasing number of bugs.

We now have a situation where neither Linux nor Windows is securable. We're paying the price for that.

[1] https://www.linuxcounter.net/statistics/kernel

"Not ASOS, no. I worked on KSOS-11"

Darn. Oh well. I'm aware of KSOS as we discussed it before. I appreciated the perspective on it.

"This is why microkernels are the way to go."

I've been thinking more about the alternative where you do monolithic ones decomposed internally like microkernels with medium-assurance techniques for development. Basically done like this without formal proofs:


Might be a better model as CPUs with more efficient isolation mechanisms come online. I'm recommending micro- and separation kernels with mediated middleware in the interim, since they're proven better than traditional monoliths. The newer style of development might make new monoliths far better than before, though.

You should take a look at some of the rump kernel/anykernel work in NetBSD, or projects like the Linux Kernel Library (LKL). They rearchitect the kernel to make it more modular.

From what I recall, the rump kernels in NetBSD specifically avoided modifying the kernel, instead providing an easy way to get at (or test) the drivers. I still have Antti's thesis & links saved for when I have time to dig into the concept more. LKL is interesting. A truly modular OS that's monolithic would let you do something like eCos does below under "Configurability":


Each unnecessary module can be automatically stripped from the system to make the OS specific to your needs. The "Just Enough OS" projects for virtualization were aiming at similar things with Linux, but they kept the kernel, as far as I'm aware. Ideally, you'd be able to select just the APIs, drivers, etc. you need, with the rest stripped out of the generated source before compile. Similarly for userland.

Most of those 16 million lines are in drivers though, not the core kernel. (Which if I recall correctly is only about 200k lines. A far cry from 10k, but also a far cry from 16m.)

The question is how much code is inside the kernel protection boundary. All that code can potentially be exploited and can crash the kernel.

I'll add: basically anything they can call in any way. That is, (a) code that's directly in the kernel, (b) code loaded through some command they shouldn't be able to access, or even (c) some abnormal mode of operation they cause that runs more kernel code than usual. I'll supplement (c) with running obscure kernel functions that get less attention in security review than the main ones. Quite a few major vulnerabilities come from the parts people forget about or only use in weird contexts.

"So yet another KASLR bypass."

Is KASLR broken, or is TSX broken? Seems to me that TSX is broken, yet again. For example, substituting Rowhammer for TSX and L4 for Linux, you could then say that L4 is broken because of Rowhammer, when what's really broken is the DRAM.


"With sufficient code bloat, all bugs are deep."

You might call it bloat, but I'd classify ASLR as Defense in Depth.

It's what I kept telling the OpenBSD and grsecurity people. These tactical defenses that ignore root causes usually get bypassed over time. Maybe a pile of them will stop attackers. Maybe attackers just don't care due to market share. In any case, it's best to either address the root causes or make mechanisms so strong you can almost guarantee they'll contain problems.

To me it seems like every time Intel tries to create a security safeguard, it almost always ends up being a new attack vector instead (see "x86 considered harmful").

I'd love to run simpler versions of the modern Intel CPUs, stripped of all this insecure bloat.

Surely I can't be the only one?

(Mill CPU team)

I'm putting together a security white paper at the moment. We've been quiet because we've been really busy, and we have more new stuff to talk about if only we had time to write it up for the public, so watch this space :)

We can't even get newer x86/64 boards without closed binary blobs everywhere. Look at Librem. I think the last truly open laptop in the x86 world was the ThinkPad X200.

All the newer open-source laptops are ARM-based, which is just going to mean a lot of fragmentation until things like the ARM config/device-tree stuff catch up to EFI.

There's potentially room for a cloud or datacenter company to pull it off by paying Intel to do it via the semi-custom business they have. Or AMD, who started that model in x86. Just the high-performance hardware and microcode, without any other crap, with everything else in the trust boundary offloaded to coprocessors we can build and verify. We could also add in some real security like in the Watchdog or CHERI CPUs while we're at it. CHERI already runs FreeBSD, so we'd have that immediately if we ported their toolchain to x86.

What has TSX got to do with security? It's for efficient transactional memory.


There are a few "open" hardware/firmware efforts going on now.

I attended a talk on DrK by Yeongjin a few weeks back at Georgia Tech. Keep up the awesome work guys, and welcome to the front page of HN ;)

(This popped up on proggit the other day, but got deleted for some reason: https://www.reddit.com/r/programming/comments/58fpi6/aslr_pr... )

For a system as complex and intricate as a modern processor, it seems impossible to prevent a userspace application from figuring out at least some basic information about the kernel's state. It would be better to focus on preventing actual privilege escalations.

I would have loved for this kind of research to be my job. Should have done better in my classes :(

It's never too late to set a goal. If you're looking for more knowledge, this [1] is a great place to start, as well as just watching interesting DefCon talks on YouTube. :-)

[1] https://ocw.mit.edu/courses/electrical-engineering-and-compu...

Scary :(
