ASLR on the Line: Practical Cache Attacks on the MMU [pdf] (vu.nl)
120 points by panic on Feb 15, 2017 | 48 comments



The paper is extremely well written and their attack is very elegant, so all due credit there. However, I've never understood why ASLR was taken particularly seriously in the first place. It's always been obvious to anyone familiar with modern CPU architectures that strong ASLR is fundamentally impossible, and a vast number of attacks have demonstrated that over time.

ASLR is a mitigation measure meant to considerably increase the effort an attacker has to exert, at a relatively small performance cost, nothing more. It does that quite well. Personally, I'm just happy it's easy to disable for performance reasons, and I hope there won't be any attempt to mitigate this attack at the hardware level on general-purpose CPUs.


Two responses, one specific and one general.

1. ASLR changed the way exploits are written, not just by requiring new techniques but (often) by requiring attackers to find additional bugs to chain to make exploits work. In the history of memory corruption countermeasures leading up to ASLR, nothing else did that. Arguably, even NX didn't; NX also drastically changes exploits, but not in ways that require you to search for and stockpile memory disclosure bugs.

2. Durably increasing attacker cost has value. So: obscuring the source code for a target isn't a meaningful countermeasure; once you've reversed a target, it stays reversed. But requiring vulnerability chains for reliable mass exploitation (or even just requiring lots of engineering for each bug to obtain reliable exploitation) does make a difference. There's a reason why Chrome and iOS bugs are so expensive now.


I agree, on both counts. For some applications (web browsers top that list nowadays) ASLR has a lot of demonstrable value as a mitigation measure. I could easily imagine security-critical systems where even proper strong ASLR (with randomized page tables or even fundamentally different caching and MMUs) would make sense.

However, it's always important to keep things in perspective. Hardware solutions to this attack would impose major performance penalties across the board, so I hope to never see them in mainstream CPUs. Software solutions to make the attack even harder to pull off might make sense for specific applications, but any ASLR implementation (either OS-wide or at the compiler level) needs to be easy to turn off.


> I'm just happy it's easy to disable for performance reasons

If you're disabling ASLR over this post, please don't. Especially for performance reasons - ASLR is very very cheap, and the impact on performance tends to be in places you won't care much about.

The paper is fine, they did cool work. More importantly, ASLR is totally valid still - their attack scenario is an attacker who already can execute arbitrary code on your system, even if it's in a sandbox, which is NOT the scenario ASLR was designed to defend against.

Nothing about this attack is going to impact the effectiveness of ASLR in the cases it was designed for.


I'm definitely not disabling ASLR over this post, that would be utterly silly. I've been disabling ASLR for many years in the context of specific performance-sensitive applications (mostly computational science stuff) in environments where the security concerns aren't relevant. ASLR does have a very real, measurable performance cost, but you're right in saying that it's fairly small and in most (though not all) cases you won't notice it.

I am, however, very adamant that it should remain easy to disable. Any security-performance compromise must remain under developer control, because there is no globally optimal decision on when to make which tradeoff.


Developers generally can't make informed decisions about security. It should remain in developer control, I guess, but most developers have no business disabling it.


That's why it's fine to have ASLR on by default. Most developers don't even seem to be familiar with how ASLR works, and very few actively disable it in practice. Yet for some of us, the ability to do so is quite indispensable.


If you can get popped through your web browser, what's the point of taking any ASLR penalty on a desktop system?


I don't understand what you're getting at. One program is vulnerable, therefore make all programs vulnerable?


The most commonly used program with the easiest attack vector, yes. If they can get in through your browser it doesn't matter how secure the other processes were.


That makes no sense.


> their attack scenario is an attacker who already can execute arbitrary code on your system, even if it's in a sandbox, which is NOT the scenario ASLR was designed to defend against. Nothing about this attack is going to impact the effectiveness of ASLR in the cases it was designed for.

This is not the case.


What? Which part is not the case? This attack does require executing arbitrary JS on the victim system. This attack is not viable if you're exploiting some RCE against a C program that doesn't evaluate code.


Browser engines are common venues for ROP attacks. ASLR is absolutely intended to provide protection in that context.


ASLR is a mitigation for ROP given certain constraints; the ability to arbitrarily execute code and allocate memory breaks those constraints. This is true of any mitigation technique: there are capabilities an attacker can have that will make it less relevant or irrelevant.

ASLR is still totally viable for defending against an attacker who cannot already execute arbitrary code. This article makes it sound like ASLR has been completely defeated, but it's only defeated in a context where the attacker has significant control - even if that's a likely scenario.


You've contradicted yourself. You initially said:

> Nothing about this attack is going to impact the effectiveness of ASLR in the cases it was designed for.

Then you said:

> This article makes it sound like ASLR has been completely defeated, but it's only defeated in a context where the attacker has significant control - even if that's a likely scenario.

That is indeed a highly likely scenario, so it's simply wrong to say "nothing about this attack is going to impact the effectiveness of ASLR".


Not really seeing the contradiction. ASLR is still effective where it was always intended to be effective. ASLR is ineffective given an attacker with more control than ASLR was designed to be useful against.


ASLR is effectively security by obscurity (albeit a quite good application thereof).


No, it's not. Security by obscurity refers to the cost increase you can inflict on an attacker by depriving them of knowledge of the design or implementation of a system. By the definition you're using, any cryptographic feature would count as "security by obscurity".


ASLR makes it hard to know where code is located in the address space. That seems like obscurity, doesn't it?


The term is somewhat more restricted than its literal meaning would imply. "Security through obscurity" specifically refers to obscurity in the design or implementation of a system. Obscuring explicitly secret values used by the system is not "security through obscurity."

Consider: keeping crypto keys secret is a fundamental requirement of basically every secure system. That means they're "obscured," but it's not security through obscurity.

When it comes to ASLR, the locations of code within the address space are in pretty much the same role as crypto keys. The problem isn't that it's security through obscurity, it's that there are ways to leak the contents of those "keys," and they aren't very large.
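To put rough numbers on "not very large": a back-of-the-envelope sketch of my own, assuming something like the ~28 bits of mmap randomization that 64-bit Linux uses by default (other platforms and memory regions get less).

    // Back-of-the-envelope only; the 28-bit figure is an assumption, not a
    // number from the paper.
    const entropyBits = 28;
    const candidateBases = 2 ** entropyBits;     // about 2.7e8 possible base addresses
    const expectedGuesses = candidateBases / 2;  // about 1.3e8 blind guesses on average
    console.log({ candidateBases, expectedGuesses });

Compare that with a 128-bit crypto key, and it's clear why this "key" is small enough to leak or even brute-force in some settings.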


The same way crypto obscures your text.


"security through obscurity" refers only to violations of Kerckhoffs's principle.

https://en.wikipedia.org/wiki/Kerckhoffs's_principle


Entropy isn't obscurity.


I like this clarification the best.


No, the 'algorithm' for ASLR is public. All that an attacker is missing is the seed that determines how it is instantiated.

ASLR doesn't violate Kerckhoffs' principle.


Very cool from a technical perspective, but sad for web security. Takeaway for users:

"Q: How can I protect myself as a user against the AnC attack?

A: You unfortunately cannot as AnC exploits the fundamental properties of your processor. You can however stop untrusted JavaScript code from being executed on your browser using a plugin such as NoScript."

Here's the source code (not linked in the paper): https://github.com/vusec/revanc


This might cause a lot of infections around the world on systems without other code-execution prevention mechanisms...

Older versions of Android don't have much protection against Stagefright or other such attacks if ASLR is circumvented. The same can apply to older versions of Windows (7 or earlier) without EMET installed.

It took the most popular desktop Linux operating systems (like Ubuntu) a very long time before they even shipped ASLR. I hope further defense in depth will be implemented now that ASLR has been shown to be weakened.


Some quotes from the summary article which is being discussed here: https://news.ycombinator.com/item?id=13648122

In this project, we show that the limitations of ASLR are fundamental to how modern processors manage memory and build an attack that can fully derandomize ASLR from JavaScript without relying on any software feature.

...

Recently, browser vendors have broken the precise JavaScript timer, performance.now(), in order to thwart cache attacks. We built two new timers that bypass this mitigation in order to make the AnC attack work.

...

Some processor vendors agreed with our findings that ASLR is no longer a viable security defense at least for the browsers.


> at least for the browsers

Indeed, this seems to be key here: the browser is a fairly large sandbox to play in. For "traditional" server-side daemons, ASLR is certainly a worthwhile layer in the security of a system.


This is a true tour-de-force in computer security.

The way they exploit the MMU page table walk is just exquisite.


There are lots of things browsers can do to mitigate this, but it requires a change from the simplistic worldview of trusting a random website's JavaScript and assuming a software emulator won't reveal information about the host. If this paper finally prompts more concern about side-channel attacks, that would be great.

Things they could do include:

- running the JavaScript engine on a specific core and evicting other processes from that core, which is a common technique in real-time systems (pinning and isolating);

- hardware cache partitioning, then locking PTEs and TLBs for that core (e.g. Intel CAT);

- separating websites by levels of trust, and running less-trusted code more conservatively. Time monotonicity can be preserved without exposing high-resolution linearity (see the sketch after this list).

(Intel CAT support hasn't made it into the mainline kernel yet, though it's close: https://github.com/01org/intel-cmt-cat )
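The sketch referenced above: my own illustration of a coarsened but still monotonic clock, not how any browser actually implements its mitigation.

    // Illustrative only; the 0.1 ms granularity is an arbitrary choice.
    const GRANULARITY_MS = 0.1;

    function coarseNow(): number {
      // Callers can still order events in time, but can no longer resolve the
      // sub-microsecond difference between a cache hit and a cache miss.
      return Math.floor(performance.now() / GRANULARITY_MS) * GRANULARITY_MS;
    }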


Great paper, but I must say that I disagree with the characterization of ASLR as "a first line of defense" in the paper, which is then repeated in most of the coverage of it.

When you reach the point that you're trying to mitigate the effects of memory corruption, it should mean that your first N lines of defense have already failed.


If I understand correctly, they are able to determine the starting address of the heap or data segment from JavaScript even with ASLR. But is it useful? Do you have to exit the JS sandbox to exploit it?


This is a component to be used in conjunction with another attack. It does the recon to determine the current memory layout. Once this is known, a more traditional sandbox escape (like a content-parser vulnerability) becomes much more useful.


Here's a web page about it: https://www.vusec.net/projects/anc/


The paper says that this requires shared-memory support for web workers, which is an RFC. Has it actually been implemented? I can't really find any info on it.


Yes, but it's not enabled by default yet (and hopefully never will be). From the paper:

>SMC builds a high-resolution counter that can be used to reliably implement AnC in all the browsers that implement it. Both Firefox and Chrome currently support this feature, but it needs to be explicitly enabled due to its experimental nature. We expect shared memory between JavaScript web workers to become a default-on mainstream feature in the near future.
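For a sense of what that shared-memory counter looks like, here's a rough sketch of my own (not the paper's code); the worker file name "worker.js" is a made-up placeholder and the use of Atomics is my choice.

    // Main thread: spawn a worker that spins on a shared counter, then read
    // that counter as a high-resolution timestamp.
    //
    // Contents of the hypothetical worker.js:
    //   onmessage = (e) => {
    //     const counter = new Uint32Array(e.data);
    //     for (;;) Atomics.add(counter, 0, 1);  // spin, bumping the shared counter
    //   };
    const sab = new SharedArrayBuffer(4);
    const counter = new Uint32Array(sab);
    const worker = new Worker("worker.js");
    worker.postMessage(sab);

    function sharedNow(): number {
      return Atomics.load(counter, 0);  // the count stands in for elapsed time
    }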

Note that the web-worker-based timer isn't required for their attack. They also discuss an alternative strategy for accurate timings that loops calls to performance.now():

>The idea behind the TTT measurement, as shown in Figure 4.4, is quite simple. Instead of measuring how long a memory reference takes with the timer (which is no longer possible), we count how long it takes for the timer to tick after the memory reference takes place. More precisely, we first wait for performance.now() to tick, we then execute the memory reference, and then count by executing performance.now() in a loop until it ticks. If memory reference is a fast cache access, we have time to count more until the next tick in comparison to a memory reference that needs to be satisfied through main memory.

and it's the TTT strategy that they use against Firefox.
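To make that counting idea concrete, here's a rough sketch of my own (not the authors' implementation), assuming performance.now() has been coarsened so it only advances in discrete ticks:

    // Tick-to-tick (TTT) sketch: classify one memory access as cached or
    // uncached by counting how many timer calls fit before the next tick.
    function timeToTick(buf: Uint8Array, index: number): number {
      // 1. Wait for the timer to tick so we start right at a tick edge.
      const before = performance.now();
      while (performance.now() === before) { /* spin */ }

      // 2. The memory reference we want to classify.
      const value = buf[index];

      // 3. Count calls to performance.now() until the next tick. A fast
      //    (cached) access leaves more time before the tick, so the count
      //    is higher than for an access served from main memory.
      let count = 0;
      const after = performance.now();
      while (performance.now() === after) {
        count++;
      }

      // Fold `value` in (as zero) so the read above isn't optimized away.
      return count + (value & 0);
    }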


Another reason not to run JavaScript from anywhere on the internet, like ads.

No javascript for you! I will not look at your ads.


Question: How much overlap is there between using ASLR and writing code in Rust? In other words, do the safety mechanics of Rust also prevent the kind of vulnerabilities that ASLR protects against? If so, does it do so to the extent that ASLR would be redundant for code written in Rust?


ASLR is only relevant if you're exploiting a memory-safety bug. It is therefore "useless" in a safe Rust program (just as it's "useless" in Python, Java, JS, etc), but this is assuming:

* all the underlying unsafe routines are correctly implemented (unlikely in theory, false in practice)

* the compiler is correctly implemented (objectively false)

* the OS doesn't have a weird bug/hole that pwns safe programs (not very informed here, would be interested to hear examples of this)

* C libraries the Rust code FFI's into are correct (libc, openssl, gtk, etc...)

All safe languages gain defense-in-depth benefits from ASLR (insofar as anything benefits from ASLR), because our whole software stack is riddled with bugs.


Rust does not protect against attacks on ASLR.


Have they published the proof-of-concept code? I understand how you can use timing measurements to determine layout in theory, but what JavaScript statements are you running that can reveal where msvcrt.dll or kernel32.dll is loaded in chrome.exe?


Given that there's an actual non-zero performance impact from ASLR, I wonder if there's any remaining benefit to having it (also from a maintenance perspective).


The non-zero performance impact of ASLR is basically zero, and it affects link-time performance more than anything else, which most devs should not care about. So before you start going for the tiniest of micro-optimizations, maybe profile.

Yes, there is absolutely a reason to use ASLR. This attack assumes that an attacker can get the target program to eval arbitrary code. This is not something most programs do. If you are remotely exploiting a vulnerable SSH daemon or some other service, this attack gives you nothing. ASLR is still very effective for that use case.

This is why I find this paper so frustrating.


This result has taken the attack from impractical to maybe hard but definitely reasonable; disabling ASLR altogether means reasonable becomes easy.


Given that this attack can be run from within large userspace programs that run code directly from the network (JS via browsers, or within VM boundaries), I don't think "reasonable" would be the right word.

The constraints are pretty small in these scenarios. Once a PoC is released, it becomes widespread pretty quickly. There's no special size or constraint that effectively stops this attack from being run on a large scale.


Not much to add but this paper was delightfully written and a pleasure to read.



