TRRespass: Rowhammer against DDR4 (vusec.net)
119 points by mdriley 19 days ago | 52 comments

IIRC, there was a proposal some time ago to use performance counters to detect a rowhammer attempt (high number of cache misses) and stop it (by pausing the offending process until the DRAM refresh can catch up). Did anything come out of it?

Maybe you're referring to ANVIL. It was demonstrated to not be completely effective.

Thanks for the keyword! Searching for "rowhammer anvil" led me to https://web.eecs.umich.edu/~genkin/papers/another-flip-rowha... which links to https://lwn.net/Articles/704920/ which is the article I was thinking of. I can't find much else about that kernel patch other than https://patchwork.kernel.org/patch/9400475/ and https://patchwork.kernel.org/patch/9401819/ so that approach was probably abandoned.
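For reference, the idea behind that patch (sample a cache-miss counter per interval, throttle the process when the miss rate spikes) can be sketched roughly like this — the threshold, window size, and counter source here are made up for illustration; the real ANVIL work used Intel PMU events:

```python
# Sketch of ANVIL-style Rowhammer detection: sample a (simulated)
# LLC-miss counter per interval; if the miss rate is suspiciously
# high, "pause" the offender so DRAM refresh can catch up.
# The threshold and window size are hypothetical.

MISS_RATE_THRESHOLD = 100_000   # misses per 10 ms window (made up)

def check_window(miss_count: int, window_ms: int = 10) -> str:
    """Decide what to do after one sampling window."""
    rate = miss_count * 10 / window_ms   # normalize to a 10 ms window
    if rate > MISS_RATE_THRESHOLD:
        return "pause"       # stall the process until refresh catches up
    return "continue"

# A normal workload vs. a cache-bypassing hammer loop:
print(check_window(2_000))      # ordinary cache behavior
print(check_window(500_000))    # hammer-like miss pattern
```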

I wish we could get rid of this dependency on the natural world that computers seem to have. Sadly, I can't think of a way to implement computers outside of reality and still have some safe interface to them.

I always see computers as instruments for creating an approximation of a Platonist mathematical space from a physical space. Theoretically, digital computing can be done to any precision we want, until the non-ideal physical properties and constraints kick in, such as time, space, processing power, and physics. In other words, leaky abstractions...

Check out Permutation City by Greg Egan for some inspiration ;)

The dependency comes from pushing the physical limits until the abstraction leaks. Older memory technologies were not vulnerable to rowhammer. But they were also a lot slower.

Is this unreliability being driven by newer DRAM being faster, or by newer DRAM being denser?

Denser & lower voltage

That's it, we're going back to core memory.

If you thought "cold boot attacks" were a problem with DDR, using memory that still holds data 50 years later might be an issue.


Core memory has temperature issues.

Unfortunately, it looks like any interface is a potential side channel…

This is why I'm pleased to announce my new Petahertz Rock Computer line. Not only are they the fastest computers in the world, but they are also the most secure. No data will leak from a system implemented on a rock computer, ever.

Buy one now and it will ship with a complimentary write-only SSD for all your secure storage needs!

Terms and conditions apply.

Is it? You can fix rowhammer by simply re-mapping all memory banks after executing every instruction.

That might mitigate Rowhammer, but does it really remove all side channels?

Perhaps photonic computing?

I think there are still lots of advances in computing needed to eventually have security from the semiconductor level up.

There's an SCP article somewhere in this...

Theoretically, if we accepted lower performance, could we design our hardware and software to actually be secure? The number of exploits over the last two years is making my head spin.

For this case we could probably fix it without any real performance impact if we actually prioritized it.

Put 14 more bits for a counter in each row, and adjust the DDR interface semantics to guarantee time for the extra refreshes, and I think you avoid all the problems this attack depends on.

Total cost <0.1%

Don't cheap out with a tiny hash table stuck to the side or whatnot.
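A toy model of that counter scheme, for the curious — the row count and the exact trigger/reset policy here are illustrative, not from any real DDR spec:

```python
# Toy model of the per-row-counter proposal: each row gets a
# 14-bit activation counter; when it saturates, the physically
# adjacent (potential victim) rows get an extra refresh and the
# counter resets. Row count and policy are illustrative.

NUM_ROWS = 1024
THRESHOLD = 2 ** 14 - 1          # 14-bit counter ceiling

counters = [0] * NUM_ROWS
extra_refreshes = []             # log of victim rows refreshed

def activate(row: int) -> None:
    counters[row] += 1
    if counters[row] >= THRESHOLD:
        for victim in (row - 1, row + 1):
            if 0 <= victim < NUM_ROWS:
                extra_refreshes.append(victim)
        counters[row] = 0        # counter resets after mitigation

# Hammer row 500 well past the threshold:
for _ in range(THRESHOLD * 2):
    activate(500)
print(extra_refreshes)           # rows 499 and 501, twice each
```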

This brings up something I've often wondered about: we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right? However, there is no real way to guarantee that full programs are secure. Can we not truly guarantee that some portion of code is secure, or is it simply the amount of code that makes it impractical (and financially infeasible)? If the latter, what is the practical boundary?

> we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right?

That's the problem: we can't. Between hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.

You can say that something very very likely doesn't trigger any known vulnerability, but only while also specifying the exact hardware to run it on and probably other external variables.

> Between hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.

I guess I was thinking software vulnerabilities specifically (although in the context of this article I should have thought about the hardware side more). Or to be even more specific, flaws that are from that program in particular.

You can formally prove that a program is a refinement of a specification, or that a subroutine always obeys certain invariants (when executed honestly). See Dafny and SPARK (Ada) for examples.

It reduces to theorem proving with an SMT solver like Z3, if your language is Turing-complete and your invariants are arbitrarily general. Solvers are fast and powerful these days, but the usual issues are:

1. It's too cumbersome to define invariants or write the type of specs that can check program correctness.

2. Things get exponential as the state space increases.

Modern checked languages are finally addressing #1, though they sometimes have blind spots (e.g. Dafny can't reason about concurrency; TLA+, which can, is very abstract and high-level).

Writing programs as bundles of isolated, formally verified modules, then proving the correctness of the program with a high-level spec that assumes module correctness, lets you scale to real, large programs. This is how Dafny works, for example -- functions declare pre- and post-conditions, so Dafny can throw away its reasoning about a callee's internals when checking a caller.

This strategy really is overlooked! It's powerful enough that you can eliminate memory/argument checks at compile time, ditch all your unit tests and KNOW your program implements a spec. It should be mandatory for all safety-critical systems that can't rely on hardware fail-safes (e.g. airplanes.)

It just takes 3x as long.
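A runtime-checked analogue of that pre/post-condition style, for flavor — Dafny proves these contracts statically at compile time, whereas this plain-Python decorator (names and conditions illustrative) can only check them during execution:

```python
# Runtime analogue of Dafny-style contracts: a caller only needs a
# function's declared contract, not its internals. Dafny verifies
# this statically; here the checks happen at run time.

def contract(pre=None, post=None):
    def wrap(fn):
        def inner(*args):
            if pre is not None:
                assert pre(*args), f"precondition of {fn.__name__} violated"
            result = fn(*args)
            if post is not None:
                assert post(result, *args), f"postcondition of {fn.__name__} violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    m = xs[0]
    for x in xs[1:]:
        if x < m:
            m = x
    return m

print(minimum([3, 1, 2]))   # 1
```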

On the software side, there's a lot that can be proven and said about small sections of programs. I think a sibling commenter talked about an ISO standard related to that (and I think it covers some hardware bits). My layman's understanding is that it's a way to specify as many assumptions as possible about how the code will be run, where, and what side effects it will have and no others. That makes for a really nice set of assertions, but the end effect is that you can't say anything about certain kinds of programs; i.e. this program is safe, that program can't be safe, and these programs are unknowable. Gödel should never have been allowed to publish anything.

Making those proofs is nice and all, but they rarely take side channels or processor bugs into account.

> we can basically guarantee that a very short snippet of code (say, a few lines) is secure, right?

I’m not sure that’s true, but even if it is, you have to ask: Which few lines?

Let’s say we have technology that can analyse (say) five lines of code and prove them completely correct and secure. That doesn’t mean you can prove a 500-line program secure by running the analysis on each of the 100 sets of five consecutive lines: code in one place can affect the logic elsewhere. So to prove a 500-line program correct, we’d need to analyse every possible combination of 5 lines chosen from the 500 => 255,244,687,600 different proofs to check!
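That count is just "500 choose 5", and the arithmetic checks out:

```python
import math

# Ways to pick 5 lines out of 500: C(500, 5)
print(math.comb(500, 5))   # 255244687600
```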

I guess that forms the core of the issue: it's inherently exponential (or factorial?).

It's not just that. Before you can prove anything "correct" you need to define what "correct" means. Now you have the meta problem of proving that your definition of "correct" is correct.

Turing complete.

I think what you are asking for is Common Criteria.


That seems to have the smell of incompetent bureaucracy…

In the context of Rowhammer, we do already, right? Rowhammer susceptibility is considered a defect in memory modules; it's a deviation from the functional specification of the component.

See also this quote in the article: "We followed a multi-party coordinated vulnerability disclosure involving the main memory vendors"

You can (almost fully) solve this by using a rack of low-power computers, each running a single task, instead of one faster one.

There is always a trade-off between security and performance. Many vulnerabilities could be fixed if we accepted lower performance.

In this case, you’d refresh more often.

Or, if you can spare the cost, switch to SRAM instead. Not sure the density would be comparable enough to be reasonable even with the 100x jump in price (source: pulled out of hat).

At scale the price and density would be correlated so it might be more like 10x.

ECC should be standard and mandatory at this point.

Not everybody's working on the web or running untrusted code. It's not fair to force people to pay a financial or performance hit for something they don't need; the world loses if people's scientific computations take 20% longer because of some ugly patch for an obscure vulnerability in speculative execution, or because a university can afford fewer computing resources when the only option is expensive RAM.

If ECC were made for the same markets as normal memory, it would cost <12% more and make entire computers cost <2% more, with no meaningful performance hit. It's reasonable and should be mainstream.

You want ECC for trusted code, too.

ECC is not a total solution.


How expensive would 16 GiB of refresh-less "4 nm" SRAM placed on the CPU package flip-chip style be?

SRAM isn't competitive for bulk storage with access latency >10~50ns. You'd just use DRAM cells and tune the parameters until they deliver your desired performance. The energy efficiency of normal server DDR4 (non-overclocked, high-density) is orders of magnitude better than state-of-the-art SRAM, due to leakage. You _only_ have to add more ECC bits and more aggressive feedback trading refresh rate against response time.

It's not particularly hard to detect patterns that try to provoke Rowhammer and respond with even more aggressive countermeasures. DoS vectors on that front are already to be expected, so turning Rowhammer attempts into something akin to a no-worse-than-2x slowdown seems an easy ask.

Ludicrously expensive. The largest SRAMs that are commercially available are around 288 Mbit (32 MB with parity), and cost hundreds of dollars per chip.

I'm not sure you could even physically fit 16 GiB of SRAM onto a CPU with current technology. SRAM cells are much larger than DRAM cells.

AMD’s Epyc 7742 has 256 MB of SRAM in its central die. That die is massive and the chip costs 7 grand.

Time yet to add support to encrypt memory transparently in the DRAM controller?

That already exists with AMD SME and AMD SEV (https://en.wikipedia.org/wiki/Zen_(microarchitecture)#Enhanc...). But to protect against bit flipping attacks like rowhammer you need authentication, not encryption, and I don't think these features authenticate the encrypted memory (since they would need extra memory to hold the authentication tag).

The parent post is probably thinking of the bit-scrambling involved in the encryption providing some form of protection.

Yeah, it should prevent usable bit flips from happening. Instead you'd get complete corruption of a block.

By authentication you mean some kind of HMAC? I guess that's fair. Still I figured that corrupting 1 bit in the encrypted output would corrupt the entire block & thus be very difficult to exploit.
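For illustration, any keyed MAC over the block would do: flip even one bit of the stored data and the tag no longer verifies. A stdlib sketch (key and block sizes here are arbitrary, not what real hardware would use):

```python
import hmac, hashlib, os

# Why authentication (not just encryption) catches a Rowhammer flip:
# a MAC over each memory block fails to verify if even one bit of
# the block changes. Sizes are arbitrary, for illustration only.

key = os.urandom(16)
block = bytearray(os.urandom(64))            # one "memory block"
tag = hmac.new(key, block, hashlib.sha256).digest()

# Simulate a single Rowhammer bit flip in the stored block:
block[10] ^= 0x04

flipped_tag = hmac.new(key, block, hashlib.sha256).digest()
print(hmac.compare_digest(tag, flipped_tag))   # False: flip detected
```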

might want to merge this with https://news.ycombinator.com/item?id=22547324
