RowHammer: A Retrospective (arxiv.org)
65 points by matt_d 16 days ago | 19 comments



I'm not a security expert by any means, but Rowhammer has always struck me as one of the most ingenious hacks ever. I mean, if you can flip bits just by repeatedly activating a nearby row of memory, then what can't you do? Nothing is sacred. There is no God


Rowhammer is special because it is active, analog, and remote.

Sure, there have been analog side channel data leaks by observing the power consumption of smart cards, or even analog active attacks by messing with the power supplied to smart cards.

But Rowhammer is active, can be done over the network, and relies on analog properties. It's really something special and, as a software person, quite scary.


Flip bits by remotely executing a web page (which runs JavaScript) on the target computer

https://ieeexplore.ieee.org/abstract/document/7546493


Well, it's a hardware defect in some memory modules. The same types of tricks can be used to trigger many kinds of hardware bugs in various CPUs and other chips.


This paper is fun [1].

TL;DR: rowhammer via JS/WebGL.

[1] http://web.eecs.umich.edu/~genkin/teaching/fall2018/EECS598-...


Similar stuff can happen with NAND flash, but the NAND flash controller can have algorithms to intervene.


NAND always has at least one or two layers of ECC protecting your data, so deliberately induced read disturb errors are a lot harder to turn into a practical exploit—and that's before adding any specific measures to predict and prevent read disturb errors.


I used to work in SSD firmware development. Thing is, the newer generations of NAND are quite fragile, so the ECC protection is essential, and it is a lot easier to trigger read disturb since you can issue millions of reads with ease. Of course, various caches in the controller and system can mitigate the trivial access patterns. With read-disturb detection algorithms, the difficulty is that you need a lot of fine-grained access statistics to make them optimal.
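
To make the fine-grained-statistics point concrete, here is a minimal sketch of a per-block read-disturb counter of the kind a controller might keep. The block count and threshold are made-up illustrative values, not taken from any real firmware:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_BLOCKS       1024     /* hypothetical: blocks tracked by the controller  */
    #define READ_DISTURB_MAX 100000   /* hypothetical: reads tolerated before relocation */

    /* One counter per block -- this per-block state is the
     * fine-grained access statistics the parent comment mentions. */
    static uint32_t read_count[NUM_BLOCKS];

    /* Called on every host read that lands in `block`.  Returns true when the
     * block's data should be relocated (rewritten elsewhere) to clear the
     * accumulated read-disturb stress on neighbouring pages. */
    bool note_read(uint32_t block)
    {
        if (++read_count[block] >= READ_DISTURB_MAX) {
            read_count[block] = 0;   /* counter resets once the data is moved */
            return true;             /* tell the FTL to schedule a relocation */
        }
        return false;
    }

Real firmware has to balance how many counters it can afford (per block, per block group, or sampled) against how precisely it can predict disturb, which is the tuning difficulty described above.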


The "Mitigation" section here gives a friendly explanation:

https://en.wikipedia.org/wiki/Row_hammer#Mitigation

Summarizing:

The DRAM is refreshed every 64ms, which isn't always frequent enough to prevent bit errors. This could be solved by an 8ms refresh interval, but performance would suffer.
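
To put rough numbers on that trade-off (typical DDR4 figures; the exact timings are part-specific and these are my assumptions, not from the article): each refresh command keeps the device busy for tRFC, and 8192 refresh commands have to be spread across the refresh window, so the busy fraction grows quickly as the window shrinks:

    #include <stdio.h>

    /* Rough refresh-overhead estimate for a DDR4 device.
     * tRFC (device busy time per refresh command) assumed ~350 ns (8 Gb part);
     * 8192 refresh commands per refresh window. */
    int main(void)
    {
        const double tRFC_ns = 350.0;              /* assumed, part-specific */
        const double cmds_per_window = 8192.0;
        const double windows_ms[] = { 64.0, 8.0 };

        for (int i = 0; i < 2; i++) {
            double tREFI_ns = windows_ms[i] * 1e6 / cmds_per_window;
            double overhead = tRFC_ns / tREFI_ns;  /* fraction of time spent refreshing */
            printf("%4.0f ms window: tREFI = %6.0f ns, refresh overhead ~%4.1f%%\n",
                   windows_ms[i], tREFI_ns, overhead * 100.0);
        }
        return 0;
    }

With those assumed numbers the 64ms window costs roughly 4-5% of device time, while an 8ms window costs roughly 35%, which is why simply refreshing faster isn't treated as an acceptable fix.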

Several clever techniques (hacks) have been thought up to provide as-needed refreshing of specific memory rows that are at risk. These have minimal performance impact compared with a faster refresh rate overall.


in _practical_ summary: no mitigation, unless you degrade performance considerably or buy a very specific, impossible-to-find CPU+board+RAM combination.

also,

> some manufacturers implement TRR in their DDR4 products,[26][27] although it is not part of the DDR4 memory standard published by JEDEC.[28] Internally, TRR identifies possible victim rows, by counting the number of row activations and comparing it against predefined chip-specific maximum activate count (MAC) and maximum activate window (tMAW) values, and refreshes these rows to prevent bit flips.

Both sources cite manufacturers thinking about producing this. I can't find a single place offering "DDR4 TRR". But maybe they just forget to mention it, since RAM today is sold more on the color of its heatsink than anything else.
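
For what it's worth, the counting scheme the quoted text describes is roughly the following (row count, MAC value, and the window handling here are made up for illustration; the real logic lives inside the DRAM or the memory controller):

    #include <stdint.h>
    #include <string.h>

    #define NUM_ROWS 65536     /* hypothetical rows per bank            */
    #define MAC      300000    /* maximum activate count, made-up value */

    static uint32_t act_count[NUM_ROWS];

    /* Issue a targeted refresh of one row (hardware-internal operation). */
    extern void refresh_row(uint32_t row);

    /* Called on every ACTIVATE of `row`.  If the row has been hammered past
     * MAC within the current window, refresh its physical neighbours (the
     * potential victim rows) before bits can flip. */
    void on_activate(uint32_t row)
    {
        if (++act_count[row] >= MAC) {
            act_count[row] = 0;
            if (row > 0)            refresh_row(row - 1);  /* victim above */
            if (row + 1 < NUM_ROWS) refresh_row(row + 1);  /* victim below */
        }
    }

    /* Called when the maximum activate window (tMAW) expires: start counting
     * afresh for the next window. */
    void on_tMAW_expiry(void)
    {
        memset(act_count, 0, sizeof act_count);
    }

A counter per row is expensive in hardware, so real implementations track far fewer candidate rows, and, as the quote notes, none of this is required by the JEDEC DDR4 standard.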


Not to excuse poor hardware, but rowhammer can be pretty easily mitigated in software by using guard rows as mentioned in the paper.

To explain, rowhammer allows you to flip bits in adjacent memory rows. So, if we just put an unused row of physical memory between two pieces of physical memory we want protected from each other, then rowhammer attacks as normally described are effectively impossible.

To provide a simple case, assume you have a totally static system with two processes, A and B, that need 2MB each, executing on hardware with an 8KB row size (order of magnitude correct). Then if we put process A in physical addresses 2MB to 4MB and process B in physical addresses (4MB + 8KB) to (6MB + 8KB), the physical memory is separated by more than the row size, and rowhammer attacks in one process cannot affect the other.

The main thing about this solution is that physical memory needs to be allocated in at least row-sized blocks, and you eat overhead any time you need to insert a guard row. However, the overhead can be largely mitigated if you only insert guard rows between the security boundaries of your code instead of on every allocation.

As an example of how we could mitigate overhead by choosing a larger security boundary, say we have VMs on a shared host, and each VM gets 1GB of guaranteed memory. Then when we create a new VM we can just allocate 1GB of contiguous physical memory and insert a guard row after it. Since the memory of the VMs is not interleaved, all the memory of a VM is further than one guard row from the memory of any other VM, so all VMs are safe from rowhammer attacks by any of the other VMs. With this solution, we only eat an 8KB overhead on a 1GB allocation, which is trivial.
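
A minimal sketch of that allocation policy, assuming a flat contiguous physical region, a fixed 8KB row size, and a simple bump allocator (a real OS or hypervisor would do this in its physical frame allocator, and would also need the controller-specific mapping from physical addresses to DRAM rows):

    #include <stdint.h>
    #include <stddef.h>

    #define ROW_SIZE (8u * 1024u)   /* assumed DRAM row size: 8 KB */

    /* Next unallocated physical address; initialised to the base of the
     * managed physical region before first use. */
    static uintptr_t next_free;

    static uintptr_t align_up(uintptr_t x, uintptr_t a)
    {
        return (x + a - 1) & ~(a - 1);
    }

    /* Allocate `size` bytes of contiguous physical memory for one security
     * domain (process, VM, ...) and leave one unused guard row after it, so
     * no domain ends up row-adjacent to another.  Returns the domain's base
     * physical address. */
    uintptr_t alloc_domain(size_t size)
    {
        uintptr_t base = align_up(next_free, ROW_SIZE);  /* start on a row boundary */
        uintptr_t end  = align_up(base + size, ROW_SIZE);
        next_free = end + ROW_SIZE;                      /* skip one row: the guard */
        return base;
    }

For the 1GB-per-VM example that's one wasted 8KB row per VM, i.e. less than 0.001% overhead.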


As of 2019 do we have practical mitigations against RowHammer? Or do we still pretend that it isn’t something to be worried about, and keep producing DRAM that’s fundamentally broken?


See my other comment, or https://en.wikipedia.org/wiki/Row_hammer#Mitigation

Edit: As gcb0 mentions, these may be theoretical only: https://news.ycombinator.com/item?id=19828147



Return the defective memory.


I've never understood this. Why not just have the memory controller transparently encrypt/decrypt the data using a random key generated at boot (or hell, even a unique key programmed per controller at the factory)? Then you get actual integrity validation against any kind of error/bit flip, not just RowHammer. Then you'd also be using ECC for its purpose of correcting errors rather than hoping it's also catching attacks (which it turns out it can't).


"Why not" turns out to have quite an expensive answer (both in silicon and runtime latency). If you just cipher using a standard algorithm like AES then performance isn't great, but adding integrity protection to it further slows that down. Of course, you can use a lighter cipher, but then you don't have the years of cryptanalysis that gives everyone some assurance.

I think the jury is still out on AMD SME and Intel TME in terms of their respective performance hits.
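
As a rough software illustration of that cost (this is not how SME/TME work internally, just a sketch using OpenSSL): authenticate each 64-byte "cache line" with AES-256-GCM under a per-boot random key. Any bit flip in the stored ciphertext makes the tag check fail, but every line now needs a 16-byte tag stored somewhere, plus an encrypt/decrypt pass on each write-back and fill:

    /* build: cc memauth.c -lcrypto */
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>
    #include <string.h>

    #define LINE 64   /* one cache line */

    int main(void)
    {
        unsigned char key[32], iv[12];   /* per-boot random key, per-line nonce */
        unsigned char line[LINE] = "data that must not be tampered with";
        unsigned char enc[LINE], dec[LINE], tag[16];
        int len, ok;

        RAND_bytes(key, sizeof key);
        RAND_bytes(iv, sizeof iv);

        /* Write-back path: encrypt the line and produce an integrity tag. */
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, enc, &len, line, LINE);
        EVP_EncryptFinal_ex(ctx, enc + len, &len);
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof tag, tag);
        EVP_CIPHER_CTX_free(ctx);

        enc[10] ^= 0x04;   /* simulate a Rowhammer bit flip in DRAM */

        /* Fill path: decrypt and verify the tag before handing data to the CPU. */
        ctx = EVP_CIPHER_CTX_new();
        EVP_DecryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
        EVP_DecryptUpdate(ctx, dec, &len, enc, LINE);
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_TAG, sizeof tag, tag);
        ok = EVP_DecryptFinal_ex(ctx, dec + len, &len);   /* 1 = tag valid */
        EVP_CIPHER_CTX_free(ctx);

        printf("integrity check %s\n", ok == 1 ? "passed" : "FAILED (flip detected)");
        return 0;
    }

Even with hardware AES, that's a 16-byte tag per 64-byte line (25% storage overhead if kept inline) plus crypto latency on every DRAM access, which is a big part of why shipping transparent-memory-encryption schemes have generally provided confidentiality without integrity.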


What is the current state of RowHammer in newly produced memory?


From the article:

> Unfortunately, despite the many proposals in industry and academia to fix the RowHammer issue, RowHammer failures still seem to be observable in state-of-the-art DRAM devices in a variety of generations and standards (e.g., DDR4, ECC DRAM, LPDDR3 and LPDDR2 DRAM).

I think newly produced would mean DDR4.



