The manufacturers obviously don't want to, but the only way to stop this stupidity is to reject/return/refuse the product as defective if it shows this vulnerability. There have been a lot of efforts to downplay it, to the point that even some memory testing tools have made the tests for RH optional. This is ridiculous not just from a security standpoint, but for overall correctness. Memory that doesn't behave like memory should is not fit for purpose.
Unfortunately, RAM defects are often very subtle. I remember a particularly irritating one which happened only when extracting a certain ZIP file; all the memory testing tools said the RAM was fine even after a few days of continuous running, and the file extracted correctly on a handful of other systems, but on this one it would always end up corrupt. Attaching a debugger or otherwise attempting to trace the cause naturally made it disappear. Only swapping the RAM for a new module fixed it.
So it seems more of an issue with a change in acceptable tolerances - what is fine for normal usage might not be secure. Also, I might be mistaken, but the author's response implied that every brand of RAM was vulnerable.
But, with that said, memory is extremely low margin (DRAM is a commodity market) and is priced partly based on yield. Adding rounds of QA that would fail many chips, pushing yield down and forcing process changes, is extremely expensive for manufacturers. I also want to pressure manufacturers to make a more reliable, correct product, but that has a cost. Even if a law were passed that all commodity DRAM had to be RH-hard, the price would jump. In any other scenario, RH-hard DRAM would be a speciality need and priced accordingly.
And even ECC DRAM has tolerances and the potential for silent, corrupting failure. It isn't perfect either.
If I remember correctly from the original Rowhammer paper, anything from ~2009 or earlier is not affected. Price for RAM was not particularly high in those times either, and I'd be willing to pay the same (inflation-adjusted) amount today if it meant I didn't have to worry about these problems.
> In any other scenario, RH-hard dram would be a speciality need and priced accordingly.
It shouldn't be, because the access pattern could show up incidentally in other ways and cause problems like silent corruption.
One possible explanation of how it is possible, the parasitic coupling effect: https://github.com/google/rowhammer-test/blob/master/docs/re...
It is a trade-off of density and coupling: the closer you get, the higher the parasitics. But if you stay far apart, you have low density (expensive). You also likely need different caching, error correction, and refresh settings.
The concept of parasitic write-induced loss is really old in memory; it happens in NAND flash too. It is perhaps surprising that attacks like Rowhammer never happened before. Or maybe we never knew?
Huh? Rowhammer is an access pattern that may reveal certain defects in RAM.
If the data stored in the physical memory has been encrypted using a randomly generated secret key by the memory controller on the CPU, it should be impossible to generate an exact target data pattern in the victim even when the aggressor and victim are in the same enclave.
That in itself isn't sufficient to guard against all attacks, because the memory enclaves don't provide data integrity, only encryption. So if all you're trying to do is to change a victim's boolean that controls whether you have e.g. root access, then changing that boolean from false (0) to true (any non-zero value) is going to succeed with high probability despite the encryption.
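To make the argument concrete, here is a minimal toy sketch of why encryption without integrity doesn't stop this. It models block decryption as a random oracle (NOT real crypto, and not SGX's actual cipher): any Rowhammer-induced bit flip in the ciphertext decrypts to an unrelated pseudorandom block, and any non-zero byte counts as "true". The key and function names are hypothetical.

```python
import hashlib

# Toy model, NOT real crypto: treat decryption of a tampered
# ciphertext block as a random oracle, so a single flipped bit
# yields an unrelated pseudorandom plaintext block.
def toy_decrypt(key: bytes, ciphertext_block: bytes) -> bytes:
    return hashlib.sha256(key + ciphertext_block).digest()[:16]

key = b"hypothetical-memory-key"
trials = 1000
wins = 0
for i in range(trials):
    # Each i stands in for one Rowhammer-corrupted ciphertext block.
    corrupted = i.to_bytes(16, "big")
    plaintext = toy_decrypt(key, corrupted)
    # The attacker "wins" if the byte holding the boolean
    # decrypts to anything non-zero, i.e. true.
    if plaintext[0] != 0:
        wins += 1
print(wins / trials)  # close to 255/256: the flip almost always succeeds
```

So even though the attacker can't choose the resulting plaintext, a false-to-true flip works with probability roughly 255/256 per attempt.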
Still, maybe there's an angle here that can help put an end to Rowhammer once and for all in a few hardware generations?
> That [encrypted memory] in itself isn't sufficient to guard against all attacks
You might want to read about authenticated encryption. The authenticity of the decrypted data usually comes from the block cipher mode (for example OCB) rather than from the block cipher primitive itself; TRESOR, for instance, uses the AES-NI instructions. I would imagine SGX does something along the same lines.
The reason I'm doubting that there's integrity protection for data while in RAM is that it's information-theoretically impossible to do that without memory overhead.
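That overhead is easy to quantify with a small sketch: to authenticate memory you have to store a tag somewhere, so each protected line costs extra bytes. The line size, tag size, and key below are hypothetical illustration values, not SGX's actual scheme.

```python
import hashlib
import hmac

LINE_BYTES = 64   # hypothetical protected granularity (one cache line)
TAG_BYTES = 8     # hypothetical truncated MAC stored per line

key = b"hypothetical-integrity-key"

def protect(line: bytes) -> bytes:
    # Append a MAC tag to each line; checking it on every read would
    # catch Rowhammer flips, but the tag itself consumes extra RAM.
    assert len(line) == LINE_BYTES
    tag = hmac.new(key, line, hashlib.sha256).digest()[:TAG_BYTES]
    return line + tag

stored = protect(bytes(LINE_BYTES))
overhead = TAG_BYTES / LINE_BYTES
print(len(stored), f"{overhead:.1%}")  # 72 bytes stored, 12.5% overhead
```

Shrinking the tag reduces the overhead but raises the forgery probability, which is exactly the information-theoretic trade-off: zero overhead means zero integrity.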
This doubt seems to be confirmed by the descriptions of the ELDB/ELDU/EWB instructions: they perform cryptographic authentication while copying pages to and from the enclave. There don't seem to be any integrity checks while data is live in the enclave -- and that's what matters for Rowhammer protection.
> Fortunately, while the attack would be extremely difficult to prevent, it also looks to be very difficult to actually pull off in the wild. (...) the VU Amsterdam team said a successful attack in a noisy system can take as long as a week. (from https://www.theregister.co.uk/2018/11/21/rowhammer_ecc_serve...)
Well, if you don't know that you are under attack, the attack taking a week isn't exactly a disadvantage for the attacker. And if the attack can be divided among many agents, even if not in parallel, that can make it even harder to notice that you're under attack.
Rowhammer has been around for four years, yet it has never been seen in practice. Ever. Can we stop with reporters passing off basic academic research as a current threat? It's just research pr0n, sci-fi hacking. It's not even remotely close to being a threat.
In a former project (with a custom board and FPGA, no processor), we encountered random bugs. We put many checks and finally came to the conclusion that our DDR modules were flipping bits randomly, but only on normal load. All test benches were running fine. Putting the modules in a PC did not show any problem.
How can you tell your boss "all DDR modules are faulty but run fine in a PC" and not seem crazy?
It was when I read about rowhammer attacks that I made the link. Changing the addressing schema completely solved the issue.
All these hardware-related failures/attacks may not be a threat (yet), but for me, the underlying sand castle we are building things on is very worrying.
Can you elaborate at all? Do you mean you changed the FPGA so it doesn't write to exactly the same spots over and over? And then set up something similar to wear leveling?
With this change, the slowly moving indexes became big jumps, which avoids reading the same row over and over again (mixed with other requests).
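A minimal sketch of the kind of address remapping being described (the field widths and function names are hypothetical, not the actual FPGA design): swap the row and column fields of the address, so the fast-moving low bits of a sequential index select the row, and a slowly advancing pointer jumps between rows instead of hammering one.

```python
COL_BITS = 10   # hypothetical column-address width
ROW_BITS = 14   # hypothetical row-address width

def remap(addr: int) -> int:
    # Swap the row and column fields: the low (fast-moving) bits of
    # the logical address end up in the physical row field, so
    # consecutive accesses land in different rows.
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    return (col << ROW_BITS) | row

def row_of(addr: int) -> int:
    # Extract the row field of a (remapped) physical address.
    return (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)

# Consecutive logical addresses now hit different physical rows.
print(row_of(remap(0)), row_of(remap(1)), row_of(remap(2)))  # → 0 16 32
```

The trade-off is locality: scattering accesses across rows defeats the hammering pattern but also produces more row activations, which ties into the performance question below.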
Didn't you notice a huge drop in performance from producing row-buffer conflicts over and over?
After I implemented the change and had a stable system, I went as far as implementing a switch of addressing pattern so that, all other things being equal, I could trigger the bug or not. And it worked as expected.
So yes, I am very confident in the analysis.
Because both are impractical. Using a minutes-long RAMpage attack to root an already compromised device is impractical; there are easier ways to do that. And for Drammer, it's easier to update Chrome than the Android OS, so that was mitigated ages ago. Nobody uses a three-year-old Chrome on Android. Get real... and spare me the comment about how some users don't update apps, because they do; Google makes sure to spam users about it daily.
(Note that I'm not commenting on the attack vector at all, only about the Android update settings)