Buy one now and it will ship with a complimentary write-only SSD for all your secure storage needs!
Terms and conditions apply.
I think there are still a lot of advances needed in computing before we eventually get security from the semiconductor level up.
Put 14 more bits for a counter in each row, and adjust the DDR interface semantics to guarantee time for the extra refreshes, and I think you avoid all the problems this attack depends on.
Total cost <0.1%
Don't cheap out with a tiny hash table stuck to the side or whatnot.
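A toy model of the per-row counter idea (the threshold and the neighbour-refresh policy here are illustrative assumptions, not DDR spec values, and real logic would live in the DRAM itself, not software):

```python
# Illustrative model of a per-row activation counter: each row activation
# increments a 14-bit counter; when a row's count crosses a threshold, its
# physical neighbours (the potential Rowhammer victims) get an extra refresh
# and the counter resets. THRESHOLD is a made-up number for the sketch.
THRESHOLD = 4096
COUNTER_MAX = (1 << 14) - 1   # 14-bit counter as proposed above

counters = {}                 # row index -> activations since last extra refresh

def activate(row, refresh_neighbours):
    counters[row] = min(counters.get(row, 0) + 1, COUNTER_MAX)
    if counters[row] >= THRESHOLD:
        refresh_neighbours(row - 1, row + 1)  # recharge the victim rows
        counters[row] = 0
```

Because every row gets its own counter, there is no shared structure (like a small hash table) for an attacker to overflow or evict entries from.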
That's the problem: we can't do that. With hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.
You can say that something very, very likely doesn't trigger any known vulnerability, but only while also specifying the exact hardware it runs on, and probably other external variables too.
I guess I was thinking software vulnerabilities specifically (although in the context of this article I should have thought about the hardware side more). Or to be even more specific, flaws that are from that program in particular.
It reduces to theorem proving with an SMT solver like Z3, if your language is Turing-complete and your invariants are arbitrarily general. SMT solvers are fast and powerful these days, but the usual issues are:
1. It's too cumbersome to define invariants or write the type of specs that can check program correctness.
2. Things get exponential as state space increases.
Modern checked languages are finally addressing #1, though they sometimes have blind spots (e.g. Dafny can't reason about concurrency; TLA+, which can, is very abstract and high-level).
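Issue #2 is easy to see even without a solver: naive exhaustive checking of an invariant over n boolean state variables visits 2**n states. A minimal sketch:

```python
from itertools import product

def check_invariant(n, invariant):
    """Exhaustively check `invariant` over all 2**n boolean states."""
    return all(invariant(state) for state in product([False, True], repeat=n))

# Example invariant: "the flags are never all set at once".
inv = lambda state: not all(state)
print(check_invariant(3, inv))  # False: the all-True state violates it
# Each extra variable doubles the work: 2**10 = 1024 states, 2**20 > 1e6, ...
```

Real solvers prune this space aggressively, but the worst case is still exponential, which is why invariant and spec design matters so much.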
Writing programs as bundles of isolated, formally verified modules, then proving the correctness of the program with a high-level spec that assumes module correctness, lets you scale to real, large programs. This is how Dafny works, for example -- functions declare pre- and post-conditions, so Dafny can throw away its reasoning about a callee function's internals when checking a caller.
This strategy really is overlooked! It's powerful enough that you can eliminate memory/argument checks at compile time, ditch all your unit tests, and KNOW your program implements a spec. It should be mandatory for all safety-critical systems that can't rely on hardware fail-safes (e.g. airplanes).
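A rough sketch of the modular idea in Python (Dafny checks contracts statically at compile time; this runtime-checked `contract` decorator is a hypothetical stand-in, not a real library, just to show the shape):

```python
import functools

def contract(requires=lambda *a: True, ensures=lambda result, *a: True):
    """Attach a pre-condition and a post-condition to a function.
    A caller only needs the contract, never the implementation."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args):
            assert requires(*args), f"precondition of {f.__name__} violated"
            result = f(*args)
            assert ensures(result, *args), f"postcondition of {f.__name__} violated"
            return result
        return wrapper
    return deco

@contract(requires=lambda xs: len(xs) > 0,
          ensures=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)
```

The point of the decomposition: once `minimum`'s body is verified against its contract, a verifier checking a caller can reason purely from "the result is an element of `xs` and no element is smaller", and never re-examine `min`.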
It just takes 3x as long.
I’m not sure that’s true, but even if it is, you have to ask: Which few lines?
Let’s say we have technology that can analyse (say) five lines of code and prove them completely correct and secure. That doesn’t mean you can prove a 500-line program secure by running the analysis on each of the 100 sets of five consecutive lines. Code in one place can affect the logic elsewhere. So to prove a 500-line program correct, we’d need to analyse every possible combination of 5 lines chosen from the 500 => 255244687600 different proofs to check!
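That count is just 500-choose-5, which the standard library can confirm:

```python
import math

# Number of ways to pick 5 lines (in any positions) out of 500:
print(math.comb(500, 5))  # 255244687600
```

So even a perfect five-line analyser buys you nothing compositionally unless line subsets can be ruled independent of each other.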
See also this quote in the article. "We followed a multi-party coordinated vulnerability disclosure involving the main memory vendors"
You want ECC for trusted code, too.
It's not particularly hard to detect patterns that try to provoke Rowhammer and respond with even more aggressive countermeasures. DoS vectors on that front are already to be expected, so turning Rowhammer attempts into something akin to a no-worse-than-2x slowdown seems an easy ask.
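One possible shape for such a detector (purely a sketch of the heuristic, an assumption about what a memory controller might do, with made-up window and ratio constants): if one row dominates activations within an observation window, serve its accesses at half speed.

```python
from collections import Counter

WINDOW = 10_000      # activations per observation window (hypothetical)
HAMMER_RATIO = 0.2   # one row taking >20% of a window looks like hammering

class Throttle:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def activate(self, row):
        """Return the relative cost of this activation:
        1 = full speed, 2 = throttled (the 2x-slowdown budget)."""
        self.counts[row] += 1
        self.total += 1
        suspicious = self.counts[row] > HAMMER_RATIO * WINDOW
        if self.total >= WINDOW:      # start a fresh window
            self.counts.clear()
            self.total = 0
        return 2 if suspicious else 1
```

Normal workloads spread activations across many rows and never trip the ratio; a hammering loop concentrated on one aggressor row gets taxed, which caps it at the advertised no-worse-than-2x cost.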
I'm not sure you could even physically fit 16 GiB of SRAM onto a CPU with current technology. SRAM cells are much larger than DRAM cells.