Sure, there have been analog side channel data leaks by observing the power consumption of smart cards, or even analog active attacks by messing with the power supplied to smart cards.
But Rowhammer is active, can be done over the network, and relies on analog properties. It's really something special and, as a software person, quite scary.
TL;DR: rowhammer via JS/WebGL.
The DRAM is refreshed every 64ms, which isn't always frequent enough to prevent bit errors. This could be solved by an 8ms refresh interval, but performance would suffer.
Several clever techniques (hacks) have been devised to provide as-needed refreshing of specific memory rows that are at risk. These have minimal performance impact vs. a faster refresh rate overall.
> some manufacturers implement TRR in their DDR4 products, although it is not part of the DDR4 memory standard published by JEDEC. Internally, TRR identifies possible victim rows, by counting the number of row activations and comparing it against predefined chip-specific maximum activate count (MAC) and maximum activate window (tMAW) values, and refreshes these rows to prevent bit flips.
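To make the counting idea concrete, here's a minimal sketch of TRR-style bookkeeping. This is not how any real memory controller implements it (real TRR is in hardware and its MAC/tMAW values are chip-specific); the `MAC` threshold and the `TRRCounter` class are purely illustrative.

```python
# Illustrative sketch of TRR-style activation counting.
# MAC value below is made up; real chips define their own.
MAC = 50_000  # maximum activate count before neighbors get refreshed

class TRRCounter:
    def __init__(self):
        self.activations = {}  # row -> activation count in current tMAW window
        self.refreshed = []    # potential victim rows refreshed early

    def activate(self, row):
        self.activations[row] = self.activations.get(row, 0) + 1
        if self.activations[row] >= MAC:
            # Rows physically adjacent to a heavily-activated (aggressor)
            # row are the potential victims; refresh them ahead of schedule.
            self.refreshed.extend([row - 1, row + 1])
            self.activations[row] = 0

    def reset_window(self):
        # Called once per tMAW: counts only matter within the window.
        self.activations.clear()

trr = TRRCounter()
for _ in range(50_000):
    trr.activate(7)       # hammer row 7
print(trr.refreshed)      # → [6, 8]
```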
both sources cite manufacturers thinking about producing this. I can't find a single place offering "ddr4 trr". But they may simply not mention it, since RAM today sells mostly on the color of the heatsink rather than anything else.
To explain, rowhammer allows you to flip bits in adjacent memory rows. So, if we just put an unused row of physical memory between two pieces of physical memory we want protected from each other, then rowhammer attacks as normally described become effectively impossible.
To provide a simple case, assume you have a totally static system with two processes, A and B, that need 2MB each, executing on hardware with an 8KB row size (order of magnitude correct). Then if we put process A at physical addresses 2MB to 4MB and process B at physical addresses (4MB + 8KB) to (6MB + 8KB), the two regions are separated by a full unused row, and a rowhammer attack in one process cannot affect the other.
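The layout above can be checked with a few lines of arithmetic (the addresses and row size are the ones assumed in the example, not real hardware values):

```python
# Guard-row layout from the example: 8KB row size,
# process A at 2MB..4MB, process B at (4MB+8KB)..(6MB+8KB).
MB = 1 << 20
ROW = 8 << 10  # 8KB row size (order-of-magnitude assumption from the text)

a_start, a_end = 2 * MB, 4 * MB
b_start, b_end = 4 * MB + ROW, 6 * MB + ROW

gap = b_start - a_end
print(gap >= ROW)  # → True: a full unused row sits between A and B
```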
The main thing about this solution is that physical memory needs to be allocated in at least row-sized blocks, and you eat overhead any time you insert a guard row. However, the overhead can be largely mitigated by only inserting guard rows at the security boundaries of your code instead of on every allocation.
As an example of how we could mitigate overhead by choosing a larger security boundary, say we have VMs on a shared host, where each VM gets 1GB of guaranteed memory. Then when we create a new VM we can just allocate 1GB of contiguous physical memory and insert a guard row after it. Since the memory of the VMs is not interleaved, all the memory of a VM is further than one guard row from the memory of any other VM, so all VMs are safe from rowhammer attacks by any of the other VMs. With this solution, we only eat an 8KB overhead on a 1GB allocation, which is trivial.
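For scale, the per-VM overhead works out like this (same 8KB row and 1GB allocation assumed as above):

```python
# Guard-row overhead for the VM scenario: one 8KB row per 1GB allocation.
GB = 1 << 30
ROW = 8 << 10

overhead_pct = ROW / GB * 100
print(f"{overhead_pct:.6f}%")  # well under a thousandth of a percent
```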
Edit: As gcb0 mentions, these may be theoretical only: https://news.ycombinator.com/item?id=19828147
I think the jury is still out on AMD SME and Intel TME in terms of their respective performance hits.
Unfortunately, despite the many proposals in industry and academia to fix the RowHammer issue, RowHammer failures still seem to be observable in state-of-the-art DRAM devices in a variety of generations and standards (e.g., DDR4, ECC DRAM, LPDDR3 and LPDDR2 DRAM).
I think newly produced would mean DDR4.