Second: While the problem looks real enough, the tests used to demonstrate it are not realistic. Hammering the same rows with consecutive reads does not happen in the real world because of caches, which they get around via explicit flushes. I'd like to see more data on how bad the abuse needs to be to cause the problem. Will 2 reads in a row cause errors? 5? 10? 100? They never address how likely this is to be a real-world problem. I don't doubt that it is, but how often?
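For reference, the access pattern I'm talking about looks roughly like this. This is my own sketch, not code from the paper: alternately read two addresses mapped to the same DRAM bank, then flush both cache lines so the next reads actually go to DRAM and re-activate the rows. The `hammer` name, the addresses, and the iteration count are all illustrative (x86 only, since it uses `_mm_clflush`):

```c
#include <stdint.h>
#include <emmintrin.h>  /* _mm_clflush (SSE2) */

/* Alternately activate two DRAM rows, flushing the cache each
 * iteration so every read is served by DRAM rather than cache. */
static void hammer(volatile uint8_t *a, volatile uint8_t *b, long iters)
{
    for (long i = 0; i < iters; i++) {
        (void)*a;                       /* read -> activates row of a */
        (void)*b;                       /* read -> activates row of b */
        _mm_clflush((const void *)a);   /* evict so next read misses cache */
        _mm_clflush((const void *)b);
    }
}
```

Without the two `clflush` calls, nearly every iteration after the first would be a cache hit and the DRAM rows would barely be touched, which is why I say the test is unrealistic for ordinary workloads.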
Third: The DRAM makers don't currently provide enough information to reliably know which neighboring rows to refresh. I suppose the authors could have used their guesses to test on the FPGA rig, but given the rest of the paper, I'm reasonably satisfied that they have correctly identified the problem and that their solution would work.
I can see "exploit-resistant memory" becoming a selling point, maybe soon.