I've said it before and I'll say it again: Rowhammer is a functional correctness problem that the memory industry has been trying to hide ever since it was discovered. Authors of memory testing tools have been convinced to ignore it, with the rationalisation that "almost all memory would test as defective." AFAIK it's only after ~2009 that RH became a problem; anything older is not affected due to its lower density.
On the bright side, since the first Spectre disclosure, timer resolution in JS has been reduced so much that, the last time I read about it, the rate at which you can read memory was extremely low --- something like a few bytes per hour (keep in mind that where you read is random, and that the whole address space is enormous); and that was after the researchers had already done a ton of preparatory work to set everything up just right. As with anything timing/cache-based, simply running something else may already change the timing and invalidate some of those bytes being "read". Then comes the question of where in memory you're actually reading --- a 64-bit address space is huge --- and what significance those bytes have. It could be a private key, it could be random bits; the point I'm trying to make is that being able to read memory is just one of the requirements for an actual attack, and there are many more hurdles an attacker has to overcome.
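To make the "timing/cache-based" part concrete, here's a minimal sketch (my own illustration in C, not taken from any published PoC) of the measurement primitive these attacks are built on: flush a cache line, then time how long a reload takes. A fast reload means the line was in the cache, i.e. something accessed it. Native code gets a cycle-accurate timer for this; the whole reason the JS leak rate collapsed is that browsers took the equivalent high-resolution timers away.

    /* Flush+reload-style timing probe (illustrative sketch for x86-64,
     * gcc/clang). Any other activity on the machine perturbs these
     * numbers, which is why extracting data this way is so slow. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

    static uint64_t time_access(volatile uint8_t *p) {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);  /* serialising timestamp */
        (void)*p;                         /* the access being timed */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void) {
        static uint8_t probe[4096];
        _mm_clflush(probe);                 /* evict the line from the caches */
        uint64_t miss = time_access(probe); /* cold: DRAM-latency timing */
        uint64_t hit  = time_access(probe); /* warm: cache-latency timing */
        printf("miss: %llu cycles, hit: %llu cycles\n",
               (unsigned long long)miss, (unsigned long long)hit);
        return 0;
    }

The hit/miss gap is the entire "communication channel"; everything else in a Spectre attack is machinery to get a secret-dependent bit encoded into that cache state in the first place.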
IMHO it's something to worry about if you have JS on by default and are being subjected to a very targeted attack. If you have JS off by default and aren't someone of particularly high interest, there is much less to worry about.
How do you measure correctness? By reliability?
What trade-off would you make to improve density, throughput, or latency?
Why do you think that trade-off wasn't made?
Software correctness has depended on hardware correctness since the beginning of computing, so this isn't a "trade-off": memory that doesn't behave like memory should is simply defective. The $$$-chasing manufacturers would like you to think otherwise, however, and IMHO there's been a huge cover-up --- one the manufacturers certainly have every incentive to maintain; just imagine recalling every single DRAM chip produced in the last 10 years.
Security is only one important piece of the whole story. Imagine computations being subtly incorrect (that includes things like "IsUserRoot()" occasionally returning the wrong answer, but it affects correctness in general). That undermines everything computers are supposed to do. I only wish there were far more outrage about Rowhammer than about Spectre/Meltdown (I remember some people suggesting recalling all CPUs made in the last 2 decades...), because while timing side-channels are "only" a security concern that simply did not receive much attention until recently, Rowhammer and similar corruptions affect all computation. You don't have to be attacked; all it takes is for some computation to happen to have a "fragile" access pattern that flips bits somewhere, and weird undefined behaviour appears. I mentioned this in a comment I made 4 years ago about the same thing: https://news.ycombinator.com/item?id=9175734
But I don't think you satisfactorily answered some of my other questions. What trade-offs do you think were made which shouldn't have been made?
How would you improve density, throughput, or latency while maintaining 100% correctness in the face of unknown (or perhaps unintended) access patterns?
Pretty much all hardware has a given bit error rate (BER). The problem with Rowhammer is that the BER of affected memory can be changed by orders of magnitude by access patterns.
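For the curious, the access pattern in question looks roughly like this (an illustrative sketch; addr_a and addr_b are hypothetical pointers that would have to map to DRAM rows adjacent to the victim row, which requires knowing the physical address mapping). The flushes are what make it "fragile": every iteration is forced out to DRAM instead of being served from cache, and it's the repeated row activations, not any single access, that drive the bit error rate up:

    /* Classic double-sided hammering loop (sketch only). */
    #include <stdint.h>
    #include <x86intrin.h>  /* _mm_clflush */

    static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                       long iterations) {
        for (long i = 0; i < iterations; i++) {
            (void)*addr_a;                      /* activate row A */
            (void)*addr_b;                      /* activate row B */
            _mm_clflush((const void *)addr_a);  /* force the next read */
            _mm_clflush((const void *)addr_b);  /* to miss the cache   */
        }
    }

Nothing in that loop is privileged or even unusual-looking to the CPU, which is exactly the point: perfectly legal reads corrupt memory the program never touched.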
"An attacker therefore requires some kind of foothold in your machine in order to pull this off."
Right, but these days browsers are handing over footholds to anyone with a webserver! It used to be that you worried about pop-ups because they were annoying. Now it seems you need to worry about the modern-day equivalent because they could at least theoretically ruin your digital and perhaps real life.
It is bad for providers using non-dedicated cloud infrastructure too: some of these flaws allow breaking out of the hypervisor's protections, so an attacker can in theory read from other VMs on the same infrastructure, not just from other processes on your (virtual) server.
> I've been using AMD CPUs exclusively on the desktop
That doesn't protect you all that much. While this particular flaw seems, from current reports, to be Intel-specific, some of the past ones affected AMD and Arm designs as well, and maybe there are some AMD/Arm/other-specific attacks waiting to be found too.
I think this is called "security through minority" which is a special case of "security through obscurity" :)
And maybe there is no such flaw hidden in AMD, but the certain thing is that if bad people start to use this in JS, then all the people with Intel CPUs like mine will have to do something about it: disable JS and suffer a lot of breakage, or install some software patch that will decrease performance even further on these machines.
It will be interesting to see how this affects decisions about what hardware to buy for cloud computing; AMD and ARM could gain new customers.
Don't underestimate the ability of AMD's engineers to make similar mistakes.
The exploit specifics may be different, but it's foolish to think you are safe just because you don't know whether something bad is possible.
You can bet the people you should be afraid of know more than you.
Not only is code that isn't run (or written) the fastest code, it's also the most secure. And on the web, it's the most compatible. Funny, that.
As stated by other posters, though, these attacks are so resource-intensive that it would be cheaper to buy some browser 0-day or other exploit to get similar results; I don't see anyone wanting to practically use such a channel to get to someone. It's like painstakingly trying to convince someone that their wallet belongs to you, when you could hypothetically buy a gun and just tell them it does, with much less effort/time invested (and thus lower cost).
I'm not aware of any Spectre attacks in the wild that have been documented. (Of course it's possible they happened, but if they have at least nobody talked about it.)
Would the act of switching to a noticeably lower performance state when operating on sensitive data reveal information to other threads?
Huh? When a known vulnerability is packaged in a script-kiddie/shady marketeer friendly manner, they do get orders of magnitude worse...
Well, that’s lovely. Turning off JS just got more important.
The more who turn it off and complain about useless appification, the more likely the trend can change. There are already plenty of reasons besides security to turn it off.
That’s a pretty special case, however. On any other device, uMatrix + NoScript works great.
What needs to happen for engineers to realize that process isolation is something that needs to be taken seriously at the lowest possible level (hardware + OS), rather than through some magic abstraction layer (VMs, hypervisors, containers, etc.)?
But I suspect you aren't referring to encryption but instead to cryptocurrencies?
Spectre also caused performance drops; could Intel have introduced these flaws intentionally, just to boost performance?
AFAIK Intel, ARM, and AMD have all shipped chips that are vulnerable to a handful of speculative-execution attacks. Microcode can work around these issues by doing things like changing the micro-ops that instructions decode to, or by disabling processor subsystems --- though what they can or can't disable is something I suspect only the vendor knows. It's common for vendors to include many of what are referred to as "chicken bits": essentially panic switches to disable a feature or subsystem if it turns out to be broken.
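As a concrete (hedged) example of what such a switch looks like from software: on Linux, the msr driver exposes model-specific registers as a device file, and IA32_SPEC_CTRL (MSR 0x48, added by post-Spectre microcode updates) carries the IBRS/STIBP/SSBD mitigation bits. Whether any given bit is implemented depends on the CPU and microcode, so treat this as illustrative, not as a hardening tool:

    /* Read IA32_SPEC_CTRL via the Linux msr driver.
     * Requires root and `modprobe msr`. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_SPEC_CTRL 0x48  /* bit 0: IBRS, bit 1: STIBP, bit 2: SSBD */

    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
        uint64_t val;
        /* the msr driver uses the file offset as the MSR number */
        if (pread(fd, &val, sizeof(val), IA32_SPEC_CTRL) != sizeof(val)) {
            perror("rdmsr");
            close(fd);
            return 1;
        }
        printf("IA32_SPEC_CTRL = 0x%llx\n", (unsigned long long)val);
        close(fd);
        return 0;
    }

The true chicken bits for internal subsystems are usually undocumented, vendor-private MSRs; this one just happens to be architecturally documented because OSes need it for the mitigations.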
Some ARM CPUs, and IBM's POWER and mainframe CPUs, have Meltdown bugs. All four of these vendors of high-performance out-of-order speculative-execution CPUs, including AMD, have Spectre bugs.
Makes me wonder when we will start seeing physical two-tier architectures: a complete but slower, safety-centric system for handling sensitive data at both the OS and application level, and a souped-up "you have been warned about potential leaks" out-of-order-execution system acting as a general-purpose accelerator add-on.
The difference to existing security enclave concepts would be that the less safe side would be the add-on.
Since no CPU expert outside Intel who looked at those designs thought these things were a problem for decades, that's a good reason to assume Intel's engineers couldn't see it either.
Not all of these vulnerabilities are equally serious in nature.