Intel CPUs afflicted with simple data-spewing spec-exec vulnerability (theregister.co.uk)
162 points by Nux 21 days ago | 54 comments



> will make existing Rowhammer and cache attacks easier

I've said it before and I'll say it again: Rowhammer is a functional correctness problem that the memory industry has been trying to hide ever since it was discovered. Authors of memory testing tools have been convinced to ignore it, with the rationalisation that "almost all memory would test as defective." AFAIK it's only after ~2009 that RH became a problem; anything older is not affected due to its lower density.

On the bright side, timer resolution in JS has been reduced so much since the first Spectre disclosures that, the last time I read about it, the rate at which you could read memory (keep in mind that where you read is random, and how large the whole address space is) was extremely low --- something like a few bytes per hour, and that was after the researchers had already done a ton of preparatory work to set everything up just right. (As with anything timing/cache-based, simply running something else may already change the timing and possibly invalidate some of those bytes being "read". Then comes the question of where in memory you're actually reading --- a 64-bit address space is huge --- and what significance those bytes have. It could be a private key, it could be random bits; the point I'm trying to make is that being able to read memory is just one of the requirements for an actual attack, and there are many more hurdles an attacker has to overcome.)
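
To give a concrete sense of the mechanism (and of why timer resolution matters so much), here is a minimal Flush+Reload-style timing probe in C. It is only an illustrative sketch of the measurement primitive, not an attack: the JS variants have to approximate rdtsc with much coarser timers, which is exactly why the leak rate collapsed.

    /* Minimal Flush+Reload-style timing probe (illustrative sketch only;
     * x86-64, GCC/Clang). A real Spectre attack wraps a probe like this
     * around a speculative gadget; here we only measure hit vs. miss. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

    static uint64_t time_access(volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);   /* serialising timestamp read */
        (void)*p;                       /* load the probed cache line */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    int main(void)
    {
        static uint8_t probe[64];
        volatile uint8_t *line = &probe[0];

        (void)*line;   /* warm the line: should time as a cache hit */
        printf("cached : %llu cycles\n", (unsigned long long)time_access(line));

        _mm_clflush((const void *)line);   /* evict it to DRAM */
        printf("flushed: %llu cycles\n", (unsigned long long)time_access(line));
        return 0;
    }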

IMHO it's something to worry about if you have JS on by default and are being subjected to a very targeted attack. If you have JS off by default and aren't someone of particularly high interest, there is much less to worry about.


> I've said it before and I'll say it again: Rowhammer is a functional correctness problem that the memory industry has been trying to hide ever since it was discovered.

How do you measure correctness? By reliability?

What trade-off would you make to improve density, throughput, or latency?

Why do you think that trade-off wasn't made?


If memory is operating correctly, then the value last written to any location should always be the one which is read back. Any deviation from that means there is something wrong, in the same way that a calculator which intermittently produces 1+1=3 would be considered broken.

The fact that the correctness of software, since the beginning of computing, has depended on the correctness of the hardware means that it's not a "trade-off": memory that doesn't behave like memory should is simply defective. The $$$-chasing manufacturers would like you to think otherwise, however, and IMHO there's been a huge cover-up --- one the manufacturers are certainly working hard to maintain; just imagine recalling every single DRAM module produced in the last 10 years.

Security is only one important piece of the whole story. Imagine computations being subtly incorrect (that includes things like "IsUserRoot()" occasionally returning the wrong answer, but it affects correctness in general). That undermines everything computers are supposed to do. I only wish there were far more outrage about Rowhammer than about Spectre/Meltdown (I remember some people suggesting recalling all CPUs made in the last 2 decades...), because while timing side-channels are "only" a security concern that simply did not receive much attention until recently, Rowhammer and similar corruptions affect all computation. You don't have to be attacked; all that needs to happen is for some computation to have a "fragile" access pattern that flips bits somewhere, and weird undefined behaviour appears. I mentioned this in a comment I made 4 years ago about the same thing: https://news.ycombinator.com/item?id=9175734


I 100% agree that if memory is operating correctly, then you will read back exactly what was last written. But of course there are exceptions to that rule: what if you're reading volatile RAM that was just powered up? What if a cosmic ray literally flipped a bit? These are no-brainers to me.

But I don't think you satisfactorily answered some of my other questions. What trade-offs do you think were made which shouldn't have been made?

How would you improve density, throughput, or latency while maintaining 100% correctness in the face of unknown (or perhaps unintended) access patterns?


> If memory is operating correctly, then the value last written to any location should always be the one which is read back. Any deviation from that means there is something wrong, in the same way that a calculator which intermittently produces 1+1=3 would be considered broken.

Pretty much all hardware has a given bit error rate. The problem with rowhammer is that the BER of affected memory can be changed by orders of magnitude by access patterns.
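
For the curious, the problematic access pattern looks roughly like the loop below. This is a hedged sketch of the classic hammering primitive only; the hard part, which it deliberately skips, is picking addr_a and addr_b (hypothetical names) so they map to different rows of the same DRAM bank, which is exactly the physical-address knowledge that SPOILER-style leaks make easier to obtain.

    /* Rowhammer-style access pattern (illustrative sketch, not a working
     * exploit). addr_a and addr_b are placeholders: a real test must map
     * them to two different rows in the same DRAM bank. */
    #include <stdint.h>
    #include <emmintrin.h>   /* _mm_clflush */

    static void hammer(volatile uint8_t *addr_a, volatile uint8_t *addr_b,
                       long reps)
    {
        for (long i = 0; i < reps; i++) {
            (void)*addr_a;                      /* activate row A */
            (void)*addr_b;                      /* activate row B */
            _mm_clflush((const void *)addr_a);  /* force the next reads */
            _mm_clflush((const void *)addr_b);  /* to go to DRAM again  */
        }
        /* A victim row adjacent to the hammered rows is then checked
         * for flipped bits. */
    }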


We all know cosmic rays and other stuff can flip bits. So your reads are only correct 99.99..9% of the time. Deciding how many nines you want is a trade-off. Welcome to the real world.


Conflating a random natural event, whose rate is literally orders of magnitude lower, with specific access patterns that can rapidly cause errors is pure disinformation.


Yeah, I don't know what they mean by "functional correctness" in this situation.

https://en.wikipedia.org/wiki/Correctness_(computer_science)


I've been using AMD CPUs exclusively on the desktop for the last 3-4 builds I've done and it sure feels nice all of a sudden. I recognize it's luck rather than skill of course, but I'll take what I can get because this one is a doozie!

"An attacker therefore requires some kind of foothold in your machine in order to pull this off."

Right, but these days browsers are handing over footholds to anyone with a webserver! It used to be that you worried about pop-ups because they were annoying. Now it seems you need to worry about the modern-day equivalent because they could at least theoretically ruin your digital and perhaps real life.


> these days browsers are handing over footholds to anyone with a webserver

It is bad for providers using non-dedicated cloud infrastructure too: some of these flaws allow breaking out of the hypervisor's protections, so an attacker can in theory read from other VMs on the same infrastructure, not just from other processes on your (virtual) server.

> I've been using AMD CPUs exclusively on the desktop

That doesn't protect you all that much. While this particular flaw seems from current reports to be Intel specific, some of the past ones affected AMD and Arm designs also, and maybe there are some AMD/Arm/other specific attacks waiting to be found too.


> That doesn't protect you all that much. While this particular flaw seems from current reports to be Intel specific, some of the past ones affected AMD and Arm designs also, and maybe there are some AMD/Arm/other specific attacks waiting to be found too.

I think this is called "security through minority" which is a special case of "security through obscurity" :)


Decreasing the likelihood of being compromised is still one layer of security. It's only bad if you depend on it.


That was the case with the whole "MacOS doesn't have viruses" thing.


Don't forget about Meltdown.


Only AMD has avoided Meltdown; ARM (in some CPUs) and IBM (in both POWER and mainframe) have Meltdown bugs.


It does seem to be a(nother) good argument for a heterogeneous CPU marketplace though.


> and maybe there are some AMD/Arm/other specific attacks waiting to be found too.

And maybe there is no such flaw hidden in AMD. The certain thing is that if bad people start to use this in JS, then all the people who have Intel CPUs, like I do, will have to do something about it: either disable JS and suffer a lot of breakage, or install some software patch that will decrease performance even more on these machines.

It will be interesting to see how this affects decisions about what hardware to buy for cloud computing; AMD and ARM could gain new customers.


> I've been using AMD CPUs exclusively on the desktop for the last 3-4 builds I've done and it sure feels nice all of a sudden

Don't underestimate the ability of AMD's engineers to make similar mistakes.

The exploit specifics may be different, but it's foolish to think you are safe just because you don't know whether something bad is possible.

You can bet the people you should be afraid of know more than you.


> Right, but these days browsers are handing over footholds to anyone with a webserver!

Not only is code that isn't run (or written) the fastest code, it's also the most secure. And on the web, it's the most compatible. Funny, that.


AMD is subject to the same issues regarding speculative execution. It's just not as much in the spotlight because nobody has published a PoC for it yet. Arguably, with their new neural-network-powered branch predictor it might be more difficult to fool, but the underlying issue with the architectural model still exists, which is that you can learn about micro-architectural state via architectural behaviour, something that should never be possible within such an abstraction. These timers and behaviours also exist on AMD by design, and thus will also have similar holes.
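
The canonical illustration of that abstraction leak is the Spectre-v1 bounds-check-bypass pattern, roughly as below (a simplified sketch; the array names and sizes are made up for illustration):

    /* Spectre-v1-style gadget (simplified sketch). If the branch is
     * mispredicted, array1[x] is read out of bounds speculatively and its
     * value selects which line of probe[] gets cached. The architectural
     * result is thrown away, but the cache footprint is not, and a timing
     * probe can recover it. */
    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t probe[256 * 4096];   /* one cache line per possible byte value */
    volatile uint8_t sink;

    void victim(size_t x)
    {
        if (x < array1_size) {                   /* trained to predict "taken" */
            uint8_t secret = array1[x];          /* speculative OOB read */
            sink = probe[(size_t)secret * 4096]; /* leaves a cache footprint */
        }
    }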

As stated by other posters though, these attacks are so resource-intensive (in the sense that it would be cheaper to buy some browser 0-day or other exploit to get similar results) that I don't see anyone wanting to practically use such a channel to get to someone. That's like trying to convince someone their wallet belongs to you, when you could hypothetically buy a gun and tell them it does with much less effort and time invested (and thus cost).


Spectre attacks have so far only been observed at the nation-state level in the wild. If you're truly paranoid about being in the first wave of victims, disable JavaScript and other active content on non-whitelisted pages. At some point processors are going to have to drop into a non-speculative security mode when dealing with vulnerable data like passwords and handshakes, and pop back out of it when they're done.
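
A limited version of that already exists today in software: compilers and kernels place speculation barriers around sensitive accesses. A rough x86 sketch (the function and table names are hypothetical):

    /* Fencing a bounds check so the load cannot execute speculatively
     * past the branch (x86 sketch; lookup()/table are made-up names). */
    #include <stddef.h>
    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_lfence */

    uint8_t table[256];

    uint8_t lookup(size_t i, size_t limit)
    {
        if (i >= limit)
            return 0;
        _mm_lfence();        /* speculation barrier: resolve the check first */
        return table[i];
    }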


> Spectre attacks have so far only been observed at the nation-state level in the wild.

Citation needed.

I'm not aware of any Spectre attacks in the wild that have been documented. (Of course it's possible they happened, but if they have at least nobody talked about it.)


That's not enough. You have to disable speculative execution on untrusted code, not on sensitive data. There is no solution on shared infrastructure because the untrusted code is another VM's trusted code.


He was talking about desktop computing. For the server side, of course you need dedicated machines in your own locations. You basically have no security against targeted attacks in shared hosting because you don't even have access to cameras or the ability to vet the personnel with access to your hardware.


> At some point processors are going to have to drop into a non-speculative security mode when dealing with vulnerable data like passwords and handshakes, and pop back out of it when they're done.

Would the act of switching to a noticeably lower performance state when operating on sensitive data reveal information to other threads?


This is bad advice, attacks never get worse.


> attacks never get worse

Huh? When a known vulnerability is packaged in a script-kiddie/shady marketeer friendly manner, they do get orders of magnitude worse...


"worse" in the sense of less effective


Aha!


use a second process and IPC to communicate, done
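
In its simplest form that split looks something like the POSIX sketch below (illustrative only): the secret lives in a child process, and the parent, where untrusted code might run, only ever sees the derived result that comes back over the pipe.

    /* Process isolation sketch (POSIX, illustrative only): keep the secret
     * in a separate process and ship only derived results over a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1)
            return 1;

        if (fork() == 0) {                /* child: holds the secret */
            close(fd[0]);
            const char secret[] = "hunter2";
            char result[32];
            /* ... sensitive computation happens here ... */
            snprintf(result, sizeof result, "len=%zu", strlen(secret));
            write(fd[1], result, strlen(result) + 1);
            _exit(0);
        }

        close(fd[1]);                     /* parent: never maps the secret */
        char buf[32] = {0};
        read(fd[0], buf, sizeof buf - 1);
        printf("child says: %s\n", buf);
        return 0;
    }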


> This security shortcoming can be potentially exploited by malicious JavaScript within a web browser tab,

Well, that’s lovely. Turning off JS just got more important.


I have basically zero sympathy for people who enable JS in their browsers and then complain about their privacy being violated. I mean, how can someone expect privacy when doing the exact opposite of best practices?


Some people need most of the internet to work correctly. Like it or not, JS has become a core requirement for modern web functionality and that is unlikely to change any time soon.


I have been browsing with JS off by default for over a decade and a half now. The list of domains I allow JS on has accumulated fewer than 100 entries. I do not use "appsites" much, and the rest of the document-centric Web is perfectly usable without any scripting.

The more who turn it off and complain about useless appification, the more likely the trend can change. There are already plenty of reasons besides security to turn it off.


The only device on which it is truly impractical to disable JS is my company-owned iPhone. It does not allow me to install apps from the App Store, so while I can flip the switch to disable it in Settings>Safari>Advanced>JavaScript, doing so is inconvenient since I cannot whitelist the few domains that I’m okay running JS on.

That’s a pretty special case, however. On any other device, uMatrix + NoScript works great.


The title and opening paragraph are misleading; this only leaks physical page mapping information, not data. So this can allow ASLR to be bypassed, but isn't a vulnerability in the same class as Spectre.


How big of a fuckup do we need to see to realize that running megabytes of arbitrary code from a dozen different domains for every website we visit is a bad idea?

What needs to happen for engineers to realize that process isolation is something that needs to be taken seriously at the lowest possible level (hardware + OS), rather than through some magic abstraction layer (VMs, hypervisors, containers, etc.)?


So, with surf, it's easy to browse without JS (set the default to False in config.h). It's way better than the default 'on'. If something does not work, I either hit ctrl-shift-s to enable JS for that WebKit process, or (more often) just axe the window.

http://surf.suckless.org/
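
For reference, the knob in question looks roughly like this (a hedged sketch; the exact symbol differs between surf versions, so check your own config.def.h: older WebKit1-era releases use a Bool, while the WebKit2-based 2.x releases use the JavaScript entry in the defconfig table):

    /* config.h sketch, illustrative only; symbol names vary by surf version */

    /* older (WebKit1) surf: */
    static Bool enablescripts = FALSE;          /* JS off by default */

    /* surf 2.x (WebKit2), inside the defconfig[] table:
     *   [JavaScript] = { { .i = 0 }, },        // 0 = disabled by default
     */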


Surf is great if you want one very slim browser for reading and some other fat browser for webapps. I've been really impressed how much more advanced surf is compared to dillo.


Browsing without JS is a major pain if you're enabling it for only certain domains. If you don't do that, you don't add much security anyway.


It's per-process. Under most circumstances, each surf window/tab is its own WebKit process. So for example, I might decide to enable it for a banking session (although that's a perfect example of somewhere JS should not be necessary).


I'm beginning to think crypto operations should no longer be implemented in code on general purpose cpus.


I’m not an expert in this, but considering the purpose of these exploits is to gain knowledge of memory layout in order to then execute exploits against DRAM, simply securing your CPU isn’t going to be enough.


Exactly. So don't keep the extremely valuable stuff in RAM, and reduce the attack surface.


Your comment is unclear and I'm not sure that the replies understood your intent. Implementing encryption operations in special hardware doesn't help solve SPOILER or Rowhammer/Spectre, though perhaps Secure Enclave-style protection of keys mitigates the impact somewhat. AES-NI makes things faster, but it just doesn't sound like it's on-topic here.

But I suspect you aren't referring to encryption but instead to cryptocurrencies?


With Intel AES-NI, it’s implemented in hardware! As if we can trust that, considering we can’t see schematics or anything. Someone should come up with an external crypto processing unit that’s attached over USB and offers a standardized interface that operating systems can use.


That's actually what coprocessors over PCIe were supposed to enable, but they were just pricey and general-purpose CPUs became faster and faster.


"..it's not something you can patch easily with a microcode without losing tremendous performance"..

The Spectre mitigations also caused performance drops; could Intel have introduced these flaws intentionally, just to boost performance?


Speculative execution in general is a sizable performance boost that brings in the risk of all sorts of attacks like this. You could of course try to implement it all in a safe way - and of course not every form of speculative execution in existing processors is unsafe - but in practice lots of vendors messed it up and Intel seems to have just produced the least-safe speculative execution implementation. I'd argue that speculative execution's risks were not particularly scary when it became a common technique (a very long time ago!) because things like shared hosting on anonymous machines with hypervisors or multiprocess browser sandboxes storing important data were nearly impossible to predict.

AFAIK Intel, ARM and AMD all have shipped chips that are vulnerable to a handful of speculative execution attacks. Microcode can work around these issues by doing things like changing the micro-ops that instructions decode to, or disabling processor subsystems - though what they can or can't disable is something I suspect only the vendor knows; it's common for vendors to include many of what are referred to as 'chicken bits', which are essentially panic switches to disable a feature or subsystem if it turns out to be broken.


> AFAIK Intel, ARM and AMD all have shipped chips that are vulnerable to a handful of speculative execution attacks.

Some ARM CPUs, and IBM's POWER and mainframe CPUs, have Meltdown bugs. All four of these vendors of high-performance, out-of-order, speculative-execution CPUs, including AMD, have Spectre bugs.


> Speculative execution in general is a sizable performance boost that brings in the risk of all sorts of attacks like this.

Makes me wonder when we will start seeing physical two-tier architectures: a complete but slower, safety-centric system for handling sensitive data at both the OS and application level, plus a souped-up "you have been warned about potential leaks" out-of-order-execution system acting as a general-purpose accelerator add-on.

The difference from existing secure-enclave concepts would be that the less safe side would be the add-on.


Thanks for a really good explanation, really interesting to know and basically answered my question!


> The Spectre mitigations also caused performance drops; could Intel have introduced these flaws intentionally, just to boost performance?

Since no non-Intel CPU expert on Earth looking at those designs thought these things were a problem for decades, that's a good reason to assume Intel's engineers couldn't see it either.


This overplays things a bit; Intel vulnerabilities like L1TF and Meltdown are directly related to optimization choices they made that vendors like AMD did not.

Not all of these vulnerabilities are equally serious in nature.



