Hacker News
CPU hardware vulnerable to side-channel attacks (cert.org)
139 points by contrarian_ 6 months ago | 82 comments



Side-channel attacks are notoriously difficult to prevent:

https://en.wikipedia.org/wiki/Tempest_(codename)

I suspect these sorts of attacks will exist as long as people try to share running untrusted code on the same hardware.


It's not just untrusted code, it's all code.

"Trusted code" is one exploit away from being untrusted code. And solving that requires not accepting untrusted input, which makes most general purpose computing useless.

Has there been any research into solving this problem at the hardware level? I'm imagining something like having hundreds or thousands of distinct processors on a PC all isolated each running only one process.

It sounds extreme, but over time I've basically learned to treat "optimization" as a synonym for "introduces side channel attacks", and without something that can protect against a large majority of these kinds of exploits, computers are only going to get less secure.


Even with distinct processors, there would still be something in charge of who gets to use what hardware resource. I suspect that would be where the attack vector would shift.


At least some of those resources can only deal with encrypted data to reduce the amount of leakage to acceptable levels.

Or maybe we can contain the problem by having a kernel that manages access to all resources, but keeps individual processes on nearly separate hardware.

As you can probably tell, I have no idea what I'm talking about here, but the more of these big side channel attacks that come out, the more I'm feeling that there is no way to securely share a machine among multiple processes without just giving up and letting them all have access to one another.


I think that it's been proven that any computational system with shared resources is susceptible to side channel attacks. The only generic way to solve them is to share each resource equally across all its consumers (e.g. each thread gets 1/num_of_threads cpu time).
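The contention idea above can be made concrete with a toy prime+probe simulation. Everything here (the 4-slot direct-mapped "cache", the function names) is invented for illustration, not any real attack code:

```python
# Toy prime+probe simulation: a 4-slot direct-mapped "cache" shared by two
# processes. The attacker fills every slot, the victim touches one slot that
# depends on a secret, and the attacker learns the secret by checking which
# of its own entries got evicted.

CACHE_SLOTS = 4

def run_victim(cache, secret):
    # The victim's access pattern depends on the secret value.
    cache[secret % CACHE_SLOTS] = "victim"

def attack(secret):
    cache = {}
    # Prime: the attacker occupies every slot.
    for slot in range(CACHE_SLOTS):
        cache[slot] = "attacker"
    run_victim(cache, secret)
    # Probe: the evicted slot reveals which line the victim touched.
    evicted = [s for s in range(CACHE_SLOTS) if cache[s] != "attacker"]
    return evicted[0]

print(attack(3))  # prints 3: recovered without ever reading the secret
```

The equal-sharing fix described above amounts to giving each process its own slots, so the victim's accesses can never evict the attacker's entries.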


I wish desktop chips would trade a little performance for more simplicity and robustness. Even at half-speed, they are still pretty damn fast, and a lot of the bleeding-edge stuff was pushed over to the GPU a long time ago.

The economics are also a little off. If this were something like ARM64, the eventual replacement chip would be a few bucks instead of a few hundred. In that situation, I wouldn't get too upset about it. It would be like, "Oh well, I guess I have an excuse to upgrade my CPU now."


Hear, hear! In them olden days of yore, writing x86 assembly, I had a decent mental model of what my code was doing when.

I no longer do.


"These days I spend more time optimizing code for ARM chips than for Intel chips. The real reason for this isn't any sort of assessment of current or future importance. The real reason is that ARM publishes the detailed pipeline documentation that I need, so squeezing out every last cycle for ARM is fun, while Intel hides the pipeline documentation, so squeezing out every last cycle for Intel is painful."

https://blog.cr.yp.to/20140517-insns.html


On one hand, great news for Intel: everyone needs a new chip. On the other, terrible news for Intel: AMD's offerings are pretty competitive these days. It'll be interesting to see how this plays out.


Meltdown: affects only Intel CPUs, is the 'biggy' short-term, but patches to mitigate it are landing as we speak and patches to eliminate it using trampolines are described in the paper. So no need to replace your Intel CPUs unless you can't live with the performance drop, but a good reason for buying AMD going forward.

Spectre: hits AMD too. Hits everyone today. Hope for a new arch (bias: I am a Mill CPU dev ;) )


> bias: I am a Mill CPU dev

This has been in the back of my mind during all of this.

Can you [0] outline to what extent Mill would or would not be affected by both Spectre and Meltdown?

[0] or is anyone from Mill planning to


We are immune. Give us a day or two to do a proper assessment and write it up.


Very much looking forward to this!


personally I'm thinking about buying a GreenArrays GA144 Forth CPU


Where and when can I buy a Mill CPU? This has been name-dropped in a few threads but best sources say it's been "in development" since 2003 which sounds like vaporware to me.


Mill feels like the GNU Hurd of CPUs.


Hurd on Mill is the Hurd on Mill of... oh wait...


It only exists on paper.


Is Mill ever going to come out with real hardware that "normal" people can use? Or is it just an R&D project, and focused on selling the IP?


Can't it be fixed with a microcode update?


The low-hanging fruit for mitigating Spectre would be to flush the cache lines touched by falsely speculated branch executions. Of course, there could be interactions with other, genuinely executed code hitting the same cache lines.

If the microcode had instructions for cache-line state manipulation, it would be possible to emit µOps that flush the cache lines touched in the abandoned branch when merging back into the true branch.

However, this mitigation is only possible if there are µOps for doing that and the instruction decoder is powerful enough for this kind of thing. Eventually we'll likely see silicon in which cache lines get additional status bits tracking which OOE engine fetched the data, so that after branch merging all cache lines not matching the taken branch can be flushed.


I'm not sure that'd help - as someone else observed in another thread, loading anything into a cache requires evicting something else, and that eviction can be measured. You could do something like maintaining a full duplicate of the cache and tracking deltas, but I feel like that's back in "rearchitect the CPU from the ground up" territory.
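The parent's objection can be demonstrated with a small simulation (the direct-mapped cache model is invented for the example): even if the mis-speculated line is flushed afterwards, the line it evicted on the way in stays evicted, and that absence is exactly what an attacker measures.

```python
# Direct-mapped toy cache with 4 sets: addresses sharing (addr % 4) collide.
# Loading the speculative line evicts the attacker's line; flushing the
# speculative line afterwards does not bring the evicted line back.

SETS = 4

def load(cache, addr):
    cache[addr % SETS] = addr        # evicts whatever shared the set

def flush(cache, addr):
    if cache.get(addr % SETS) == addr:
        del cache[addr % SETS]

def is_cached(cache, addr):
    return cache.get(addr % SETS) == addr

cache = {}
load(cache, 0)     # attacker primes address 0 (maps to set 0)
load(cache, 4)     # speculative load hits the same set, evicting 0
flush(cache, 4)    # proposed mitigation: flush the speculative line
print(is_cached(cache, 0))   # prints False: the eviction itself leaked
```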


That is why there is such a big fuss about it.


maybe future processors will have more microcode for other subsystems...


nope.


Why not? What can?


It's very likely that, because this is a performance-driven optimization exposing the security hole, it lives at a layer beneath the microcode: in how the instruction pipeline and branch prediction work to begin with. The wiki page on instruction pipelining is pretty good: https://en.wikipedia.org/wiki/Instruction_pipelining. There were already problems in multicore systems with false sharing and other cache issues.

As far as my understanding goes, microcode is only useful for the logical wiring of the CPU's various subsystems to perform the requested operation; it is not generally 'inside' the actual operations (e.g. at a certain point, "shift left one bit" turns into currents through gates, and there's no microcode at that level).

At a higher level of abstraction: on a particular motherboard you might be able to flash the BIOS firmware, and if the memory controller has a known issue you may be able to use the BIOS to access it differently. But for a certain class of issues, if you can't change how you signal the memory controller to work around the problem, and the memory controller is not a replaceable part, you might be screwed and need a new motherboard that just doesn't have the issue. That seems to be closer to what we have here.


Do Intel even sell any non-vulnerable CPUs?


First couple of generations of Atoms, before they went out-of-order.


Maybe they have some surplus 8085s from a trade-in program?

[I remember hearing about an engineer for railway signaling systems buying up all the 8085s he could get to use in new systems, because that was the last CPU where he felt confident he understood all the bugs. Alas, I can't find a reference any longer]


I'm guessing Itanium is not vulnerable.


Maybe the CPU is the wrong place to perform sophisticated optimizations. IMHO, the CPU should be far more simple and the intelligence should be in the compiler.


This has been tried several times and it just doesn't look like it works in practice. Delay slots, Itanium, etc.

The CPU is a bit like a JIT in that it can see how the program is really running and optimise for those conditions, which the AOT compiler cannot. Your AOT compiler may not know you're going to take a branch more times than not, but your CPU may be able to work that out at runtime. And then tomorrow you may never take the same branch and it'll work that out as well for the same code.


The compiler only knows about the program and maybe statistical information about the data.

The CPU knows about the actual data currently being processed.

Therefore, the CPU can do more by using branch prediction and speculative execution. It is more expensive in terms of energy per computation but so far it was worth it. The CPU can also optimize old code on-the-fly.


Itanium tried this and Intel got burnt by the experience. Not saying you’re wrong, just saying Intel won’t believe you for a while yet.


Did Itanium really fail because of the technology or because of x86's momentum and lock-in effects?

AFAIK Itanium has explicit software control of speculative loads.


As far as I remember, both. It executed x86 code only slowly, and the compilers didn't deliver what was expected. It was a gamble on immature technology. They didn't fail completely, though; HP sold systems using Itanium processors for a long time.


Other 'the compiler will sort it out' approaches have also failed though, so it seems likely.


The dex2oat conversion that is performed when an application is installed on an Android device generates machine code that's very specific to its CPU. High-level optimization is performed in previous passes. AFAIK this approach is successful.


...but those CPUs are still speculative out-of-order super-scalars aren't they?

We're talking about removing those features, on which our entire computing ecosystem is built, and expecting the compiler to be able to pipeline every execution unit individually.

dex2oat is where the work could be done, yes, but as a field we just don't appear to know how to fill processor pipelines like that, and nobody seems able to figure it out despite several attempts.


>...but those CPUs are still speculative out-of-order super-scalars aren't they?

Not universally, even in current-generation devices e.g. the Cortex A53 and A55 are in-order and were explicitly mentioned as safe by ARM.

Snapdragon 625/626 is an octa-core Cortex A53 at 2.2GHz in a lot of the current mid-range devices from almost every major phone manufacturer (Xiaomi, Samsung, Moto, Huawei, Asus, Lenovo, ...)


Even those don’t expose more than delay slots do they? We have trouble even using those effectively.


There is a third way: have the CPU JIT the code to a very different architecture according to completely programmable firmware, just like Transmeta did.

I don't know whether the Transmeta CPUs are vulnerable to Spectre and Meltdown, but fixes to both would be one firmware update away - and most probably with little to no performance impact.


Wasn't that the goal of Itanium which failed?


You could have invented RISC 40 years ago!


I can't figure out whether this means that OS-level mitigation of the problem doesn't prevent all avenues for exploitation. The headline implies it (and probably made it to the front page based on that implication) but TFA doesn't make it clear whether that's true.


Any branch on attacker-controlled data can be speculatively bypassed. You would have to at the very least recompile all applications to attempt to mitigate the two Spectre variants discussed so far.

Unless there is some way to turn off speculation entirely, but that would hurt performance badly.
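One of the recompilation-style mitigations being discussed (the Linux kernel's `array_index_nospec` follows this idea) is a branchless bounds clip: instead of guarding the access with a branch that can be mis-speculated, a mask forces any out-of-bounds index to a safe value. A simplified sketch, using Python's arbitrary-precision arithmetic to mimic 64-bit signed shifts:

```python
# Branchless index clamp in the spirit of the kernel's array_index_nospec
# (simplified; real code works in fixed-width unsigned integers). When
# index < size the mask is all ones and the index passes through unchanged;
# otherwise the mask is zero and the access is forced to element 0, so even
# a mis-speculated out-of-bounds index can't read attacker-chosen memory.

def clamped_load(array, index):
    size = len(array)
    mask = (index - size) >> 63   # -1 (all ones) if index < size, else 0
    return array[index & mask]

data = [10, 20, 30, 40]
print(clamped_load(data, 2))      # in bounds: prints 30
print(clamped_load(data, 1000))   # out of bounds: forced to data[0], prints 10
```

The point is that there is no data-dependent branch for the speculative machinery to bypass, which is why it has to be applied at every vulnerable site via recompilation.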


So why are the OSes bothering to patch anything?


There are (at least) two separate things going on. Meltdown, the flaw affecting Intel and some ARM CPUs, is very easy to exploit and is the one being patched by OS vendors.

Spectre is a whole other can of worms, on the one hand it's more tricky to exploit, on the other hand there might not be an easy fix and people are speculating (no pun intended) that it will have to be dealt with in hardware.


Spectre works in javascript due to how aggressive the JIT is.

Chrome and Firefox are already working on mitigations; as far as I'm aware, you cannot mount the exploit if the JIT-generated code can't obtain precise enough timing.

So that solves the problem for most people, but all other environments that allow execution of untrusted code also need to be updated.
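The timer-degradation idea mentioned above amounts to rounding every timestamp to a coarse bucket, so a cache hit and a cache miss, tens of nanoseconds apart, become indistinguishable. A sketch with illustrative numbers (the 20 µs granularity matches what browsers announced, but the timestamps are made up):

```python
# Coarsen a nanosecond timestamp to 20-microsecond granularity. A ~60 ns
# difference between a cache hit and a cache miss disappears once both
# raw timestamps land in the same 20,000 ns bucket.

GRANULARITY_NS = 20_000

def coarse(ts_ns):
    return ts_ns - (ts_ns % GRANULARITY_NS)

hit_ns, miss_ns = 100_040, 100_100   # illustrative raw measurements
print(coarse(hit_ns) == coarse(miss_ns))  # prints True: the signal is gone
```

The catch, as later discussion of SharedArrayBuffer-based clocks showed, is that an attacker can sometimes build a finer timer out of other primitives, so this weakens rather than eliminates the channel.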


the pun is apparently at least part of where the name comes from, as it exploits speculative execution


In the old days this would be no biggie as every non-trivial site would have at least half a dozen different kinds of Unix workstations anyway. SPARC, MIPS, PA-RISC, AXP, m68k, POWER...


In the good old days everybody just used HTTP, FTP, telnet and NIS. Old archs also had other bugs we forgot about / never discovered.


The point being: monocultures are fragile.


Spectre impacts current iterations of x86, ARM and POWER. It most likely impacts other archs that have variants of CPUs with out-of-order execution as well.


Spectre is harder to exploit - it’s the Intel-specific one that’s the biggie.


You think it'd be less work having to coordinate with even more vendors than listed in the advisory?


so i bought my cpu a few months ago. will intel replace it for free?


After 15 years of class action appeals, you’ll get a voucher for $15 off of a retail-packaged processor.


Exactly the right amount of cynicism for the situation. Thank you!


Meanwhile the lawyers and original plaintiffs will be able to buy their own sex islands...


... with sex dolls... with Intel Inside stickers on them?!

;)


Doubtful - Intel's press release is carefully worded to suggest that the chip is working as designed, and that others are to blame if it is exploited. I'm sure they see the lawsuits on the horizon.


The wording "working as designed" makes about as much sense here as a lock maker claiming a lock was designed to be easily pickable.

Intel's outside lawyers are going to have a great 2018. And '19 and '20. And AMD's. The plaintiffs' lawyers too.

Don't forget expert witnesses. Damn, for anyone with technical expertise in the area now's the time to polish up the resume and start shopping it to the large firms on both sides.


not that this comparison works well, but Master Lock actually does rate their locks, essentially based on crappiness (because even their best locks are trivially picked and known easy targets)


It's those nasty hackers and awful security researchers who are to blame...


did you pay with a credit card?


this is our golden opportunity to switch to something less privacy-invading and bug-ridden.

(and it will incidentally also prove that the market really doesn't work very well, because most people will still buy Intel)


The problem is that there are not many alternatives.

AMD claims their current CPUs are not affected, but they still have the PSP, AMD's equivalent to Intel's ME. I suppose it has not been probed as thoroughly as ME because of Intel's bigger market share.

ARM CPUs are - according to Intel - also vulnerable, which disqualifies almost all other competitors.

I had hoped that the Loongson chips would amount to something; I vaguely remember Richard Stallman used a notebook with a Loongson processor, but none of the vendors I checked at the time had even heard of it. And if you are paranoid enough, a Loongson-based system might just replace the NSA with its Chinese equivalent.

The only viable alternative from a technology point of view that I am aware of is the Talos Raptor workstation. Unfortunately, it is rather expensive. Okay, for a high-end workstation, the price is not unusual. But compared to the price of a regular office PC, it is very expensive.


I'm also looking at Raptor Hardware and I would love to hear the experience from somebody that uses it.

As well, I am not sure that POWER9 is immune to these attacks. And then, well, you still cannot buy their products, as far as I understand it from their page [1].

Does anybody know more about the Raptor systems?

[1]: https://raptorcs.com/


Anyone know the implications for the security of cryptocurrencies? Are they highly prone to theft now?


I can't think of any cryptocurrency-specific attack vectors off the top of my head. But if malicious Javascript code can use Spectre to escape the browser sandbox, then wallet private keys are potentially vulnerable, just like everything else on the machine.


My guess is at least one exchanger will run away with coins and blame it on the CPU vulnerability being exploited.


If your crypto coin lives on a box connected to the internet you're living dangerously. That said, I live dangerously too.


Not if you use a hardware wallet (Ledger or TREZOR). If you're using a web (Coinbase) or desktop wallet, you might be at risk. In reality, it's pretty unlikely to be exploited.

Someone could steal your login credentials for any web service, but the risk is mitigated if you use 2FA, or some sort of IP whitelisting.


What happens when you use that wallet by plugging it into the computer, or by entering its information on a computer to perform a transaction?

People say "Hardware wallet" like it's a magic incantation.


If you use a hardware wallet, the private keys only exist on the device, and therefore cannot be stolen using this attack vector. The Ledger uses a secure element, which is an entirely separate MCU, and (as far as we know) can't be 'hacked' by any script kiddies.

However, if you for some reason decided to store the wallet seed on your computer, it's no longer secure.


An attacker can still use the hardware wallet to transfer money as long as it's plugged in. To prevent this you need external hardware with a display and external input (that cannot be reflashed through USB (!)).


Wrong. The hardware wallet requires physical access to sign transactions with the private key (you have to press the button to acknowledge the transaction).


Peer comment has hit the nail on the head: what you're describing is the functionality provided by hardware wallets -- external physical input/display.


Might have implications for some of the CPU only mined altcoins if they're slowed down.


disable high resolution clocks for non-sudo programs?



