So you mean, if I'm a state actor able to kidnap the child of a high-level Intel employee... say I'm Joe Biden, I can ask Intel to... remotely unlock my CPU and read arbitrary memory blocks?
Or do you mean Intel would have to physically handle your CPU with a debug cable or whatever?
Because I really don't feel it's okay that the only safety we have from a newly discovered exploit is that there needs to be another newly discovered exploit :D
It is public knowledge that US intelligence agencies actually just hijack computers and equipment on their way to the customer and install hardware backdoors there (Snowden et al., 2014).
It is also known that they have had backdoors in commercial systems as they came off the shelf, but I think those were usually CIA-owned and -controlled companies, like the Crypto AG cipher machines.
What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor. On the one hand, that would be a powerful capability to have; on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero. So it's hard (for a plebeian like me, anyway) to estimate how those costs and benefits might be weighed up by the US government.
> What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor.
There is also a third possibility: that some intelligence agency invested a ton of cash into finding abusable exploits in these systems, giving them the same access a backdoor would provide.
Also from the Snowden leaks, we know that they have programs with budgets in the millions dedicated to finding exactly such exploits, and that there were similar programs outside Snowden's clearance. And though a bug could cause the same damage to the economy, it wouldn't hurt US prestige in the same way.
If I'm the CIA, I have dozens of highly placed agents, or at least informants, at Intel. Not necessarily placing backdoors, but finding exploits, not fixing them, and sending them back to the CIA for later use. It would be extremely cheap; hell, if I'm China, Russia, the UK, or Israel, I'm doing the same thing.
> On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero
If I were a three-letter agency, I'd bribe/blackmail somebody into inserting intentionally vulnerable code. After all, sufficiently advanced malice is indistinguishable from incompetence.
We've often seen that the code inside firmware, secure environments like TrustZone, etc. tends to lack many of the mitigations for the classic vulnerabilities. Just rewrite one of the ASN.1 parsers in the ME (I'm sure there's at least one), "forget" a bounds check in some particularly obscure bit, and you'd have a textbook stack smash.
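To make that concrete, here's a minimal sketch of the kind of check that's easy to "forget" in a length-prefixed, ASN.1-style parser. The record layout and names are hypothetical, not anything from the actual ME firmware:

```python
def parse_tlv(buf: bytes):
    """Parse one tag-length-value record (toy ASN.1-style layout)."""
    if len(buf) < 2:
        raise ValueError("truncated header")
    tag, length = buf[0], buf[1]
    # This is the bounds check that's easy to "forget": in C, omitting it
    # means copying `length` attacker-controlled bytes past the end of a
    # fixed-size buffer -- the textbook stack smash described above.
    if length > len(buf) - 2:
        raise ValueError("declared length exceeds buffer")
    return tag, buf[2:2 + length]
```

In Python the missing check just yields a short slice, but in the C that firmware is typically written in, the same omission is a memory-corruption bug, and one that's plausibly deniable as an honest mistake.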
How would an OS seed an RNG in the cloud? How would you seed an RNG on a headless server in a VM? What about when that VM is copied, possibly while running, in order to duplicate server functionality? There are vulnerabilities and threats here that your comment does not take into account.
But really, it's not about operating systems not using RDRAND at all - it's fine to use it as one of the entropy sources; what you don't want to do is use RDRAND directly instead of a CSPRNG.
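A rough sketch of that idea: every available source, RDRAND included, feeds a pool that's hashed down to the seed, so no single source has to be trusted on its own. The function name and the stand-in sources here are illustrative, not any OS's actual implementation:

```python
import hashlib
import os
import time

def seed_pool(*sources: bytes) -> bytes:
    """Mix all entropy inputs into one seed; any one good source suffices."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix each input
        h.update(s)                          # so boundaries can't be confused
    return h.digest()

# Hypothetical inputs: RDRAND output would just be one more source in the
# mix (os.urandom stands in for it here), alongside e.g. timing jitter.
rdrand_stand_in = os.urandom(32)
jitter = time.monotonic_ns().to_bytes(8, "big")
seed = seed_pool(os.urandom(32), rdrand_stand_in, jitter)
```

With this construction a backdoored or broken RDRAND can't weaken the seed below what the other sources provide, which is exactly why using it as one input to a CSPRNG is fine while using it raw is not.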
Remotely? I think Intel would need to produce backdoored ME firmware, get the system vendor to incorporate it into a system update, and then convince the target to flash it. In that sense I don't know that they'd technically need physical access, but it doesn't really meet most people's definition of a remote attack.