Edit to add: https://www.intel.com/content/dam/www/public/us/en/security-... is a discussion of a previous Intel security issue which includes a description (on page 6) of the different unlock levels. This apparently requires that the CPU be in the Red unlock state, which (in the absence of ME vulnerabilities) should only be accessible by Intel.
The concept of defense in depth literally relies on each barrier being independent and robust. That’s why you see hardening of Linux’s hibernate even though the common refrain is “well if you have physical access the game is lost”. There are things that even root can’t do even though “hey if you have root the game is lost”. The point of the game is to never lose even in very adverse environments.
- Police confiscate your laptop on some bogus pretext, then return it to you saying you're free to go.
- You open the laptop and find nothing that shouldn't be there. You wipe it, reinstall the OS and continue using the laptop.
- Surprise! The CPU now works for the police, so after some time it installs a rootkit or whatever.
Dunno if the microcode is big enough to do this kind of attack, and perhaps some other firmware is easier to program.
But if someone waves this off saying that's not how the police work in the US, well, the world is larger than the US, and this definitely happens in other countries, only without CPU rootkits so far.
There's not really a meaningful difference between these! If there's an exploitable ME vulnerability then the police can absolutely own your system in an undetectable way regardless of whether or not this feature exists. If we were in a different universe where the ME controlled whether or not the CPU was in debug mode but wasn't responsible for any other security features, then we'd care about this a great deal more, but as long as compromising the ME already gives you a way to permanently backdoor the system, it doesn't make any real difference.
I’m interested to see what people are able to reverse engineer with these sorts of tools. It wasn’t that long ago that ucode wasn’t even encrypted or integrity-protected. I don’t think AMD started doing that until around 2010.
I’m also curious which hardware versions this works on, since it’s not obvious it’s universal. I’ll be amused if it’s some forlorn low power chip from 10 years ago.
Whether they're encrypted or not doesn't really matter; what actually matters is whether they're signed. There was a talk given in 2017 about trying to modify the microcode in AMD processors, but they were using processors from a decade ago (AMD K10, introduced 2007). That makes me think that processors made in the past decade are probably using signed microcode.
>Note that Intel started to cryptographically sign microcode updates in 1995 and AMD started to deploy strong cryptographic protection in 2011.
Can one do that from a user-space app?
Or do you mean Intel had to physically handle your CPU with a debug cable or whatever?
Because I really don't feel it's okay that the only safety we have from a newly discovered exploit is that there needs to be another newly discovered exploit :D
It is also known that they have had backdoors in commercial systems as they came off the shelf, but I think usually those were CIA-owned and -controlled companies, like Crypto AG.
What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor. On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero. So it's hard (for a plebeian like me, anyway) to estimate how those costs/benefits might be weighed up by the US government.
There is also a third possibility, that some intelligence agency invested a ton of cash into finding abusable exploits in these systems giving them the same access a backdoor would provide.
Also from the Snowden leaks, we know that they have programs with budgets in the millions into finding similar exploits and that there were similar programs outside Snowden's clearance. And though a bug may cause the same damage to the economy, it wouldn't hurt US prestige in the same way.
Russia and China would get Windows source access for instance.
If I were a three-letter agency, I'd bribe/blackmail somebody into inserting intentionally vulnerable code. After all, sufficiently advanced malice is indistinguishable from incompetence.
We've often seen that the code inside firmware, secure environments like trustzone, etc tend to lack many of the mitigations for the classic vulnerabilities. Just rewrite one of the ASN.1 parsers in the ME (I'm sure there's at least one), "forget" a bounds check in some particularly obscure bit, and you'd have a textbook stack smash.
But really, it's not about operating systems not using RDRAND at all - it's fine to use it as one of the entropy sources; what you don't want to do is use RDRAND directly instead of a CSPRNG.
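The principle above can be sketched in a few lines. This is a toy illustration, not what any real kernel does: hashing several independent sources together means the seed stays unpredictable even if one source (say, a hardware RNG like RDRAND) turns out to be backdoored or broken, as long as at least one other source is good.

```python
import hashlib
import os

def mix_entropy(*sources: bytes) -> bytes:
    """Fold several entropy sources into one seed via SHA-256.

    Toy sketch: even if one source is adversarial, the digest is
    still unpredictable as long as any single input is good.
    """
    h = hashlib.sha256()
    for src in sources:
        # Length-prefix each source so different splits can't collide.
        h.update(len(src).to_bytes(8, "little"))
        h.update(src)
    return h.digest()

# os.urandom stands in for the OS entropy pool; the all-zero buffer
# stands in for a hypothetically backdoored hardware RNG output.
seed = mix_entropy(os.urandom(32), b"\x00" * 32)
print(len(seed))  # 32
```

The point is that the mixed seed feeds a CSPRNG, so no single source is ever trusted directly.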
But for whatever reason they have been very tight-lipped.
(Barely-public documents like patent filings don't count.)
The original 6502 had many undocumented instructions. Most of them not very interesting, but certainly undocumented. http://nesdev.com/undocumented_opcodes.txt
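One of the better-known undocumented 6502 opcodes from that list is LAX, which behaves like LDA and LDX combined. A minimal sketch of how an emulator might implement the zero-page form (the register/memory model here is simplified, not a full 6502 core):

```python
# Sketch of LAX zero-page (undocumented 6502 opcode $A7): A = X = mem[operand].
# Simplified CPU model for illustration only.

def step_lax_zeropage(cpu, mem):
    addr = mem[cpu["pc"] + 1]          # one-byte zero-page operand
    value = mem[addr]
    cpu["a"] = value                   # loads A...
    cpu["x"] = value                   # ...and X at the same time
    cpu["z"] = int(value == 0)         # zero flag
    cpu["n"] = int(value & 0x80 != 0)  # negative flag
    cpu["pc"] += 2

cpu = {"a": 0, "x": 0, "pc": 0x0000, "z": 0, "n": 0}
mem = bytearray(256)
mem[0] = 0xA7        # LAX $10
mem[1] = 0x10
mem[0x10] = 0x80     # value to load
step_lax_zeropage(cpu, mem)
print(cpu["a"], cpu["x"], cpu["n"])  # 128 128 1
```

These opcodes fall out of how the instruction decoder PLA combines control lines, which is why so many of them are just two documented instructions mashed together.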
Here is a detailed examination of undocumented Z80 behavior. http://www.z80.info/zip/z80-documented.pdf
The 8080 had undocumented instructions too.
So now we are all the way back to the PDP series. The PDP-8 had plenty of undocumented instructions.
Should we go back further? It doesn't stop.
The same was true of pretty much all non-microcomputers through the '80s.
Here's a typical datasheet from Rockwell, for example:
It's also worth noting that the architecture diagram on page 5 of this document isn't entirely accurate, either. It's closer to a programmer's model than an RTL description of the CPU; some "hidden" registers used during certain operations (like indirect addressing modes) aren't shown.
Bitsavers has the documentation for the PDP-11/40 I used to have here. http://www.bitsavers.org/pdf/dec/pdp11/1140/. It includes microcode listings both assembled and as flow charts. My PDP-11 had actually been modified before I got it with the microcode ROMs in ZIF sockets, as someone had been writing their own microcode.
In this specific case, users do not have the ability to use this instruction, so there is no reason to document it.
You see it from a "they should inform us" POV, but there is also the "there is no guarantee it will be there" side.
We can debate whether some private bank APIs are publicly accessible (you can still DDoS them). This shit, however, is inside my property, which I bought with my money, and it might end up being used by some malware. It is by definition accessible, and could be a security bomb.
How can any believer in the free market defend companies secretly selling consumers the equivalent of a dangerous remote-control mechanism embedded in their car?
The market only works if sellers aren't lying all the time, and in the IT industry we've made that the norm.
Your bank doesn't guarantee you that lock model X45-b from company SUPERLOCK will protect your gold in their vault. Even if that's what they use right now, it's not specified in your contract with them; they just guarantee to protect your gold. This way they can change the lock without being in breach of contract.
Even websites often have an internal API and a different public-facing one.
This is how almost everything works in the digital age.
Should we, as consumers, demand that Ford productize this robot and make it publicly available before we'll buy a Ford car?
Intel isn't alone; search "9c5a203a" for some equally interesting stuff on the AMD side.
I will say, the things security researchers find are simply amazing to me.
I've been down so many "rabbit holes" aka "levels of abstraction" aka "abstraction hierarchies" aka "turtles on top of turtles" aka "things stacked on top of other things" (compare with Monty Python's "society for putting things on top of other things") in my life as a Programmer (and in my secret life as a Philosopher! Shhh, don't tell anyone! <g>. And yes, I know... "Keep the day job"...<g>) that one more ("I'll go down this last rabbit hole, and then I'll be finished, really!") really won't make that much more difference! <g>
Before I get started on that one though, two more quick references about "abstraction hierarchies" aka "rabbit holes".
On the one hand, we have Joel Spolsky's magnificent essay, "The Law of Leaky Abstractions" (https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...), and on the other, we have the Biblical story of "The Tower of Babel".
Now, I am not a member of any religion, and I don't endorse any religious text over any other, but any intellectual worth his salt should read both of these writings and compare them.
Once you've done that, then compare what's being said to abstraction hierarchies of all sorts.
We see abstraction hierarchies in code, we see them in language (especially when a language changes over many years, as it changes, fractures and sediments). We see them in Law (Law in the U.S. as it is practiced today can be compared to an abstraction hierarchy -- not at all unlike The Tower Of Babel as a metaphor for this phenomenon, or Joel's essay as a more complete modern-day explanation of it...)
We see them in a number of places and systems.
We even see them in x86-land...
That's because the x86 (as far as all of my research has suggested, and someone correct me if you think I am wrong, with links/references please!) -- is really a RISC core with a separate microcode execution engine (the level of abstraction above the RISC core), and has been since at least the Pentium Pro (P6, 1995):
http://i.blackhat.com/us-18/Thu-August-9/us-18-Domas-God-Mod... (Or Google "Christopher Domas God Mode Unlocked") (Also, see the patents listed in this PDF)
https://troopers.de/events/troopers16/655_the_chimaera_proce... ("We take a 12-core processor, inject microcode to simulate a PowerPC (2 cores) and a MIPS processor (2 cores), restrict 2 cores to i386 and leave 4 cores to amd64.")
So with all this in mind, let's turn back and look at VISA:
>"The Intel technology is called Visualization of Internal Signals Architecture (VISA), and is used for manufacturing-line testing of chips.
However, Maxim Goryachy and Mark Ermolov, security researchers with Positive Technologies, said in a Thursday Black Hat Asia session that VISA can be accessed – and subsequently abused — to capture data from the CPU using a series of previously-disclosed vulnerabilities in Intel technology."
You know, that sort of reminds me of lyrics from The Who's "You Better You Bet":
"I showed up late one night with a neon light for a VISA,
But knowing I'm so eager to fight can't make letting me in any easier..."
Hmmm, now who is "me" in the above context?
Could it be... (With my apologies to SNL's Dana Carvey as "The Church Lady")... a TLA? (Three Letter... er, group? <g>)
Hey, if any power-that-be is offended, then I'd like them to know that I only quote things that I find on some places on the Internet -- on other places on the Internet....
In other words, I am only the messenger... one of millions and billions, that also quote things that they find on the Internet -- on other places on the Internet...
(Side note: There should be a Monty Python sketch for that topic! <g>)
In other words, "don't shoot the messenger...(s)" <g>...
1. Like unofficially turning on ECC support on non-ECC chips assuming the hardware allows for it?
2. Or increasing total supported RAM size on lower end chips?
3. Unofficial open-source microcode updates (performance/security) for older unsupported chips?
Here is the timestamp
And it does seem plausible that Intel would pick proximal instructions for 'read' and 'write' of uarch state.
This remained unclear to me. Other comments say the CPU needs to be in red unlocked state, whatever that is.
The screenshot shows UEFI. So one could guess the CPU is in such a state before the operating system gets loaded. But the operating system typically loads a microcode update, after which the CPU should no longer be in the unlocked state.
So for "everyone can fiddle with the bits", that would require running a modified bootloader first. Which should not be possible thanks to secure boot.
So yes, maybe it's a small step to another highly complex exploit. But it's not the case that everybody running a securely booted operating system (which should be everybody) can start fiddling with bits.
Edit: There is a difference between your own bits and someone else's bits. Yes, on your own machine you can run a modified bootloader, and that's a good thing. Putting a modified bootloader onto another machine should not be possible (until someone breaks secure boot, but I don't think that has publicly happened).
That would be an incorrect guess. Getting into this mode requires the cooperation of the ME. If you're Intel then that means you use parts with different fusing and boot some magic ME firmware that lets you do this. If you're not, you exploit a vulnerability in the ME (something that's currently only possible on older hardware) to do so. You can't enable this by simply replacing the bootloader.
Can't the bootloader be signed with a MOK (machine owner key), which may allow this to work? (Phys access required, no doubt.)
Alternatively, if the bootloader doesn't load microcode, just prevent the OS from loading microcode and the system will be in the attackable state?
It's deeper than secure boot kind of stuff.
Microcode? No, it's not. But as long as you only load bootloaders or operating systems that are signed, it doesn't matter that they could fiddle with the bits, as long as the signature guarantees they don't (in any undesired way).
Or ME? Well, that seems to be a complete security nightmare.
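The "signed, not secret" idea above can be sketched in a few lines: the loader accepts a blob only if its tag verifies under the vendor's key, regardless of whether the contents are public. This is a toy illustration with made-up keys and blobs, using HMAC-SHA256 as a stand-in for the asymmetric (RSA-style) signature scheme real microcode updates actually use.

```python
import hashlib
import hmac

# Hypothetical symmetric key standing in for the vendor's signing key;
# real update schemes use asymmetric signatures, not a shared secret.
VENDOR_KEY = b"vendor-signing-key"

def sign(blob: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, blob, hashlib.sha256).digest()

def load_update(blob: bytes, tag: bytes) -> bool:
    """Accept the update only if the tag verifies; contents may be public."""
    return hmac.compare_digest(sign(blob), tag)

update = b"patch: fix erratum 1234"
tag = sign(update)
print(load_update(update, tag))              # genuine update loads: True
print(load_update(update + b"!", tag))       # any tampering is rejected: False
```

Encryption would only hide the blob's contents; it's the verification step that stops an attacker from loading modified bits.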
Firmware is both ME firmware and x86 firmware — the UEFI implementation, FSP, everything else that runs on early boot.
Intel and AMD almost certainly have compilers/assemblers of sorts that handle turning the microcode assembly-of-sorts into the correct bits, but they’re not portable.
Sidenote: Ken Shirriff has reverse engineered the ARM1’s microcode, but it’s a “horizontal” microcode (bits control CPU blocks directly) while x86 uses “vertical” (RISC-like) microcode.
: https://www.youtube.com/watch?v=4oFOpDflJMA (slides: https://hardwear.io/netherlands-2020/presentation/under-the-...)
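The horizontal/vertical distinction above can be illustrated with a toy decoder. The field layouts here are invented for illustration, not taken from any real CPU: in horizontal microcode each bitfield of the microword drives a functional block directly, while vertical microcode packs a compact opcode that gets decoded first, more like a tiny instruction set.

```python
# Toy contrast between horizontal and vertical microcode decoding.
# Bitfield layout and opcode table are hypothetical.

def decode_horizontal(word: int) -> dict:
    """Horizontal: each bitfield maps straight onto a control line."""
    return {
        "alu_op":    word & 0b1111,        # bits 0-3 go straight to the ALU
        "reg_write": (word >> 4) & 0b1,    # bit 4 is the register write-enable
        "mem_read":  (word >> 5) & 0b1,    # bit 5 strobes a memory read
    }

# Vertical: a small opcode is looked up and expanded, like a mini-ISA.
VERTICAL_OPS = {0x1: "add", 0x2: "load", 0x3: "store"}

def decode_vertical(word: int) -> str:
    return VERTICAL_OPS[word & 0xF]

print(decode_horizontal(0b110111))  # {'alu_op': 7, 'reg_write': 1, 'mem_read': 1}
print(decode_vertical(0x2))         # load
```

Horizontal microwords are wide but need almost no decode logic; vertical microwords are compact but pay for it with an extra decode stage, which is why the latter ends up looking RISC-like.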
Interesting comment on Twitter about the instruction in the original post
And of course nothing about RISC-V implies that production implementations will be open source at all.
There's also the "unpublished" stuff that occasionally leaks here and there; not something which I suspect will ever happen with Apple, but you never know...
>The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.