Which machine language is the microcode written in?
Is it even possible to fully decode that language with publicly available information/tools?
Given that microcode is an internal mechanism of CPUs, I would expect its language to be impossible for regular people to decode, since there is essentially zero public knowledge of how it works?
And even if there is some knowledge of it, won't Intel change the machine language a lot between CPU generations? The lack of public usage means it can be changed constantly, quickly rendering any existing knowledge useless.
> Which machine language is the microcode written in?
The microcode is generally a sequence of uOps.
But in Intel's case, there seems to be a more complex mechanism, called XuCode, that generates the uOps sequence.
The XuCode ISA seems to be based on x86-64, as Intel says [1]:
> XuCode has its own set of instructions based mostly on the 64-bit Instruction Set, removing some unnecessary instructions, and adding a limited number of additional XuCode-only instructions and model specific registers (MSRs) to assist with the implementation of Intel SGX.
PS: Decoding the XuCode microcode can potentially give valuable information about uOp encoding.
PS2: You can find more information on uOp encoding in another work from the same team [2].
- Intel CPUs with SGX have an additional CPU mode that understands and runs XuCode, "XuCode is implemented as a variant of 64-bit mode code, running from protected system memory, using a special execution mode of the CPU."
-- I know that "ring" terminology is used to describe CPU modes, e.g. calling a hypervisor setup ring -1, SMM ring -2, and the Intel management engine ring -3. Seems like this mode is something like ring -2.5.
- "It is authenticated and loaded as part of a microcode update and is installed into a Processor Reserved Memory (PRM) range, typically allocated by system firmware. The memory range itself is protected from software and direct memory accesses by the Processor Reserved Memory Range Registers (PRMRRs)."
So the BIOS steals a bit of your RAM (which the ME already does), sets it up to be the PRM, and a microcode update unpacks XuCode now contained in the microcode data, and puts it in this PRM. I guess some SGX instructions are essentially a specialized form of INT instructions that "exception out" specifically to this special CPU mode/PRM space.
So I'm under the impression XuCode is essentially called by the microcode when certain SGX instructions are encountered.
The microcode is a sequence of fixed-length microinstructions.
Each microinstruction is composed of many bit fields, which contain operation codes, immediate constants or register addresses.
The format of the microinstruction is changed at each CPU generation, so, for example, the microinstructions for Skylake, Tiger Lake, Gemini Lake or Apollo Lake have different formats.
Therefore, someone who discovers the microinstruction format for one of them has to repeat all the work in order to obtain the format for another CPU generation.
The authors show the microinstruction format for Apollo Lake, which is a kind of VLIW (very long instruction word) format, encoding 3 simultaneous micro-operations, each of which can contain three 6-bit register addresses and a 13-bit immediate constant.
For Apollo Lake, the microinstruction encoding is somewhat similar to the encoding of an instruction bundle (containing 3 instructions) in the Intel Itanium processors.
It is likely that in the mainstream Intel Core or Xeon CPUs the micro-instruction format is significantly more complex than this.
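To make the bit-field idea concrete, here is a toy decoder in Python. The rough shape (three uOps per word, each with three 6-bit register fields and a 13-bit immediate) follows the description above, but the field positions and the 8-bit opcode width are invented for illustration; the real Apollo Lake encoding that the team documented differs.

    # Toy decoder for a hypothetical VLIW microinstruction word.
    # Layout per uOp (invented): op(8) | dst(6) | src1(6) | src2(6) | imm(13)
    # = 39 bits, with 3 uOps packed into one 117-bit word.
    UOP_BITS = 39

    def encode_uop(op, dst, src1, src2, imm):
        return (op << 31) | (dst << 25) | (src1 << 19) | (src2 << 13) | imm

    def decode_uop(bits):
        return {"op":   (bits >> 31) & 0xFF,   # operation code
                "dst":  (bits >> 25) & 0x3F,   # 6-bit register address
                "src1": (bits >> 19) & 0x3F,
                "src2": (bits >> 13) & 0x3F,
                "imm":  bits & 0x1FFF}         # 13-bit immediate constant

    def decode_word(word):
        """Split one microinstruction into its 3 parallel uOps."""
        mask = (1 << UOP_BITS) - 1
        return [decode_uop((word >> (UOP_BITS * i)) & mask) for i in range(3)]

    word = encode_uop(0x12, 1, 2, 3, 42) | (encode_uop(0x34, 4, 5, 6, 7) << UOP_BITS)
    print(decode_word(word))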
The team which reverse-engineered the microinstruction format was able to do this because they exploited a bug in the Intel Management Engine for Apollo Lake/Gemini Lake/Denverton to switch the CPU into a mode in which it allows JTAG debugging.
Using JTAG they could read the bits from some internal buses and from the microcode memory. The bits read were initially meaningless, but by executing many test programs and comparing what the CPU did with the bits read at the same time via JTAG, they eventually succeeded in guessing the meaning of the bits.
For most Intel CPUs, there is no way to switch them into the debugging mode unless you receive a secret password from an Intel employee (which is probably within the means of some 3-letter agencies).
Once switched into the debugging mode, it is possible to do things as complex as making the CPU replace the normal execution of a certain instruction with the execution of an entire executable file hidden inside the microcode update (and you can also update the microcode directly, bypassing the normal signature verification step).
However, for most motherboards, it is likely that switching to the debugging mode also requires physical access to change the connection of some pin, not only the secret password, though exceptions are known where the motherboard manufacturers have forgotten to disable the debugging mode on the PCB.
Yeah but Intel's engineers aren't going to just change the machine language around for funzies. I'd expect it to be semi-stable because if it ain't broke, there's no reason to go in and change it.
They might not change existing stuff, but they may very well constantly add new instructions. That doesn't break their use case, but it does break the use case of the public trying to decode it easily.
> Which machine language is the microcode written in?
Seems logical that it would mostly be standard machine code, since (I assume) there are instructions which translate 1:1 to microcode. No use translating everything; that'd require more space and be harder on everyone for little reason.
Though there might be privileged instructions which would only be available in microcode (would be rejected by the frontend), and which you would have to reverse-engineer separately.
The thing with high-performance x86 is that it is not really a microcoded design. The CPU is a RISC-like OoO machine that can execute a pretty substantial subset of the x86 ISA after some simple conversion (i.e. to some fixed-width format). Thus it makes sense for such a design not to have a separate microcode sequencer, but to run the "microcode" that emulates the unsupported instructions through the same execution engine (with the low-level "micro" ISA having extensions to manipulate CPU state that is not directly modifiable by user code, like filling TLBs and changing shadow descriptors).
Getting a dump means getting access to a memory controller of sorts and asking it to read you back the contents of addresses, right?
But you’re really getting what the memory controller decides to give you. There could be more indirection or sneakiness, right? I.e., I could design a memory controller with landmines, as in “if you ask for 0x1234 I will go into a mode where I send back garbage for all future reads until power is cycled.”
> But you’re really getting what the memory controller decides to give you.
Yes, here the memory is read through a debug bus.
> I could design a memory controller with landmines, as in “if you ask for 0x1234 I will go into a mode where I send back garbage for all future reads until power is cycled.”
Yes, it basically looks like a backdoor, and you can do it the other way around:
The memory read through the debug bus is exactly the content of the ROM, but the memory controller is made so that when the processor reads a specific address or datum, it doesn't return the value in memory but something else.
This way, even a person using a visual or intrusive memory extraction method would not notice the backdoor.
The only way to discover it is to do a full inspection of the logic, which probably nobody will do.
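As a toy software model of such a landmine (the trigger address and return values are of course made up):

    # Toy model of a booby-trapped memory controller. A visual/intrusive
    # ROM dump would show the true contents; only exercising the read
    # logic itself can reveal the trap.
    class BoobyTrappedController:
        TRIGGER = 0x1234                # hypothetical magic address

        def __init__(self, rom):
            self.rom = rom
            self.tripped = False        # latches until "power cycle"

        def read(self, addr):
            if addr == self.TRIGGER:
                self.tripped = True
            if self.tripped:
                return 0xFF             # garbage instead of real contents
            return self.rom[addr]

    ctl = BoobyTrappedController(rom=[i & 0xFF for i in range(0x2000)])
    print(hex(ctl.read(0x10)))          # 0x10 -- normal read
    ctl.read(0x1234)                    # trip the landmine
    print(hex(ctl.read(0x10)))          # 0xff -- garbage from now on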
> Is this a thing?
Yes, sometimes some addresses in a memory system are effectively not readable (write only).
For example, with some memory-mapped configuration registers, a zero value may be returned instead of the register contents.
But your question sounds to me more about mechanisms to hide a backdoor.
Regarding hardware backdoors: they are always theoretically and practically possible, and almost always undetectable, since nothing prevents the designer from introducing logic with malicious behaviour that is nearly unobservable.
This is the problem with theories about backdoors in modern processors.
Without evidence, these theories fall into the realm of conspiracy theories.
But it's almost impossible to have evidence and no-one can say that it doesn't exist.
Even if they released absolutely everything, there's no way to verify that the chips they actually make and sell conform to a design that they release without inspecting the actual chip. If you're really paranoid, you'd have to inspect every chip, and that's usually a destructive operation.
And the fab, or a rogue employee, or anyone/anything on the critical path to manufacturing, could decide to alter the design. Eg Stuxnet style where a worm gets in the fab via a contaminated usb key, a 3rd party could get to airgapped systems. With a sufficiently advanced attack, not Intel, not the fab, no one would know that a backdoor has been put in except the attacker himself.
And here's the million-dollar idea: you'd need to destructively inspect your chips at EOL to verify you haven't been screwed over. Anyone want to start a business?
> And here's the million-dollar idea: you'd need to destructively inspect your chips at EOL to verify you haven't been screwed over. Anyone want to start a business?
It only protects against backdoor injection by the fab (or the company that produces your masks).
And there are other solutions such as logic-locking.
The idea of logic-locking is to add XOR gates (or a more complex type of gate) to the circuit on well-chosen logic paths. To make the circuit behave correctly, you must know the value to feed to each inserted XOR.
These values may be generated by an RNG circuit that is seeded by a secret key.
At manufacturing time the key is kept secret, so it's not possible for the fab to reverse engineer your circuit logic to introduce a backdoor.
Once production is complete, the key is loaded into the circuits before sale.
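A toy simulation of the idea (the gate placement and key are invented; real logic locking operates on a synthesized netlist, with the key coming from a key store or an RNG seeded with the secret):

    # Toy logic locking: the original circuit computes a AND b.
    # An XOR "key gate" was inserted on the 'a' wire, followed by an
    # inverter that compensates when the correct key bit is 1, so only
    # CORRECT_KEY restores the intended function.
    CORRECT_KEY = (1, 0)

    def locked_circuit(a, b, key):
        k0, k1 = key
        w = ~(a ^ k0) & 1          # key gate + compensating inverter
        return w & (b ^ k1)        # second key gate on the 'b' wire

    for key in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        ok = all(locked_circuit(a, b, key) == (a & b)
                 for a in (0, 1) for b in (0, 1))
        print(key, "unlocks" if ok else "wrong output")

With the wrong key the circuit still produces outputs, just subtly wrong ones, which is what makes reverse engineering the netlist useless without the key.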
I’m usually not into security research but this is fascinating, thank you.
Someone screwing up some C code and creating a vulnerability isn't that interesting to me, but going after such low-level and fundamental stuff makes me giddy, if not a bit scared.
Last time something like this caught my eye was that iMessage[1] NSO thing, not because they managed to escape the sandbox, but because of the insanely clever way they did it.
Ahh, I knew someone would post a reference or video of xoreaxeaxeax, aka Mr. Domas.
I was so excited to try out sandsifter I spent an entire night shift compiling everything on a raspberry pi or rockchip.
Sandsifter is an x86 instruction fuzzer that hunts for hidden instructions. Sometimes I wonder if the head injuries I took as a teen affected me in a subtle way, because I did watch the entire sandsifter presentation where he repeatedly says x86.
He also has a cool presentation about ring -1, -2, and beyond; I figured the reader for that stuff was what was used to find the actual OP.
Yes, see page 96 of Bunnie Huang's "Hacking the Xbox" where he tells the story of what happens when the machine seems to boot something else than the ROM.
There hasn't been any obvious reason to keep this secret behind encryption, so now there's a little buzz in the air if something newsworthy will be revealed once people start analyzing the microcode and diffs between microcode updates.
> If I'm understanding correctly, this allows us to view (previously obfuscated) code that runs on certain (recent-ish) Intel processors?
Yes, but this "code" is the Intel microcode.
In a modern processor, instructions are translated into a sequence of micro-operations (uOps) before execution; these uOps are small instructions that the processor can execute more easily.
Ultimately, this makes it possible to build more performant processors.
But some instructions require translation into a uOp sequence that is too complex to be handled like other instructions.
Modern processors therefore feature a "microcode sequencer", and the "microcode" is the configuration of this component.
And this work allows us to interpret a previously misunderstood part of the microcode.
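A cartoon of what that sequencer does (all uOp names and sequences here are invented; real Intel uOps are undocumented): simple instructions decode directly into one or a few uOps in hardware, and only the complex ones fall back to a ROM of canned uOp sequences.

    # Cartoon microcode sequencer: complex instructions index into a ROM
    # of uOp sequences; everything else passes through 1:1. uOp names invented.
    UCODE_ROM = {
        "REP MOVSB": [                  # copy RCX bytes from [RSI] to [RDI]
            "LOAD  tmp, [rsi]",
            "STORE [rdi], tmp",
            "ADD   rsi, rsi, 1",
            "ADD   rdi, rdi, 1",
            "SUB   rcx, rcx, 1",
            "JNZ   rcx, -5",            # loop back while rcx != 0
        ],
    }

    def decode(insn):
        """Return the uOp sequence for one x86 instruction."""
        return UCODE_ROM.get(insn, [insn])  # fast path: one uOp

    print(decode("ADD rax, rbx"))  # simple: single uOp
    print(decode("REP MOVSB"))     # complex: microcoded sequence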
> What are the consequences of this?
There are no real direct consequences for users.
But this helps to better understand how modern Intel processors work; in particular, security researchers will be able to better understand how certain security instructions work (mainly the SGX extensions). In the long term, they may find Intel errors (as has already happened) which will be fixed in the next Intel processor generation.
Although security issues may be detected in Intel processors, this will probably have no impact on normal users, but it could affect some companies.
Cool, I’m into cheap auditable hardware! This could maybe turn out like when they discovered Linksys was breaking the GPL which ended up opening up an entire class of hardware to hack on.
As someone who just makes CRUD apps, can someone please ELI5 this? Why is this a big deal, and why are people freaking out about Intel chips becoming obsolete overnight?
A CPU executes the machine instructions of your program (you might think of this as "assembly" programs, like "put a zero in register 12" or "jump to address 0xFF3392"). There have been architectures where instructions map directly onto transistors, but since System/360 (*) there's an extra level: the CPU has its own, lower-level programming language that's used to execute the machine instructions. Finding out how a CPU actually works is interesting per se, and very valuable for competitors, but this work might also expose vulnerabilities and/or backdoors built into the chip itself. It seems to be around 100kB of code, so there's a whole lot of opportunity...
As someone still at the "piecing things together" stage, here's my understanding:
There are a bunch of privilege levels in Intel CPUs (https://en.wikipedia.org/wiki/Protection_ring, relatively boring), used for memory protection and user/kernel mode separation (IIUC, I think I'm correct). They can be controlled by whatever code boots the CPU ("gets there first"), because the CPU boots in the most trusting state.
Over time the set of available levels proved insufficient for security and new levels were added with negative numbers so as not to disrupt the existing status quo. Ring -1 is used for virtualization and can also be controlled by whatever boots first (ie, outer code can create VMs and enter them, but the CPU faults if inner code attempts to access the virtualization instruction set), but Ring -2 and Ring -3 are used by the CPU itself.
Essentially, in the same way whatever the bootloader loads gets control over a bunch of interesting functionality because that code got there first, Ring -2 and -3 are controlled by code running directly on the CPU that gained control of the system before the bootloader and in some cases even UEFI was started. The significant elements are that a) this code can theoretically be altered by system (microcode) updates; b) these components run *completely* independently of the operating system - IIRC, the Management Engine is based on Minix running on a tiny 486-class core somewhere on the CPU die; and c) this invisible functionality has the ability to read and write all of system RAM. What's that? A glitch? A couple of bytes of RAM just got modified? That made the running Linux kernel suddenly think a process's effective UID is 0? Must have been the butterfly effect!
A bit of Googling found this overview of Ring -1 to -3 which I'd say is really good, definitely worth clearing your cookies to read if Medium is yelling at you about subscribing:
Microcode isn't persistent - on reboot you'll be running whatever version was in the CPU at manufacturing time. That means there's a path to booting to a known-good environment and updating the firmware, which will then load patched microcode before any attacker-controlled code can run.
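On Linux you can see which microcode revision the kernel reports as currently running; a quick sketch (the "microcode" field in /proc/cpuinfo is x86-specific):

    # Print the microcode revision(s) the kernel reports (Linux/x86 only).
    with open("/proc/cpuinfo") as f:
        revs = {line.split(":", 1)[1].strip()
                for line in f if line.startswith("microcode")}
    print("running microcode revision(s):", revs or "unknown")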
As I understand it, you would need to have an existing RCE to exploit the microcode patching process.
h0t_max’s research means that future attacks — once your local machine has been infiltrated by some other means — can do a lot more damage than simply encrypting your filesystem or sending spam. They can rewrite the way your CPU works.
When your OS gets attacked by malware it is attacking the layer above the bare metal on which your OS runs. The base hardware remains untouched. You can at least clean things up by installing a new OS on the bare metal.
If malware attacks the bare metal itself, then you are stuck out of luck.
If there exists a backdoor, it's unlikely to be remotely accessible. But privesc for any OS running a vulnerable CPU, absolutely. This would probably look like some hidden instruction to load and execute an arbitrary ELF at the XuCode level, not a magic packet.
Your OS loads a microcode update into the CPU very early in the boot process; it's not static firmware. Unless the microcode malware is sophisticated enough to block e.g. Windows Update or the Debian mirrors, updating the system and rebooting to load a patched microcode would be sufficient to flush this out.
It isn't. There is unnecessary hysteria. Encrypting microcode is just the usual competitive play; it doesn't mean anything nefarious. If issues are found in said microcode, that would be a different story.
Microcode does spill the secrets of the hardware, so definitely don’t want competitors looking through.
Old ones, though, are an open book (of a sort).
The 6502 in the Apple II was obviously hand-generated, so its microcode is weird, but the 68k in the original Mac was pretty normal microcode as we would think of it today.
The 6502 is not microcoded, or at least not in the way we presently conceive of microcode. Its operations are driven by an on-board PLA which controls decoding and sequencing with its gates, which also explains its high performance per clock. It would be hard to call those gate connections micro-opcodes since they're not really used in that sense.
The 6502 uses "horizontal microcode", which, as you said, are the individual control lines. x86 uses "vertical microcode" which is a sort-of internal RISC architecture.
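A toy illustration of the contrast (control-line names and encodings invented):

    # Horizontal microcode: one bit per control line, wide words,
    # essentially no decoding between the word and the hardware.
    CONTROL_LINES = ["alu_add", "reg_write", "mem_read", "mem_write", "pc_inc"]

    def horizontal_decode(word):
        return {name: (word >> i) & 1 for i, name in enumerate(CONTROL_LINES)}

    # Vertical microcode: compact encoded operations that must be decoded,
    # much like a tiny internal RISC instruction set.
    VERTICAL_OPS = {0b00: "nop", 0b01: "alu_add", 0b10: "load", 0b11: "store"}

    def vertical_decode(word):
        return VERTICAL_OPS[word & 0b11]

    print(horizontal_decode(0b00011))   # drives alu_add and reg_write directly
    print(vertical_decode(0b01))        # one encoded uOp: alu_add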
I'd like to talk about how we patch this pervading cynicism and replace it with a model that's actually useful.

It isn't true on its face. Consider the course of the Covid-19 pandemic. Two things spread. One was a virus. The other was an idea: news about a virus. The latter spread much faster than the former. It spread because people love "doom". Any story about the impending "end of everything" is lapped up, relished and shared by the public.

There are well-understood stages to information propagation. After "Is this real?" and "Am I affected?" comes the question of "What to do?" I think this is where your assertion about "care" comes in. Some events will blow up and burn out fast. People run around like headless chickens enjoying the drama about something remote to their lives, but ultimately nobody can do anything, so focus recedes. Wars and famines are like that. Curtis calls this "Oh Dearism".

Alternatively, if there is any residual effect, new events connected to the initial story, it feeds and grows. The initial "All the chips are broken!" story that would die after a 24-hour news cycle becomes an ongoing "situation". Then people care, because it's cool to care. It gets a catchy handle, "Evil-Inside", and a news slogan, "The chips are down". And then it won't go back in the bottle.

To reformulate your "Nobody cares" as a question: "Do people have sensible cares that are proportionate to the threat?" No. But if the media tells them to care because cars are running off the road and there's a massive uptick in cybercrime, which nobody can ignore and is directly attributable to a single cause, then the "care" (hysteria) can get worse than the problem (which may have happened with Covid-19 to some degree).

Finally, they may care, but about completely the wrong thing. One suspects that, if it turns out there is indeed some serious drama in Intel chips, then Intel and the US government will try to paint the security researchers as irresponsible hackers who unleashed this upon the world.
I'm creating a startup to do just that. There's both huge upside [$$$] and some legal risk. If this appeals to you and you're an innovator in the social engineering space, lmk.
Are there rules/standards for how these top secret keys are stored? HDCP, Mediavine, keys to the Internet, etc. Sure, you could keep it locked in a Scrooge McDuck security vault, but you need to be able to burn the key into hardware/software, meaning it ultimately needs to be distributed across many machines, greatly increasing the number of people with potential access.
There's both. The encryption (decryption) key has leaked. The original question was about "making your own microcode", for which you would need the (not leaked, and unlikely to leak) private signing key.
Those codes were intentionally zeroed to get around what was (most likely rightly so) considered to be a failure of the launch doctrine to take into account the possibility of the leadership being knocked out which would make a retaliatory launch impossible due to the lack of valid launch codes.
I don't think Intel has such problems and I assume they are keen on keeping their microcode update process from being abused - it is not as if they don't have enough problems as it is.
Has anyone tried to write their own microcode and load it? Sounds like it should be much faster to run your own code this way than having the official microcode run an interpreter for your x86 instructions.
I guess that the amount of SRAM for microcode is limited so you cannot write a lot of code this way. Also, microcode might be used only for slow, rarely used instructions, and it doesn't make much sense to optimize them.
Judgement is nigh. I'd love to get my hands on one of the decrypted binaries, but I expect much more capable reverse engineers are already carrying the torch :^)
As we know, processors run a series of instructions, things like "move data," "add," "store data."
Over time, these instructions have gotten more and more complicated. Now there are "instructions" like "Enter Virtual Machine Monitor" which are actually complex manipulations of tons of different registers, memory translations, and subsystems inside the CPU.
And, even simple, primitive instructions like call, jump, and return actually need to check the state of various pieces of the processor and edit lots of internal registers, especially when we start to consider branch prediction and issues like Spectre.
It wouldn't be very plausible to hard-wire all of these complex behaviors into the CPU's silicon, so instead, most instructions are implemented as meta-instructions, using "microcode." "Microcode" is just software that runs on the CPU itself and interprets instructions, breaking them down into simpler components or adding additional behaviors. Most CPUs are really emulators - microcode interprets a higher level set of instructions into a lower level set of instructions.
Historically Intel, and more recently AMD, have encrypted this "microcode," treating it as a trade secret. This makes people who are worried about secret hidden backdoors _very_ worried, because their CPU's behavior depends on running code which they can't analyze or understand. This has led to all sorts of mostly unfounded speculation about secret back doors, CPUs changing the behavior of critical encryption algorithms, and so on and so forth.
Decrypting this microcode will theoretically allow very skilled engineers to audit this functionality, understand the implementation of low-level CPU behaviors, and find bugs and back-doors in the CPU's interpretation of its own instructions.
Replacing this microcode silently would be absolutely catastrophic security-wise, because an attacker could silently change the way the CPU worked, right out from under running software. But there is no evidence this is possible, as the microcode is digitally signed and the digital signature implementation, so far, seems to be correct.
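Conceptually, the load-time check is just signature verification before the update is applied. A sketch using the third-party Python "cryptography" package; the real check runs inside the CPU against a vendor public key, and the padding/hash choices here are placeholders, not Intel's actual scheme:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def apply_update(cpu, vendor_pubkey, blob, signature):
        """Load a microcode blob only if its signature verifies.
        'cpu.load_microcode' is a hypothetical stand-in for the
        hardware's internal update mechanism."""
        try:
            vendor_pubkey.verify(signature, blob,
                                 padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return False               # reject tampered/unsigned updates
        cpu.load_microcode(blob)
        return True

Note that only the public key needs to live in the CPU; without Intel's private key, an attacker cannot produce a blob that passes this check.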
There are SO many easier attack vectors for the urna eletrônica that you don't need to worry about this. I'm not implying there is anybody actually attacking them, but if I were to commit election fraud I wouldn't look at low level microcode backdoors.
Because you can't cause the CPU's microcode to be updated with your hacked code by magic, and what you can do with a hacked CPU directly, without being instantly noticed, is actually pretty limited. The way you'd get use out of it would be for some tiny single bit flip or extra-op change in the CPU's behavior to invoke some other larger thing, like running an executable that wouldn't otherwise have been run, making a region of memory readable or writable that normally wouldn't be, or allowing an instruction that normally wouldn't be, etc. In all of those cases you need something else installed and ready to make use of the CPU hack. But if you can arrange to insert that other stuff, then the job is already done, and playing with something exotic like this is just stupid.
If you have the means to use this, then you already have the means to do 500 other simpler and more useful things.
It's like replacing the locks on someone's car so that you can make your own key work in it too. Sure, cool, but you already stole their entire car or at least had access to it in the shop, and could just copy their normal key if you want access again later, for much cheaper, in much less time, and much less risk of detection.
Exploitation of an existing bugdoor is statistically more likely than a backdoor, and does not require updating any microcode. Once identified and exploited, bugs may be removed in a future microcode revision, as with all other forms of firmware and software.
Cuz ultimately everything costs some money to do (even your own time is money), and we're all trying to do the most possible with the least amount of money.
If an attacker already has sunk costs in successfully compromising both in a different market, then both isn't a new cost, it's two new markets. As for "why both", see recent attempts to audit electronic voting systems.
To my knowledge there is no evidence of widespread voting machine fraud. Perhaps it is unwise to spread ideas suggesting an election system could possibly be compromised.
Absence of evidence is not evidence of absence. Not that I believe that widespread fraud has happened before, but electronic voting is inherently flawed, there is no reason why a coordinated attack couldn't happen in the future.
I agree with you, and that's why I didn't spread disinformation suggesting an election system may be compromised. I just stated that the attack surface for something like this is quite big.
Edit: I'm also definitely not saying that paper voting is better than urna eletrônica. Both methods have big attack surfaces.
I'm not aware of that cancer that is probably killing me right now.
There is no valid argument for trusting electronic voting machines.
While they are still closed source, like now, it is flatly impossible and not even slightly rational to trust them. If they ever become open, auditable and verifiable, well, then you are no longer trusting them.
Talking about "large, coordinated" is an irrelevant distraction. Maybe there have been and maybe there haven't been any such attempts to date, but you have no way to know, so your awareness means nothing; and on top of that, every minute is a new minute.
There is no valid argument for trusting these things as they currently exist.
Whether you think any elections have been changed doesn't even matter.
> There is no valid argument for trusting electronic voting machines.
'Trust' is a strong word. It can be the case that the best electronic vote-tallying system offered to an electoral authority is better than the best paper-based vote-tallying system.
Generally, one should judge the system as a whole, and vote-system and cyber-security experts did judge the 2020 elections and found the allegations of fraud to be without basis, even though many of those same experts are generally critical of electronic voting machines.
When there are no paper ballots or printed ballot receipts, I take it that the quiet intent is not to have a paper trail, so that election fraud can be more easily hidden, or for plausible deniability. Often, such methods are used against the votes of minority groups or minority parties.
My guess is that the next discovery will be quite significant, but for the time being, this feature is read-only and restricted to Atom processors.
Does the disclaimer at the top have any legal merit? If they didn’t include that disclaimer, would they actually be liable for damage or loss caused by its use?
It's doubtful the disclaimer is legally "required" to avoid liability, but everything you can do to knock down arguments that you injured a third party, such as warning them of the danger, really takes the air out of lawsuits. You can point to the warning to discourage the filing of lawsuits, to encourage dismissal, or to make winning a case more likely.
IIRC the IME also handles a lot of core functionality like power regulation. Contrary to what many in this thread probably think, it provides a lot of core functionality that you probably don't want removed.
The contention, from the user's standpoint, however, is the network stack, the potential to phone home, and the unrestricted access to global machine state, combined with the fact that it is not documented or disclosed.
It's one thing to have that and be up front and open on it. Get secretive, and you're creating a massive source of unknown unknowns for everyone involved.
And like it or not, if you won't/can't be transparent about it, then either:
A) it'd take too long to document, which suggests there may be room for simplification;
B) you're doing something that, if it saw the light of day, would cause outrage, likely because you shouldn't be doing it; or
C) you're holding back the state of the art for the sake of securing a revenue stream.
None of these inspires an excess of confidence/trust.
Depending on how microcode is defined, it's arguably existed back to the 40s. Which modern and reasonably performant CPUs are you thinking of that don't have microcode?
Edit: Oh, you mention the encryption. Big companies love obfuscating everything they create, because they're afraid something commercially sensitive will exist there and someone will copy it and outcompete them. I agree that this is ridiculous, but I don't think it's evidence of any sort of nefarious activity.
Well yes anything that's encrypted could potentially turn out to be evidence of a crime when decrypted, but we don't usually assume that the use of encryption is indicative of that
Yes, because large corporations never commit criminal infractions, or admit any wrongdoing. They just happen to settle with regulators as cost of doing business.
Microcode spills lots of the secrets of the hardware design, so you want it encrypted for as many years as possible to keep your trade secrets out of competitors’ hands.
If they have to hide something, it will not be in the microcode. They always expected it to be broken one day. If it is on the die, it will take a massive effort to be found and understood, if ever. And it is actually much easier to hide from the employees; if it were in the microcode, many of them could have extracted the keys and sold them to a foreign power.
The second, possibly combined with something dirty.
I can see why they would sign/encrypt it so that things got safer, but then they should have done a much better job of it. If it was encrypted to hide something that could not stand the light of day then that's an entirely different matter altogether.
Agreed, though your response doesn't address the case where an attacker has the signing key; it appears that an attacker with the microcode encryption key, the signing key, knowledge of how microcode works and how it's updated, and the ability to generate a valid signed update would be able to deploy an exploit to the CPU. Obviously this excludes the existing issues with the Intel ME.
Yes, presuming physical access as well as access to the signing key, an attacker could certainly deploy malicious microcode. It would be a very scary thing indeed. It seems a reasonable threat model if your adversary is a malicious state actor, or similarly well funded and ambitious organization.
Edit: I assume this threat has existed as long as updatable microcode has.
Well, I guess it depends on how the key is stored in the CPU, and whether there is any data they can inject, or any way to cause the CPU (via power-rail noise or whatever) to accept an update, that allows the key or the validation routine to be bypassed.
Basically, the easier route is usually to just find a way to bypass the check once, and use that to install a more permanent bypass.
The key in your CPU would be the public key, not the private key. The public key can only be used to verify existing signatures, not create new ones.
It may be possible through power fault injection to flip the bits of the public key such that you could get it to accept microcode signed with your own private key, but I would be very surprised if the public key weren't burned into the structure of the CPU itself in a manner that renders it immune to such attacks.
Of course, power fault injection may still allow you to bypass the verification routine altogether instead of modifying the key it verifies with.
Especially considering how they gained this knowledge:
"Using vulnerabilities in Intel TXE we had activated undocumented debugging mode called red unlock and extracted dumps of microcode directly from the CPU. We found the keys and algorithm inside."
And looking further down, some x86 instructions (that people would usually call low-level) actually trigger execution of an entire ELF binary inside the CPU (implemented in XuCode). Just wow.
Let's be optimistic: say, after lengthy, rigorous expert analysis, it turns out there are no backdoors or prepared traps for potential malware within Intel CPUs. That's a big boon for security everywhere, and for Intel's share price.

If, on the other hand, it turns out against Intel, the evidence is literally baked into silicon within billions of devices in the wild which will become e-waste overnight. With this TikTok thing in the wind the Chinese will ban Intel... etc.

The stakes are high. The problem was always lack of transparency. Putting encrypted microcode into consumer CPUs was always a dumb idea. And why? To protect the media conglomerates. Another reason we need to re-examine the role and value of "intellectual property" in society.
> billions of devices in the wild which will become e-waste overnight
Not just e-waste; they can also become a huge liability. In a presentation, the authors mention that one of the CPU families which have this vulnerability was used in Tesla cars. Tesla apparently switched to AMD APUs around December 2021.
AMD processors have much the same backdoor-"management" coprocessors. Just about the only processor without this stuff is your own softcore design running on an FPGA.
Okay, but that doesn't really bother me; running arbitrary payloads is the point of a bootloader. The only reason I would be worried by UEFI on RISC-V is if the UEFI firmware in question stays running in the background after the OS boots, and isn't properly inspectable. That might be the case - I have some vague notion of UEFI providing post-boot services, and for all that the EDK version is FOSS you could certainly make a closed version - but I'm not seeing any reason to panic just yet.
UEFI can provide post-boot services, but only in the same sense that a DOS-era BIOS did: by providing memory-mapped code that UEFI applications can call into, exokernel style. The UEFI framework isn't a hypervisor; it has no capacity to have ongoing execution threads "behind" the OS kernel. Rather, the OS kernel just receives a memory+PCIe map from UEFI at startup that contains some stuff UEFI left mapped; and it's up to the OS kernel whether to keep it mapped and make use of it; or just to unmap it.
And no production OS kernel really uses any of the code that UEFI maps, or keeps any of said code mapped. It may keep the data-table pages (e.g. ACPI sections) mapped, for a while; but usually only long enough to copy said data into its own internal data structures.
(Which is a shame, actually, as an OS kernel that did take advantage of UEFI services like UEFI applications do, would be able to do things that modern kernels currently mostly can't — like storing startup logs and crash dumps in motherboard NAND space reserved for that function, to provide on-device debugging through a separate UEFI crash-log-viewer app in case of startup disk failure, rather than needing to attach a separate serial debugger.)
On x86/AMD64 there are also ACPI and SMM (System Management Mode), which overlap in parts. ACPI calls into code provided by whichever bootloader, running post-boot above the operating system, or behind its back, so to speak. It's complicated.
Mostly for dynamic powermanagement and voltage regulation, monitoring temperatures, hotplugging, etc. That's just the way it is (now).
I remember when this first made the rounds over a decade ago with some mainboards from a CENSORED Apple supplier, which were pretty usable except when running under something other than Microsoft Windows. IIRC it wasn't malice, only implemented in a way which made it hard to interface with from Linux or the likes. Of course undocumented. Later, in about the same timeframe, came the first so-called open source/security 'zealots' damning ACPI in general for those reasons.
It may be called differently on other platforms, but similar/equivalent mechanisms exist on almost everything which is not in a museum or landfill.
In most computers with Intel/AMD CPUs, and especially in most laptops, the UEFI firmware remains running concurrently with the operating system.
Before giving control to the operating system for the first time, the UEFI firmware can configure various peripherals to generate SMI (System Management Interrupts) on various types of events and it can ensure that the SMI requests will be handled in the future by itself and not by the operating system. The UEFI firmware can lock this configuration, so that the operating system will not be able to change it.
When a SMI request happens, the UEFI firmware handles the event with the CPU in SMM (System Management Mode), which is a mode with more privileges than any operating system or hypervisor and which has access to everything, including to things that are protected from accesses by the operating system, e.g. a memory area reserved for SMM use.
The ARM CPUs may also have a mode equivalent with the Intel SMM, named EL3 (Exception Level 3), so, on ARM CPUs that implement EL3, the UEFI firmware can also run concurrently with the operating system or hypervisor, overriding them whenever it wants.
In theory, the UEFI firmware should use SMM only to handle benign events, such as powering peripherals up and down or changing the clock frequency of the CPU, for which Microsoft was too lazy to write handlers. Intel obliged by creating the ugly SMM, passing this event-handling task to the motherboard manufacturers (thus also forcing the other operating systems to use the BIOS/UEFI handlers, even if they could have handled those events better themselves).
Nevertheless, the writer of the UEFI firmware could easily do much more than that, e.g. inspecting the content of the memory, the data written or read on storage devices or sent and received through the network. (Full memory, storage and network traffic encryption could prevent this.)
The SMM could also be used to implement a remote control of the computer that cannot be detected or prevented by the operating system, but for this an even more powerful facility has been introduced by Intel and AMD, many years later after SMM, in the form of the auxiliary CPU from ME/PSP.
A malicious actor can do a lot of things in UEFI, but they can't decrypt my disk, they can't boot into my OS, and they can't mess with my userland environment. If Johnny Blackhat fancies a game of Doom over TTY on my desktop's UEFI environment, he can be my guest.
It doesn’t sit on the network (unlike the ME) so an attacker needs to have access to the host already to be able to exploit any vulnerabilities on the PSP.
This is not about the Management Engine. Microcode is part of the actual core processor itself, but an updatable layer. One sort-of-correct mental model might be to think of x64 hardware as being a RISC-ish processor that runs microcode that runs your code.
I think in the original analogy, the actual robbery is just used as an event which may occur without our knowledge. Your analogy is better, the mapping makes more sense.
Something like: The locksmith has made a copy of your keys without notifying you. They could hypothetically use those keys to enable a robbery, but you won't know definitively either way until you find something stolen. But it is a pretty weird thing for them to do, right?
> Putting encrypted microcode into consumer CPUs was always a dumb idea.
Serious question: which consumer CPUs have unencrypted microcode? AMD processors have had theirs encrypted for a decade, Intel for even longer, and nothing I've seen indicates that any ARM models have unencrypted microcode updates floating about.
Basically none these days. But therein lies the problem: the actors can't be trusted, in part because the states in which they reside can't be trusted, and those states can (and do) issue gag orders to shove backdoors down their throats.
But the problem with backdoors has always been that other people will likely eventually figure out how to use them too.
If you can't audit the microcode, you have a massive gaping security problem.
It may be a serious question, but I don't see how it relates to the part you quoted. It still is a dumb idea, regardless of the answer to your question.
Certainly. Intel (amongst others) supplies consumer-grade CPUs for a "multimedia" market, to service music and movie playback.

The movie and music industry apply intense pressure and lobbying against the computing industry to protect their profits. They see users having control over media, and thus the ability to copy, recode, edit or time-shift content, as a threat, which they dub "piracy".

Under this pressure to implement "Digital Rights/Restriction Management" within their hardware, semiconductor manufacturers have been making microprocessors increasingly hostile to end users, since if users were able to access and examine device functions they would naturally disable or reject what is not in their interests.

Hiding undocumented functionality to take over, examine or exfiltrate data from systems via backdoors etc. is a natural progression of this subterfuge and tension between manufacturer and user. This situation of cat-and-mouse hostility should not exist in anything recognisable as a "free market", and hence I deem it "protectionism".

Now the real problem is that these same "multimedia" processors are used in other areas: automotive, aviation, health and even military applications. The "risk trade-offs" bought by big media bleed into these sectors.

Therefore it's clear to me that measures for the protection of "intellectual property" are directly at odds with computer security generally, and are increasingly leading to problems for the sake of one relatively small sector. Sure, the digital entertainment industry is worth many trillions of dollars, but it pales within the larger picture of human affairs. At some point we may have to choose: Hollywood or national security?
You are absolutely right, and media companies are the evilest of evil. But it's a little unclear why Intel felt it had to cave in. What would have happened if it had told Hollywood to go love itself somewhere? It's not like movie producers are remotely capable of making chips.
It could have been pushed down the chain. The media companies wanted to provide streaming, so they talked with Microsoft. Microsoft realized it would be a valuable feature for their OS, and that if they resisted but Apple didn't, they would lose market share because HD streaming wouldn't be available. Then they said the same to Intel: if you don't do it, people will buy AMD chips for HD streaming.
So basically it happened because users see the feature and don't realize how hostile it is until they actually try to access their media in an "unsupported" way.
Of course, if everyone had resisted, streaming services would likely have launched anyway, but that is the classic prisoner's dilemma.
It’s not like Intel have no competition. They probably figured that if they didn’t cooperate with the media industry, AMD or someone else would, and eat their lunch when suddenly Sony movies can only be played on AMD devices.
Well, to be fair to Intel: the only people I know who use a Blu-ray drive in a PC for 4K content rip the content anyway, bypassing the DRM. The masses (in the watching-movies-on-a-PC-via-a-Blu-ray-drive space) have moved on to streaming anyway.
Wdym cave in? It's not like Intel integrated technologies that were pushed on them. Intel developed HDCP and co-developed AACS (the Blu-ray encryption scheme). I dunno how much of a hand they had in AACS 2, but considering it's literally built on SGX, I'm going to claim without evidence that Intel played a significant role in creating AACS 2.
> What would have happened if it had told Hollywood to go love itself somewhere?
Intel's executives would show up to play golf, and find out that nobody wanted to play with them. They might even be no longer welcome at the country club. Power structures coalesce, because powerful people identify with each others' authoritarian desires.
I think you could refine your point here... markets include adverse negotiation, secure applications are a thing, and for that matter, weapons making is a thing. You can't "wish away" that part.
I have a colleague who makes secure DRM for FAANG since long ago, and has a large house and successful middle-class life from it.
OK, a second example: a California woman who went to law school, passed the California Bar Exam (2-day, super difficult), went to work as a junior attorney at a law firm in Los Angeles, and worked her way up for more than ten years to get a promotion. That law firm does intellectual property, and yes, Digital Rights Management too. She now has a successful middle-class life with a house. No one had money to buy her a house; she worked for that, and she did not have the ability to define the rules.
Where is the commerce? The market? The reality, in your dream solutions?
Edit: as a sound designer and open-software advocate, with a book published by MIT Press, please know that some people in the USA try to make a life in the arts, and some of the emphasis here is on the empty hands and pockets that follow. Let's agree: no mafia sound sellers, but also "no" to the completely free flow of all digital material no matter what.
Intel's SGX was at least partially intended to implement DRM [0], and its various-ish predecessor technologies such as old-school Palladium/TPM was expected to do the same [1] (but was ultimately cancelled because of wide backlash).
Thanks, I didn't know these were pretty hardcore protections. Just curious: might these protection layers and protection-protection layers introduce more vectors for attackers?
For now, the decryption keys have been obtained only for older Atom CPUs (e.g. Apollo Lake, Denverton, Gemini Lake).
While these are reasonably widespread in the cheapest personal computers and servers, the impact of examining their microcode is much less than if the decryption keys for some of the mainstream Core or Xeon CPUs had been obtained.
In a more decent world, the manufacturers of CPUs would have been obliged to only sign their microcode updates, without also encrypting them, to enable the owners of the CPUs to audit any microcode update if they desire so, in order to ensure that the update does not introduce hidden vulnerabilities.
No, the microinstruction formats are very different, so also the microprograms must be different.
Even the newer Atom cores, e.g. the Tremont cores from Jasper Lake/Elkhart Lake/Snow Ridge/Parker Ridge/Lakefield have a different microinstruction format than that which has been published now.
For every generation of Intel CPUs, reverse-engineering the microcode requires all the work to be done again; what has been discovered for one generation cannot help much with another.
Even if, well... some intelligence service has developed microcode-level "implants"/malware, they wouldn't necessarily want it to be part of the standard build that gets shipped to all customers, precisely because of the risk of exposure. It's at least as likely that they'd install it on targets as updates, having first secured access by other means (as they'd have to anyway to exploit a backdoor that did ship with the chips). It's possible that some agencies would even be able to access signing keys if that helped simplify "implanting" procedures -- though it might not be necessary, if (like the OPs here) they can arrange microcode-level execution by other means.
This is only the microcode, not the deeper-level Minix that accesses all the I/O and memory, where all the backdoors are suspected. Think of run-time-patched CPU code to patch hardware errors.
The microcode runs on the CPU, and has the (theoretical) ability to directly modify anything running on the CPU simply by executing different instructions. The Management Engine (the component that runs the Minix kernel) is on the external chipset, and has no direct visibility into the state of the CPU. The OS that runs on the Management Engine has also been decodable for years, and while bugs have been identified nobody has found anything that's strong evidence of a back door.
It's never been publicly clear how much is done in hardware and how much is actually done via microcode. This blows the doors open and reveals that there's actually a lot more being done in microcode than previously suspected. What's pivotal is how much more feasible this makes microcode-based attacks against Intel-based systems.
I don't think it's true that there's more being done in microcode than previously suspected? The microcode updates for various CPU vulnerabilities clearly demonstrated that microcode is able to manipulate internal CPU state that isn't accessible via "normal" CPU instructions.
This makes microcode more auditable to most people. Relying on the “security” of most people being unable to inspect their microcode is not really a good position to have.
It is true ("lot more being done in microcode than previously suspected"), but only for Atom cores. Which are much simpler than "Core" cores, and, to be honest, it should be expected for small, simple and slow core - as microcoded execution is much simplier and make sense for these cores.
On the other hand, most of 12-series CPUs contains them :-(
If you build a simple, lower-end chip, you do so by giving it fewer hardware resources - fewer things where the implementation is via gates. That means that, in order to implement the same instruction set, you have to implement more via microcode.
I think that's what's being referred to. Microcoded execution is much simpler in terms of the hardware that you have to implement.
I have a distinct feeling that neither of them has even the slightest clue what they are talking about. If they did understand processor design and had seen pages like this[0], they'd realize that a Goldmont CPU is not dramatically different from other Intel cores, and "microcoding" has virtually no bearing on why it's cheaper to make. The real reasons it's cheaper:
- only three instruction decoders
- max IPC of 3
- no AVX, AVX2, AVX512
- vector registers are limited to 128bit
- generally tops out below 2ghz as base clock with turbo boost into the mid 2s at best
- very basic integrated GPU on non-server versions running at 200-600mhz
Fewer registers, lower IPC, lower clock frequencies, and fewer execution units = cheaper cost.
So explain how microcode had something to do with that again?
Let's take it to the extreme: you need very few resources (transistors) to implement a Turing machine, but given enough microcode (which is a very simple structure on silicon: it is ROM, which is trivial and very dense compared to complex logic) you could emulate any complex ISA on this machine.
For a real-world example, you could have a divider as a very complex logic device, or as a small patch of ROM implementing the "school" algorithm in terms of (hardware-implemented) additions, shifts and subtractions.
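For instance, the "school" method is just restoring division built from shift, compare and subtract, exactly the kind of loop a small microcode routine can express. A sketch:

    def divide(dividend, divisor, width=32):
        """Restoring division using only shifts, compares and subtracts --
        the primitive operations a microcoded divider would loop over."""
        assert divisor != 0
        quotient = remainder = 0
        for i in reversed(range(width)):
            remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down next bit
            quotient <<= 1
            if remainder >= divisor:     # trial subtraction succeeds
                remainder -= divisor
                quotient |= 1
        return quotient, remainder

    print(divide(1000, 7))   # (142, 6)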
> For a real-world example, you could have a divider as a very complex logic device, or as a small patch of ROM implementing the "school" algorithm in terms of (hardware-implemented) additions, shifts and subtractions.
Can you show one example of a commercially produced general purpose CPU that does that, let alone a fairly mainstream x86_64 one? In a purely theoretical sense, what you say is plausible, but in the “real world” it’s simply not done.
I guess I wasn't explicit: I meant something made in the past decade or two. Going back three decades, there's a handful I can think of; going back almost half a century, like the 8086/88, there were even more.
Basic CPU design in 1990 undergrad had everything as microcode, so I'm not sure there was much of any question or any conspiracy, other than that EE is much harder than CS and attracts far fewer people, and thus less information gets spread around, I guess.
If taken seriously, this should have implications for purchasing decisions, and I'm not sure what would happen if some (government) organizations asked Intel to take back their hardware because there's an unpatchable RCE.
I mention the car recalls, or the old Intel chip recalls, or any recalls really, because those are past situations that could be comparable (I'm not implying this will lead to car recalls, only that those are past events that could be looked at when predicting how this might develop).
So after analysis from the community and experts, will we finally get rid of the whole backdoor-conspiracy bandwagon? Or will they just move on to another aspect, or even simply wave it off as an orchestrated and constructed fake? I mean, those people come up with a lot weirder things to advocate for their beliefs.
This isn't a correct characterization of the suspicion that Intel microcode has backdoors in it. The suspicion isn't just based on distrust of authority, like flat Earth, etc, but also on the org having means, method, and opportunity to remotely modify the operation of a CPU. And it operates within the domain of the USG, who have demonstrated a keen interest in weaponizing 0-day exploits.
What better way to acquire a novel 0-day than to simply write one known only to you and distribute it from the source? This is a good plan, but it comes with a substantial risk to Intel, or any company who wishes to maintain a trust relationship with its customers.
That said, I don't think anyone doing this is stupid, and for safety they would not install microcode malware on everyone, just some. This means we will find nothing in general CPUs, and anecdotal reports finding "something" can easily be dismissed as malicious or noise.
The truly paranoid need not worry, even if the microcode is seen to be harmless, there is always the possibility that hardware you buy is interdicted, modified, and sent onward, such that your paranoia can remain intact.
I don't think they will. They want to believe there is a backdoor, or that they are constantly being spied on, and they will make up a narrative for how that happens regardless of whether that explanation is true or even physically possible.
While it really wouldn't surprise me if there was a back door, or some incompetence (real or orchestrated) that functions as one, I also don't think "they" need microcode level backdoors given the state of software security, and the amount of information we give away freely.
I'm pretty sure it's indisputable that we _are_ all constantly being spied upon and tracked, both by multiple nation-states and by a ton of private companies. Believing there is an undiscovered backdoor is absolutely a reasonable position to take.
You're getting unfairly dismissed, so let me take some of the downvote burden.
In this thread, I see implications about big media controlling microcode (which doesn't seem to be impacting piracy — if anything, it's easier than ever before), about governments imminently finding backdoors and trashing an entire generation of chips, and other extreme outcomes that seem wholly out of step with reality. (Not in the "Everyone else is dumb but we few are smart" way; in the "I have not interacted with a business or government ever" way)
I'm sure the 2600 crowd will have their next "Yes but what about the *Long form* birth certificate"-style goalpost shift if there are no backdoors found.
Two Hidden Instructions Discovered in Intel CPUs Enable Microcode Modification
https://news.ycombinator.com/item?id=27427096