Undocumented x86 instructions in Intel CPUs that can modify microcode (twitter.com/_markel___)
443 points by BlackLotus89 on March 20, 2021 | 134 comments



The followup tweet indicates that the CPU has to be in an unlocked state before this is possible, which on a typical system requires a Management Engine vulnerability first. Given what we currently know, this is going to be valuable for people researching the behaviour and security of Intel CPUs and might well lead to the discovery of security issues in the future, but in itself I don't think this is dangerous.

Edit to add: https://www.intel.com/content/dam/www/public/us/en/security-... is a discussion of a previous Intel security issue which includes a description (on page 6) of the different unlock levels. This apparently requires that the CPU be in the Red unlock state, which (in the absence of ME vulnerabilities) should only be accessible by Intel.


Plot twist: there have been a few ME vulnerabilities. And who knows what other purposeful ME backdoors there may or may not be.


No, no! You've got it wrong. It's a backdoor when Huawei does it. When Intel does it, it's just a vulnerability/oversight/mistake. /s


Sure, which is why this is useful to researchers. But the access someone needs to your system in order to exploit the ME vulnerabilities is sufficiently extreme that if someone achieves it you probably have other things to worry about.


I can't get behind justifying one serious security problem with another. Attackers increasingly combine local privilege escalations to move laterally, but they only need one RCE to get in.


Windows can be booted into test-signing mode, which allows the loading of unsigned drivers. We don't consider that a security issue because the privileges you need in order to switch to that mode are equivalent to the privileges you get by switching to that mode. It's the same here - the ME is the root of trust on Intel platforms. If you're in a position to execute arbitrary code on the ME then you've already got the ability to compromise the rest of the system enough to run arbitrary code on the host CPU, and being able to modify microarchitectural state doesn't give you additional privileges.


I’m not saying that’s the case here, but that’s the general problem with the line of reasoning that “hey if you already have permission X then doing Y is the least of your concerns”.

The concept of defense in depth literally relies on each barrier being independent and robust. That’s why you see hardening of Linux’s hibernate even though the common refrain is “well if you have physical access the game is lost”. There are things that even root can’t do even though “hey if you have root the game is lost”. The point of the game is to never lose even in very adverse environments.


The assumption on Intel is that there are no barriers once you're in the ME. You can't defend against a hostile ME. The security model is already violated. Maybe there should be a barrier between the ME and the CPU, but as can be seen here, Intel feels that the ME should be in a position to put the CPU in debug mode, so shrug.


Where can I read about "hardening of Linux’s hibernate"? I'm curious



I suppose one possible scenario is:

- Police confiscate your laptop on some bogus pretext, then return it to you saying you're free to go.

- You open the laptop and find nothing that shouldn't be there. You wipe it, reinstall the OS and continue using the laptop.

- Surprise! The CPU now works for the police, so after some time it installs a rootkit or whatever.

Dunno if the microcode is big enough to do this kind of attack, and perhaps some other firmware is easier to program.

But if someone waves this off saying that's not how the police work in the US, well, the world is larger than the US, and all of this definitely happens in other countries, only without CPU rootkits so far.


Situation without this CPU feature: Cops compromise the ME, disable Boot Guard, compromise your firmware, backdoor your OS directly.

Situation with this CPU feature: Cops compromise the ME, disable Boot Guard, compromise your firmware, backdoor your CPU so it can later backdoor your OS.

There's not really a meaningful difference between these! If there's an exploitable ME vulnerability then the police can absolutely own your system in an undetectable way regardless of whether or not this feature exists. If we were in a different universe where the ME controlled whether or not the CPU was in debug mode but wasn't responsible for any other security features then we'd care about this a great deal more, but as long as compromising the ME already gives you a way to permanently backdoor the system it doesn't make any real difference.


You mean the NSA and the like, but online?


There are more productive ways to think about probability in security. A low probability may imply low risk, but it is not guaranteed to imply low priority to fix.


Am I correct in surmising that a successful run of me_cleaner will prevent the abuse of these instructions?


No. me_cleaner reduces the amount of code running on your ME, and as such reduces the attack surface presented by the ME. But anyone with physical access (which is required for the interestingly exploitable ME vulnerabilities) is in a position to just put whatever ME firmware they want on your system.


Yeah, like a rootkit on the cpu.


Or the only thing to worry about


This seems like yet another thing on the list of “x86 hardware issues that sound worse than they are”.

I’m interested to see what people are able to reverse engineer with these sorts of tools. It wasn’t even that long ago that ucode wasn’t encrypted or integrity-protected. I don’t think AMD started doing that until around 2010.

I’m also curious which hardware versions this works on, since it’s not obvious it’s universal. I’ll be amused if it’s some forlorn low power chip from 10 years ago.


>It wasn’t even that long ago that ucode wasn’t even encrypted with integrity

Whether they're encrypted or not doesn't really matter; what actually matters is whether they're signed or not. There was a talk given in 2017 about trying to modify the microcode in AMD processors, but they were using processors from a decade ago (AMD K10, introduced 2007). That makes me think that processors made in the past decade are probably using signed microcode.



Yeah, although I didn't find the original paper. Reading into it more, they mention when AMD and Intel started signing their microcode.

>Note that Intel started to cryptographically sign microcode updates in 1995 [15] and AMD started to deploy strong cryptographic protection in 2011 [15].


This would still break SGX/remote attestation, no? The chip can correctly say it's running some piece of assembly but if "ret" has been redefined to do whatever I want...


The keys are stripped out if you put it into the required unlock mode.


That would be a good thing, given what those features are usually used for (DRM and other user-hostility).


I would love the ability to redefine call/ret to profile an app without the compiler having to generate different code.


Can’t you do that through ebpf?


My understanding is that eBPF can hook into function calls in the Linux kernel.

Can one do that on a user-space app?


Yes, that is my understanding.
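For what it's worth, the user-space side goes through the kernel's uprobe mechanism, which is what eBPF tooling (bcc, bpftrace) attaches to under the hood. A minimal C sketch of the raw tracefs interface underneath (assumes root and a mounted tracefs; /bin/bash and the 0x32110 offset are illustrative placeholders - look up the real offset with objdump -T):

    #include <stdio.h>
    #include <stdlib.h>

    #define TRACEFS "/sys/kernel/tracing"

    /* Append a command to a tracefs control file. */
    static void write_file(const char *path, const char *cmd) {
        FILE *f = fopen(path, "a");
        if (!f) { perror(path); exit(1); }
        fputs(cmd, f);
        fclose(f);
    }

    int main(void) {
        /* Register a uprobe named "rl" on bash's readline().
           0x32110 is a placeholder offset; find the real one with:
           objdump -T /bin/bash | grep readline */
        write_file(TRACEFS "/uprobe_events", "p:rl /bin/bash:0x32110\n");
        write_file(TRACEFS "/events/uprobes/rl/enable", "1\n");

        /* Stream probe hits; run commands in another bash to see them. */
        FILE *pipe = fopen(TRACEFS "/trace_pipe", "r");
        if (!pipe) { perror("trace_pipe"); return 1; }
        char buf[4096];
        while (fgets(buf, sizeof buf, pipe))
            fputs(buf, stdout);
        return 0;
    }

Note the per-hit overhead is significant, though - nothing like what redefining call/ret in microcode could do.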


This is what I'm interested in since they pushed that so heavily.


So you mean, if I am a state actor able to kidnap the child of a high-level Intel employee... say I'm Joe Biden, I can ask Intel to... remotely unlock my CPU and read arbitrary memory blocks?

Or do you mean Intel would have to physically handle your CPU with a debug cable or whatever?

Because I really don't feel it's okay that the only safety we have from a newly discovered exploit is that there needs to be another newly discovered exploit :D


It is public knowledge that US intelligence agencies actually just hijack computers and equipment on their way to the customer and install hardware backdoors there (Snowden et al., 2014).

It is also known that they have had backdoors in commercial systems as they came off the shelf, but I think usually those were CIA-owned and controlled companies like the Crypto AG phones.

What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor? On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero. So it's hard (for a plebeian like me, anyway) to estimate how those costs/benefits might be weighed up by the US government.


>What is unknown (pure speculation) is whether, for example, Intel CPUs come backdoored straight from the factory floor?

There is also a third possibility: that some intelligence agency invested a ton of cash into finding abusable exploits in these systems, giving them the same access a backdoor would provide.

Also from the Snowden leaks, we know that they have programs with budgets in the millions for finding similar exploits, and that there were similar programs outside Snowden's clearance. And though a bug may cause the same damage to the economy, it wouldn't hurt US prestige in the same way.


If I'm the CIA, I have dozens of highly placed agents or at least informants at Intel. Not necessarily placing backdoors, but finding and not fixing exploits and sending them back to the CIA for later use. It would be extremely cheap; hell, if I'm China, Russia, the UK, or Israel, I'm doing the same thing.


You don't even need Intel employees for that. State actors typically get source access for 'verification' purposes.

Russia and China would get Windows source access for instance.


Budgets in the millions, eh? So two full-time engineers?


Sorry, I misremembered the details. Around $400 million a year for the one program we have details on.


> On the one hand, that would be a powerful capability to have, but on the other hand, the risk of exposure and subsequent damage to the US economy, prestige, etc. would be non-zero

If I were a three-letter agency, I'd bribe/blackmail somebody into inserting intentionally vulnerable code. After all, sufficiently advanced malice is indistinguishable from incompetence.

We've often seen that the code inside firmware, secure environments like TrustZone, etc. tends to lack many of the mitigations for the classic vulnerabilities. Just rewrite one of the ASN.1 parsers in the ME (I'm sure there's at least one), "forget" a bounds check in some particularly obscure bit, and you'd have a textbook stack smash.


You don’t need to bribe anyone; Intel is a US company, so a TLA can just discreetly explain to them how export restrictions work.


Subtly change the RNG implementation so there’s a predictability only you know.


That’s one of the reasons why operating systems provide a proper CPRNG instead of trusting RDRAND.


How would an OS seed an RNG in the cloud? How would you seed an RNG on a headless server in a VM? What about when that VM is copied, possibly while running, in order to duplicate server functionality? There are vulnerabilities and threats here that your comment does not take into account.


You can use virtio-rng.

But really, it's not about operating systems not using RDRAND at all - it's fine to use it as one of the entropy sources; what you don't want to do is use RDRAND directly instead of a CPRNG.
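Concretely, the difference looks something like this in C (a sketch of the idea, not any particular OS's actual implementation; Linux-specific, build with -mrdrnd):

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/random.h>   /* getrandom() */
    #include <immintrin.h>    /* _rdrand64_step() */

    int main(void) {
        /* Good: ask the kernel CPRNG, which mixes many entropy
           sources (interrupt timing, virtio-rng, RDRAND, ...) so
           no single backdoored source controls the output. */
        uint64_t key;
        if (getrandom(&key, sizeof key, 0) != (ssize_t)sizeof key) {
            perror("getrandom");
            return 1;
        }
        printf("kernel CPRNG: %016" PRIx64 "\n", key);

        /* Bad as a sole source: trusting RDRAND output directly.
           It can also simply fail and needs retrying. */
        unsigned long long raw;
        if (_rdrand64_step(&raw))
            printf("raw RDRAND:   %016llx\n", raw);
        return 0;
    }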


Remotely? I think Intel would need to produce a backdoored ME firmware, get the system vendor to incorporate that into a system update and then convince the target to flash that. In that sense I don't know that they'd technically need physical access, but it doesn't really meet most people's description of a remote attack.


Don't use CPUs from companies that have employees that can be kidnapped.


Don't use CPUs from companies that have employees.


Don't use CPUs from companies.


Don't use anything


Don't be so paranoid. Just build your own CPU from discrete NAND gates like everyone else.


A pen and paper is everything you need.


But what if it’s some of that fancy paper or a pen that records everything you write! Might need to manufacture your own pen and paper now...


Hold up there, you don't know if someone's been tampering with the trees; you'll have to grow your own forest first.


The “trusting trust” flaw still applies, better sequence your own species of tree.


And trust the same chemical processes that eventually resulted in the mess we’re in? Start a new universe.


Don’t


Doesn't seem like that would leave many possibilities... ;)


I guess out of business CPU companies are one easily accessible category.


While true, it's pretty likely their IP was sold to still-in-business companies. ;)


Nothing against the original post (which just says what they found), but this seems really overblown. Yes, of course Intel has instructions to update the microcode, since that's a thing that they do. Neither is it particularly surprising that they didn't bother to document operations that only they would ever have reason to use (in their eyes). If, as a sibling comment notes, you have to be in a specific unlocked state to use this instruction, it should be perfectly safe assuming someone hasn't compromised other layers of security. So yes, this is certainly interesting, and it could be used as part of a chain of exploits to do some really nasty things, but by itself this seems like barely news?


Should products be fully documented so that consumers can make an informed decision?


I don't know. It's Windows vs Linux all over again; I think open is better, but Intel has never (AFAIK) pretended to be open. If a customer knows "Intel/Microsoft controls this", I personally feel like that's an acceptable tradeoff. And for people who find that unacceptable, there's Debian on POWER9. But if you're picking Intel, it's not like these opcodes change anything; we already have the ME sitting there controlling the machine.


Did they release specs so that customers could build their own ME to manage the CPU? That would be interesting.


They don’t even release instructions on how to turn it off.


I’m not sure if this is sarcasm or not. If you are genuinely curious, the answer is “no.”


Why would they do that?


I can't imagine why someone would hide details of the product they are selling. That is dishonest and should be illegal. Imagine if someone sold food but omitted some ingredients from the list, or you bought a house with a basement but got neither the key nor any information about what's inside.


Quite the contrary, I suspect that telling the details would be illegal. The ME predates Snowden and is widely believed in the security community to be subject to NSA meddling. It may be that Intel is the recipient of an NSL preventing them from disclosing too many details.

But for whatever reason they have been very tight-lipped.


Does all the software you buy disclose all its functionality?


Yes! God, I miss the days when things actually shipped with manuals on how everything works. I have a few old pieces of hardware I keep manuals around for just to remind me of what the writing style is supposed to look like.


Intel has a 5066-page manual available for download here:

https://software.intel.com/content/www/us/en/develop/downloa...


5066 pages says nothing about completeness. It's undisputed that many aspects of the CPU are undocumented, considered a business secret.


manual != fully documented...


Back in the day they were though. Full schematics, microcode listings, and a flow chart to understand the microcode.


Can you give an example of a processor whose microcode was publicly documented?

(Barely-public documents like patent filings don't count.)


For the 6502 there were manuals that detailed how many clock ticks every instruction takes and what happens during the ticks, which is pretty much the same.


I don't know what era you are pining for.

The original 6502 had many undocumented instructions. Most of them not very interesting, but certainly undocumented. http://nesdev.com/undocumented_opcodes.txt

Here is a detailed examination of undocumented Z80 behavior. http://www.z80.info/zip/z80-documented.pdf

The 8080 had undocumented instructions too.

So now we are all the way back to pdp series. The pdp-8 had plenty of undocumented instructions.

Should we go back further? It doesn't stop.


It’s not like these instructions were ‘secret instructions’ reserved for use by a secret cabal. They’re just leftovers from ‘don’t care’ parts of the microcode. I remember there are quite a few that simply hang the processor because their microcode doesn’t include the ‘increase the program counter’ step, so they turn into an infinite loop.


The PDP-8 didn't really have undocumented instructions. It's just that the OPR instruction was "execute immediate as microcode word". There were a lot of ways you could flip those bits to get different effects, and people gave some of them their own mnemonics in third party assemblers, but the effects of the bit patterns were fully publicly documented by DEC and distributed with the machines.

The same was true of pretty much all non-microcomputers through the 80s.


Every 6502 family datasheet that I've read described how many clock cycles an instruction took (since that was critical for some applications), but not what happened during those cycles. That was considered an implementation detail -- and did encompass some externally observable behavior, particularly in terms of what happened on the address/data busses during multi-cycle instructions.

Here's a typical datasheet from Rockwell, for example:

http://archive.6502.org/datasheets/rockwell_r65c00_microproc...

It's also worth noting that the architecture diagram on page 5 of this document isn't entirely accurate, either. It's closer to a programmer's model than an RTL description of the CPU; some "hidden" registers used during certain operations (like indirect addressing modes) aren't shown.


That’s .001% of the information that describes a processor. Anyway, people measure it and provide it today too.


What is the other 99.999% of the information? Yes, there was some undefined behavior that was not in the official manuals, but certainly not 100,000 times what was in there.


Pretty much every DEC and IBM machine up until the 80s.

Bitsavers has the documentation for the PDP-11/40 I used to have here. http://www.bitsavers.org/pdf/dec/pdp11/1140/. It includes microcode listings both assembled and as flow charts. My PDP-11 had actually been modified before I got it with the microcode ROMs in ZIF sockets, as someone had been writing their own microcode.


In most cases there is a public API which is documented and a private API which is not.

In this specific case, users do not have the ability to use this instruction, so there is no reason to document it.


Why is it acceptable for companies to keep such information away from a consumer?


Because it is not part of the publicly accessible product, and the company is free to change it at will from one version of the product to another, or even between multiple batches of the same version.

You see it from a "they should inform us" POV, but there is also the "there is no guarantee it will be there" side.


What do you mean it's not publicly accessible? A bank's vault is not publicly accessible because there is a meter of steel, concrete, and armed guards.

We can debate whether some private bank APIs are publicly accessible (you can still DDoS them). This shit, however, is inside my property, which I bought with my money, and it might end up being used by some malware. It is by definition accessible, and could be a security bomb.

How can any believer in the free market defend companies secretly selling consumers the equivalent of a dangerous remote-control mechanism embedded in their car?

The market only works if sellers aren't lying all the time, and in the IT industry we've made that the norm.


You're talking security; he was talking official documentation, and thus guarantees about it being there in that form.

Your bank doesn't guarantee you that lock model X45-b from company SUPERLOCK will protect your gold in their vault, even if that's what they use right now; it's not specified in your contract with them, they just guarantee to protect your gold. This way they can change the lock without being in breach of contract.


Whether or not it is acceptable it is certainly the standard method used.

Even websites often have an internal API and a different public-facing one.

This is how almost everything works in the digital age.


Here's how Ford tests its car seats: https://www.theverge.com/tldr/2019/1/11/18178402/ford-robot-...

Should we, as consumers, demand that Ford productize this robot and make it publicly available before we'll buy a Ford car?


If they shipped the robot in cars because that was cheaper than removing it, then in an ideal world they should document that, IMO. Obviously we don't live in an ideal world.


Look up "Intel VISA" if you want to go down one of the many rabbitholes of undocumented x86... it makes me sad that there are whole subsystems in the hardware whose documentation is not publicly available; not from the security perspective, but from the "I bet someone could do some really interesting things with this functionality" perspective, like what LOADALL enabled (unreal mode, real-mode paging, etc.) decades ago.

Intel isn't alone; search "9c5a203a" for some equally interesting stuff on the AMD side.


Welp, I went down that rabbit hole.

I will say, the things security researchers find are simply amazing to me.


Sure, I'll bite! <g>

I've been down so many "rabbit holes" aka "levels of abstraction" aka "abstraction hierarchies" aka "turtles on top of turtles" aka "things stacked on top of other things" (compare with Monty Python's "society for putting things on top of other things") in my life as a Programmer (and in my secret life as a Philosopher! Shhh, don't tell anyone! <g>. And yes, I know... "Keep the day job"... <g>) that one more ("I'll go down this last rabbit hole, and then I'll be finished, really!") really won't make that much more difference! <g>

Before I get started on that one though, two more quick references about "abstraction hierarchies" aka "rabbit holes".

On the one hand, we have Joel Spolsky's magnificent essay, "The Law of Leaky Abstractions" (https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...), and on the other, we have the Biblical story of "The Tower Of Babel".

Now, I am not a member of any religion, and I don't endorse any religious text over any other, but any intellectual worth his salt should read both of these writings and compare them.

Once you've done that, then compare what's being said to abstraction hierarchies of all sorts.

We see abstraction hierarchies in code, we see them in language (especially when a language changes over many years, as it changes, fractures and sediments). We see them in Law (Law in the U.S. as it is practiced today can be compared to an abstraction hierarchy -- not at all unlike The Tower Of Babel as a metaphor for this phenomenon, or Joel's essay as a more complete modern-day explanation of it...)

We see them in a number of places and systems.

We even see them in x86-land...

That's because the x86 (as far as all of my research has suggested, and someone correct me if you think I am wrong, with links/references please!) -- is really a RISC core with a separate microcode execution engine (the level of abstraction above the RISC core), and has been since at least the Pentium Pro (P6, 1995):

https://stackoverflow.com/questions/5806589/why-does-intel-h...

http://i.blackhat.com/us-18/Thu-August-9/us-18-Domas-God-Mod... (Or Google "Christopher Domas God Mode Unlocked") (Also, see the patents listed in this PDF)

https://arxiv.org/pdf/1910.00948v1.pdf

https://troopers.de/events/troopers16/655_the_chimaera_proce... ("We take a 12-core processor, inject microcode to simulate a PowerPC (2 cores) and a MIPS processor (2 cores), restrict 2 cores to i386 and leave 4 cores to amd64.")

So with all this in mind, let's turn back and look at VISA:

https://threatpost.com/undocumented-intel-visa-tech-can-be-a...

>"The Intel technology is called Visualization of Internal Signals Architecture (VISA), and is used for manufacturing-line testing of chips.

However, Maxim Goryachy and Mark Ermolov, security researchers with Positive Technologies, said in a Thursday Black Hat Asia session that VISA can be accessed – and subsequently abused — to capture data from the CPU using a series of previously-disclosed vulnerabilities in Intel technology."

You know, that sort of reminds me of lyrics from The Who's "You Better You Bet":

"I showed up late one night with a neon light for a VISA,

But knowing I'm so eager to fight can't make letting me in any easier..."

Hmmm, now who is "me" in the above context?

?

Could it be... (With my apologies to SNL's Dana Carvey as "The Church Lady")... a TLA? (Three Letter... er, group? <g>)

Hey, if any power-that-be is offended, then I'd like them to know that I only quote things that I find on some places on the Internet -- on other places on the Internet....

In other words, I am only the messenger... one of millions and billions, that also quote things that they find on the Internet -- on other places on the Internet...

(Side note: There should be a Monty Python sketch for that topic! <g>)

In other words, "don't shoot the messenger...(s)" <g>...


Is there a way to use this for consumer good?

1. Like unofficially turning on ECC support on non-ECC chips, assuming the hardware allows for it?

2. Or increasing total supported RAM size on lower-end chips?

3. Unofficial open-source microcode updates (performance/security) for older unsupported chips?


Reminds me of a GKH story about not even kernel devs truly understanding CPUs.

Here is the timestamp

https://youtu.be/t9MjGziRw-c?t=585


It's absolutely incredible that I can't buy a processor and be able to run a command on the bare metal. We don't even know how many levels of indirection there are between x86 machine code and the actual hardware.


What is bare metal?


The instruction in question looks to be encoded as 0f 0a, which, sure enough, is missing from the official reference - https://files.catbox.moe/ktzqfg.png
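If you want to poke at this yourself, here's a minimal sketch for probing whether your CPU raises #UD for that byte sequence; on anything but a red-unlocked part it should take the SIGILL path (built purely from the opcode bytes above - nothing here is Intel-documented):

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf env;

    static void on_sigill(int sig) {
        (void)sig;
        siglongjmp(env, 1);  /* also restores the saved signal mask */
    }

    int main(void) {
        struct sigaction sa = { .sa_handler = on_sigill };
        sigaction(SIGILL, &sa, NULL);

        if (sigsetjmp(env, 1) == 0) {
            /* Emit the raw 0f 0a bytes; no assembler mnemonic exists. */
            __asm__ volatile (".byte 0x0f, 0x0a");
            puts("executed - this CPU accepted 0f 0a?!");
        } else {
            puts("#UD - 0f 0a is undefined here, as expected");
        }
        return 0;
    }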


According to https://www.geoffchappell.com/notes/windows/archive/linkcpu.... this was at one point a proposed cache flush instruction (as the mnemonic suggests). More detailed information here:

https://www.os2museum.com/wp/curious-instructions/


The other appears to be 0f 0e, at least if you go off this tweet: https://twitter.com/eigma/status/1373155650432290819 . 0f 0e is a 3DNow! instruction that AMD supported but Intel did not, which would explain why it has a mnemonic in Ghidra even though it's undocumented for Intel processors.


Interesting. Not sure how trustworthy that account is. The only 2-byte sequence starting with 0f that is documented by AMD and not by Intel is, in fact, 0f 0e; that is FEMMS, which does match up with the length of the obscuring bar in the first reply.

And it does seem plausible that Intel would pick proximal instructions for 'read' and 'write' of uarch state.


The only reason I consider it trustworthy is that it was retweeted by Mark Ermolov, which makes me think they got it right.


As someone who isn't versed in computer hardware engineering: can nothing be done here? Why is there a new backdoor every few months? Is it an architecture issue, or do we simply demand too much for too cheap? How do we lock this down?


This post is a nothing-burger. That an instruction exists that permits microcode updates is not new knowledge. Obviously there is one, if you can update microcode. All that's new here is that reverse engineering has filled in one of the holes in the Intel SDM.


The previous publicly known mechanism required signed updates. This mechanism allows anyone to twiddle the bits.


> This mechanism allows anyone to twiddle the bits.

This remained unclear to me. Other comments say the CPU needs to be in a red unlocked state, whatever that is.

The screenshot shows UEFI. So one could guess the CPU is in such a state before the operating system gets loaded. But the operating system typically loads a microcode update, after which the CPU should no longer be in the unlocked state.

So for "everyone can fiddle with the bits", that would require to run a modified bootloader first. Which should not be possible thanks to secure boot.

So yes, maybe it's a small step to another highly complex exploit. But it's not that everybody running a securely booted operating system (which should be everybody) can start fiddling with bits.

Edit: There is a difference between your own bits and someone else's bits. Yes, on your own machine you can run a modified bootloader, and that's a good thing. Putting a modified bootloader onto another machine should not be possible (until someone breaks secure boot, but I don't think that has publicly happened).


> So one could guess the CPU is in such state before the operating system gets loaded.

That would be an incorrect guess. Getting into this mode requires the cooperation of the ME. If you're Intel then that means you use parts with different fusing and boot some magic ME firmware that lets you do this. If you're not, you exploit a vulnerability in the ME (something that's currently only possible on older hardware) to do so. You can't enable this by simply replacing the bootloader.


> So for "everyone can fiddle with the bits", that would require to run a modified bootloader first. Which should not be possible thanks to secure boot.

Can't the bootloader be signed with a MOK (machine owner key)? That might allow this to work (physical access required, no doubt).

Alternatively, if the bootloader doesn't load microcode, just prevent the OS from loading microcode and the system will be in the attackable state?


By "anyone", I mean not Intel. Previously only Intel had the private key for signed microcode updates, and no one else even knew the format of microcode. Even with control of the machine you couldn't play with the microcode or know what it did. This allows read/write access for researchers.

It's deeper than secure boot kind of stuff.


Firmware is not protected by Secure Boot. It can be protected by things like Boot Guard, but at least that one (Intel's) requires pairing the board and the CPU, so it can only be done in laptops and other prebuilt OEM systems.


What do you mean by firmware here?

Microcode? No it's not. But as long as you only load bootloaders or operating systems that are signed, it doesn't matter that they could fiddle with the bits as long as the signature guarantees they don't (in any undesired way).

Or ME? Well, that seems to be a complete security nightmare.


Microcode is protected by its own signing thing, IIUC.

Firmware is both ME firmware and x86 firmware — the UEFI implementation, FSP, everything else that runs on early boot.


If you’re not doing this as a hack but to get more control over your own processor, you can just turn off Secure Boot; it’s a BIOS setting.


Sure. But preferably install your own keys and sign the bootloader and operating system yourself if you run untrusted code on the machine.


You can't install your own microcode signing key. Only Intel's is provided, and it's in unmodifiable mask ROM.


Now the question is "What does the microarchitecture really look like so we can write our own x86 microcode?"


The problem with microcode is that it’s specifically tailored to each specific microarchitecture. The microcode for, say, Skylake, won’t work on Haswell. And it definitely won’t work on any AMD CPU.

Intel and AMD almost certainly have compilers/assemblers of sorts that handle turning the microcode assembly-of-sorts into the correct bits, but they’re not portable.


I'd settle for a single specific microarchitecture, just to see what was possible.


Oh for sure. Even if it’s the P6’s microcode (the first with updatable code), I’d take it. I’ve always wondered what microcode looks like in these processors, and how it functions. There is “p6tools”[0][1] which looks interesting. But if Intel released info on it, that’d be really cool.

Sidenote: Ken Shirriff has reverse engineered the ARM1’s microcode,[2] but it’s a “horizontal” microcode (bits control CPU blocks directly) while x86 uses “vertical” (RISC-like) microcode.

[0]: https://github.com/peterbjornx/p6tools

[1]: https://www.youtube.com/watch?v=4oFOpDflJMA (slides: https://hardwear.io/netherlands-2020/presentation/under-the-...)

[2]: http://www.righto.com/2016/02/reverse-engineering-arm1-proce...


There was an interesting Black Hat talk about automatically finding undocumented instructions. [0]

There's also an interesting comment on Twitter about the instruction in the original post. [1]

[0] https://www.youtube.com/watch?v=KrksBdWcZgQ&t=3

[1] https://twitter.com/eigma/status/1373155650432290819


Can it be used to make RDRAND always return the same number?



So, which CPUs are actually open-hardware and available to consumers with mainboards that support them? Is RISC-V going to be this?


None. Well, yeah, I think SiFive has HDL sources for quite a lot of their stuff up on GitHub, but it's not like you can just compile them into production silicon at home; heck, you can't even verify that the silicon was actually compiled from that source. (Yes, there's research into verifiable silicon, but it's not like everyone has ultra-high-end electron microscopes and whatnot at home, lol)

And of course nothing about RISC-V implies that production implementations will be open source at all.


Good thing Apple is moving to M1; no more of that Intel crap. And stick with AMD for Linux.


I wonder what interesting things are in Apple's completely undocumented chips? There is that famous saying about known unknowns and unknown unknowns...


No more or less so than Intel or AMD. Or is the problem that the entire implementation is not public? Because even RISC-V doesn't require that.


That isn't true. AMD aren't great at it, but Intel publish thousands and thousands of pages of manuals. They don't have the secret sauce in them per se, but Apple literally will not publish jack shit about anything inside the M1.


"Intel x86 documentation has more pages than the 6502 has transistors": https://news.ycombinator.com/item?id=13126248

There's also the "unpublished" stuff that occasionally leaks here and there; not something which I suspect will ever happen with Apple, but you never know...


Untrue. There's more documentation on Intel's CPUs than Apple provides for an entire M1 laptop.


.. you mean the ARM specifications?


Out of the frying pan into the fire.


do you think the M1 doesn't have security bugs?


Apple's M1, too, supports Spectre.

>The demonstration website can leak data at a speed of 1kB/s when running on Chrome 88 on an Intel Skylake CPU. Note that the code will likely require minor modifications to apply to other CPUs or browser versions; however, in our tests the attack was successful on several other processors, including the Apple M1 ARM CPU, without any major changes.

https://security.googleblog.com/2021/03/a-spectre-proof-of-c...



