
Intel x86 Root of Trust: Loss of Trust - bcantrill
http://blog.ptsecurity.com/2020/03/intelx86-root-of-trust-loss-of-trust.html
======
einpoklum
The "trust" here is a complete misnomer. "Trusted computing" should be called
"traitorous computing", where your computer has a module which might be
controlled by (fundamentally antagonistic) remote third-parties. _They_ can
trust your system to _betray_ you in their favor.

Traitorous computing should not exist and a pox be upon the heads of everyone
who let such modules make it into our computers.

~~~
derefr
Who is "you" in this scenario?

I want to be able to secure my computer (an ATM, say) against people with
physical access to it. A root of trust (that the original purchaser of the
device controls) allows for that.

Or, to be slightly more dark, I, as an enterprise IT administrator, don't want
the employees fucking around with the hardware I deploy, even when they have
all day to poke and prod around. _I'm_ the root user of those workstations,
not them. I need to be able to enforce enterprise security policies on them,
and I can't do that if they can "jailbreak" the company's computers. (They
want to run arbitrary code for personal reasons? They can do it on their own
arbitrary personal devices, then, for which I have conveniently provided them
a partitioned-VLAN guest network to join.)

~~~
to11mtm
> I want to be able to secure my computer (an ATM, say) against people with
> physical access to it. A root of trust (that the original purchaser of the
> device controls) allows for that.

Unless you're running something like a Raptor Talos II, you don't really have
the root of trust. You have a branch off of the manufacturer's root.

That may be better for some enterprises, but in this modern age is that really
enough? Consider how the PLA was involved in the hacking of Equifax.

Until you can review the code yourself and verify the binaries, you don't
really have the root of trust. Someone else does. (I'm barring other types of
shenanigans here, but it's the next logical step.)

~~~
derefr
I mean, for the kind of highly-trusted "ruggedized" scenario represented by an
ATM, one would hopefully get their hardware from a manufacturer that exists
under a political regime they have no enmity with, or are perhaps even
allegiant to. (That's half the reason many US government officials and
contractors used Blackberries: the US government could—given the political
realities of the time they live in—trust a device whose chips were verifiably
made in Canada.)

For the workstation scenario, though, you don't really care who has the
"ultimate" root, just so long as you can get whoever that is to help _you_ to
stop a particular class of attacker (e.g. your own employees, contractors, and
any "visitors" in the building) from getting root. It's fine if the PLA has
root on the boxes, because the boxes aren't actually storing trade secrets or
anything; the point of having pseudo-root on the boxes is, in fact, to enforce
a security regime that ensures your employees _don't_ store any trade secrets
on the boxes!

See also: being an "organization owner" in an enterprise SaaS service. Sure, I
can't stop Google from snooping my GSuite data—but I'm also _paying_ them to
host that data for me, and e.g. selling it would be a violation of the
contract. Even though they _can_, in theory, do it, they're economically
incentivized against doing so (and doubly so, because if they did it once and
got found out, they'd never make any GSuite money again.)

> Unless you're running something like a Raptor Talos II, You don't really
> have the root of trust.

Mind you, there are "multiply-descendant root-of-trust" setups that are quite
common these days. In modern Apple devices, you've got an Intel processor
doing most stuff, but then the Apple-controlled T2-chip domain doing
encryption stuff, with its own boot chain completely isolated from the Intel
one.

~~~
oneplane
You are still just a trust leaf, not the root. The root is a ROM you cannot
read or change on the Intel side, so you have no trust control there (only
delegated trust, which for some people is no trust at all). With no ability to
verify it, an exploit like this is not something you can detect, and as such
it breaks the trust chain.

~~~
dboreham
If you're talking about SGX the root is really the Intel-run remote
attestation servers, no?

~~~
oneplane
Yes and no: the SGX flow models a dual-root method where two PKI roots must be
trusted for success, and at the same time SGX is only as safe as the
microcode, which can be overridden at this point. Even with the CPU signature
and the Intel signature, SGX attestations can still be faked if you can
compromise the CPU and SGX while still running the signing routines.

Edit: I wonder if there is any feasible way one could do this without trusting
the CPU at all, but I suppose that completely defeats the point.

~~~
dboreham
Hmm. I didn't realize the key, or at least the signing function, was in the
microcode. So if the microcode can be changed by an attacker, they can have it
sign <fake code signature> but go ahead and run <evil code> instead?

~~~
oneplane
Yes, and that was supposed to be protected by the CSME and ACM, but because
those can be circumvented, so can microcode updates.

Normally a microcode update would be verified as well, but since the CSME has
been hacked in the past and its verification can now be bypassed, it is
probably just a matter of time.

It used to be the case that an internal ROM with a unique one-time-programmable
entry that can only be read internally was safe enough, but with decapping,
glitching, and the breaking of PKI chains, that is getting weaker and weaker.

------
blendergeek
It looks like users may finally have complete control over their Intel
computers without Intel having the final say. I, for one, am quite happy about
this.

~~~
lxgr
This sentiment seems to be rooted in a misunderstanding of what trusted
computing is trying to achieve on a fundamental level.

The idea is not to "take control over people's computers", i.e. to undermine
your trust in your own computer. It is rather to enable somebody to gain some
level of trust in the computations happening on somebody else's computer.

Yes, this technology is commonly used for DRM, and that was one of its
earliest applications. But it's not limited to that. Trusted computing can
switch the roles and give you as a user certainty over the computations a
third party provider performs in the cloud on your behalf. The Signal team is
doing a lot of very interesting experiments there [1].

If your concern is a hardware backdoor or something similar, this is less of a
question of trusted computing, and rather one of trust in hardware vendors.
Your hardware vendor can screw you over entirely without TPM, TEE, secure
elements and the like.

On the other hand, Intel's trusted computing platform being horribly broken
does not magically give you FOSS replacements for all the firmware, ROMs and
microcode running on the dozens of peripherals in your computer.

[1] [https://signal.org/blog/secure-value-recovery/](https://signal.org/blog/secure-value-recovery/)

~~~
conradev
I think we need to come up with solutions to problems like key escrow (i.e. in
Signal's case) that don't require trusted computing because a single root of
trust for hardware is a single point of failure and depends on trusting the
hardware manufacturer.

There are a lot of possibilities with distributed computing

~~~
josh2600
Do you have any narrative of how to do key recovery safely without an enclave
or a human in the loop?

I’ve spent a lot of time thinking about this and I don’t really know how to do
it without one of those two things.

Edit: like I hear you saying there are possibilities in a distributed
computing world, but I don’t have any idea what distributed computing enables
for key recovery (except possibly k of n schemes but that’s just replication,
not safety).

Edit 2: also, presume that users suck at key management and can’t remember
long password strings, 24 words, or be trusted to store a key for a meaningful
period of time.

~~~
lxgr
It would still be an enclave of sorts, but white box cryptography is generally
trying to achieve a similar goal as trusted computing, without relying on
trusted hardware.

~~~
josh2600
I don’t think the enclave you’re describing exists, nor do I believe there is
an enclave that is untrusted hardware.

Do you have an example of such an enclave, and how it would operate without a
remote attestation service, in the model where a user can trust that a
distributed network they don't control is safeguarding their key?

~~~
sounds
If homomorphic encryption advances to the point where it's usable, that would
be an example of security on untrusted hardware.

(But I suppose that just proves your point.)

~~~
josh2600
Yes. I think if homomorphic encryption existed in a way that was super fast,
we would be using that. It's really far off, as far as I can tell.

~~~
girvo
It's definitely real, but as you stated, incredibly slow (and noisy, though
that's dealt with by refreshing). It has limitations currently, but there's
good work right now lifting those, so it's definitely something to keep an eye
on moving forward.
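To make the idea concrete, here is a toy Paillier cryptosystem, an *additively* homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can compute on data it cannot read. (Insecurely tiny parameters, purely illustrative; fully homomorphic schemes like the ones discussed above support arbitrary computation, at far greater cost.)

```python
# Toy Paillier: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
import math
import random

p, q = 47, 59                  # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # simplification valid because g = n + 1

def encrypt(m: int) -> int:
    while True:                # pick a random r coprime to n
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

a, b = encrypt(20), encrypt(22)
assert decrypt(a * b % n2) == 42   # the sum, computed on ciphertexts only
```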

------
amluto
> Intel CSME firmware also implements the TPM software module, which allows
> storing encryption keys without needing an additional TPM chip—and many
> computers do not have such chips.

And that was the real error. The TPM should be a TPM. It could be on die, but
it should be an entirely isolated device with its own RAM, no DMA, no modules,
and no other funny business.

~~~
lima
Internal TPM is more secure for attestation. You can MitM the LPC bus with an
external TPM, faking PCRs.
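For context, a PCR is just a running hash chain; each boot stage "extends" it. A sketch of the extend operation (toy Python, not the actual TPM API) shows why a bus interposer can fake it: the TPM only ever sees the digests that arrive on the bus, so replaying the expected digests reproduces the "good" PCR value regardless of what actually ran.

```python
# Sketch of TPM PCR "extend" for a SHA-256 PCR bank.
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = SHA-256(PCR_old || SHA-256(measurement))
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

pcr = bytes(32)  # PCRs start at all zeroes
for stage in [b"bootloader-v1", b"kernel-v1"]:
    pcr = pcr_extend(pcr, stage)
good_pcr = pcr

# An interposer that ran evil code but replays the "good" digests on the
# bus produces an identical PCR value; the TPM cannot tell the difference.
pcr = bytes(32)
for replayed in [b"bootloader-v1", b"kernel-v1"]:
    pcr = pcr_extend(pcr, replayed)
assert pcr == good_pcr
```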

~~~
gruez
>You can MitM the LPC bus with an external TPM, faking PCRs.

not an issue if it's on-die, as the parent suggested.

~~~
lima
You are right, of course. My bad!

------
sounds
Intel claims they were already aware of this vulnerability in CVE-2019-0090.
ptsecurity believes there is more work to do here though.

To me it sounds like Intel is not thrilled with ptsecurity's work, and may not
be awarding ptsecurity a bounty or recognition for this. But that's just my
two cents.

------>8------ quoting from the article ------>8------

We should point out that when our specialists contacted Intel PSIRT to report
the vulnerability, Intel said the company was already aware of it
(CVE-2019-0090). Intel understands they cannot fix the vulnerability in the
ROM of existing hardware. So they are trying to block all possible
exploitation vectors. The patch for CVE-2019-0090 addresses only one potential
attack vector, involving the Integrated Sensors Hub (ISH). We think there
might be many ways to exploit this vulnerability in ROM. Some of them might
require local access; others need physical access.

As a sneak peek, here are a few words about the vulnerability itself:

1. The vulnerability is present in both hardware and the firmware of the boot
ROM. Most of the IOMMU mechanisms of MISA (Minute IA System Agent) providing
access to SRAM (static memory) of Intel CSME for external DMA agents are
disabled by default. We discovered this mistake by simply reading the
documentation, as unimpressive as that may sound.

2. Intel CSME firmware in the boot ROM first initializes the page directory
and starts page translation. IOMMU activates only later. Therefore, there is a
period when SRAM is susceptible to external DMA writes (from DMA to CSME, not
to the processor main memory), and initialized page tables for Intel CSME are
already in the SRAM.

3. MISA IOMMU parameters are reset when Intel CSME is reset. After Intel CSME
is reset, it again starts execution with the boot ROM.

Therefore, any platform device capable of performing DMA to Intel CSME static
memory and resetting Intel CSME (or simply waiting for Intel CSME to come out
of sleep mode) can modify system tables for Intel CSME pages, thereby seizing
execution flow.

------>8------ quoting from the article ------>8------
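The three numbered points above boil down to an ordering bug: the page tables live in CSME SRAM before the IOMMU turns on, and a CSME reset reopens that window. A toy Python model (all names hypothetical, just to make the sequence concrete):

```python
# Toy model of the boot-time window described in the quoted article.
class ToyCSME:
    def __init__(self):
        self.sram = {}             # stands in for CSME static RAM
        self.iommu_enabled = False

    def reset(self):
        # Point 3: resetting the CSME also resets the IOMMU configuration.
        self.sram.clear()
        self.iommu_enabled = False
        # Point 2: the boot ROM sets up page tables first...
        self.sram["page_tables"] = "trusted"
        # ...and only later enables the IOMMU; the vulnerable window is here.

    def enable_iommu(self):
        self.iommu_enabled = True

    def dma_write(self, key, value):
        # Point 1: with the IOMMU off, external DMA agents can reach SRAM.
        if self.iommu_enabled:
            return False
        self.sram[key] = value
        return True

csme = ToyCSME()
csme.reset()
hit = csme.dma_write("page_tables", "attacker-controlled")  # inside window
csme.enable_iommu()
miss = csme.dma_write("page_tables", "too-late")            # window closed
assert hit and not miss
assert csme.sram["page_tables"] == "attacker-controlled"    # flow seized
```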

------
dmitrygr
It is telling that not a single comment here sees this as a bad thing. Maybe
Intel should take the hint. Users want to own their hardware!

~~~
wmf
People commenting on this thread are a very self-selected group.

~~~
takeda
Any consumer who understands what it is for would be against it.

TPM is essentially a device that takes control away from the computer owner,
it is protecting some company's software from YOU.

~~~
sabas123
As if that is necessarily a bad thing. It also means that it helps me defend
against software posing as me.
against software imposing as me.

------
paxswill
I'm curious whether Apple's work on hardening their secure boot process on x86
affects this at all. For those unaware, this video [0] covers it in about
seven minutes. Basically, they claim to be enabling the IOMMU with a basic
deny-everything policy, so that when the changeover to executing from RAM
occurs and PCIe devices are brought up, the IOMMU is able to deny possibly
malicious access to the firmware image.

It _sounds_, from the end of the article, like there are separate DMA/IOMMU
protections for the CSME, but I'm not familiar enough with things this far
down the stack to know for certain.

[https://youtu.be/3byNNUReyvE?t=124](https://youtu.be/3byNNUReyvE?t=124)

~~~
morpheuskafka
It is their proprietary T2 chip that controls things like FileVault (full-disk
encryption) and Touch ID. So a vulnerability on a Mac would not be nearly as
severe as on Windows, where this can eventually compromise the fTPM used for
BitLocker encryption (dTPMs wouldn't be vulnerable, but their integrity
protection can be bypassed by messing with their physical connections to the
CPU).

The T2 chip has its own Secure Enclave and immutable BootROM, and it
supposedly verifies the Intel UEFI ROM before it is allowed to load, and then
the CPU reads this from the T2 over SPI. So it would seem that this boot
process is not weakened by a compromise of the Intel key, as only Apple can
sign UEFI updates to be loaded onto the T2 chip.

Source:
[https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/apple-platform-security-guide.pdf](https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/apple-platform-security-guide.pdf) (long PDF)

------
osy
Related: I wrote a (maybe not 100% accurate) low-level summary of the x86
secure boot model here a while ago:
[https://osy.gitbook.io/hac-mini-guide/details/secure-boot](https://osy.gitbook.io/hac-mini-guide/details/secure-boot)

------
unnouinceput
For my upgrade this year (after 20 years of only using Intel), I decided to go
with AMD because of its fewer vulnerabilities. I had my doubts, but this
article made me decide it's time to go the AMD route.

~~~
Reelin
Unfortunately, AMD has PSP. [1] ARM has TrustZone. [2] You'd have to get a
system with a POWER9 [3] chip, such as the Talos II from Raptor. [4] That has
quite a price tag though, on account of not being mainstream.

[1] [https://en.wikipedia.org/wiki/AMD_Platform_Security_Processor](https://en.wikipedia.org/wiki/AMD_Platform_Security_Processor)

[2] [https://en.wikipedia.org/wiki/ARM_architecture#Security_extensions](https://en.wikipedia.org/wiki/ARM_architecture#Security_extensions)

[3] [https://en.wikipedia.org/wiki/POWER9](https://en.wikipedia.org/wiki/POWER9)

[4] [https://www.raptorcs.com/TALOSII](https://www.raptorcs.com/TALOSII)

~~~
dmitrygr
It should be noted that the ME and PSP are both (a) a technology to implement
a super-root over your entire system and (b) an implementation of said super-
root environment that you do not control and cannot opt out of. TrustZone is
only (a): it just defines a technology that may be used to implement such a
thing, but in itself it is harmless and does not actually do anything.

There are chips you can buy that do not come with any TrustZone code, and you
may write your own to put in there, if you so wish.

------
qubex
Some time ago I considered (and rejected, apparently due to a bad review
caused by a freak bad sample) equipping my unit with POWER9 systems from
Talos.

I am now reconsidering the idea.

[https://www.raptorcs.com/TALOSII/](https://www.raptorcs.com/TALOSII/)

~~~
guerrilla
For people who know nothing about this and want a tl;dr in video form:
[https://www.youtube.com/watch?v=5syd5HmDdGU](https://www.youtube.com/watch?v=5syd5HmDdGU)

~~~
qubex
And for anybody interested in the previous Hacker News discussion on the
topic:
[https://news.ycombinator.com/item?id=14956257](https://news.ycombinator.com/item?id=14956257)

------
thatiscool
BACKDOOR

5 years of Intel CPUs and chipsets

[https://arstechnica.com/information-technology/2020/03/5-years-of-intel-cpus-and-chipsets-have-a-concerning-flaw-thats-unfixable/](https://arstechnica.com/information-technology/2020/03/5-years-of-intel-cpus-and-chipsets-have-a-concerning-flaw-thats-unfixable/)

------
londons_explore
TL;DR: There is a tiny window during bootup when any hardware can DMA code or
keys in/out of RAM. That allows complete compromise of all protections offered
by the chipset, including secure boot, TPM key storage, etc. It is not fixable
via a firmware update.

The researchers _have not demonstrated_ a complete end to end attack, but it
seems likely one exists.

While this could likely be pulled off easily as a local attack, in some cases
it might also be possible to do as a remote attack depending on being able to
program other hardware devices to exploit the flaw during a reboot.

------
mindslight
This is great news! Undermining remote attestation is a win for the open web
and free society. And perhaps it means we can get Libreboot on something newer
than Ivy Bridge.

~~~
Karunamon
This was my first thought as well. It seems these management engine tools have
only two uses in the real world: enterprise IT, and various forms of DRM.

Both exist to treat the user as a hostile entity.

~~~
baybal2
Given the presence of 4K web-DLs (original, not re-encoded content), somebody
must have the key, or they must have managed to pwn the DRM on an even deeper
level, like tapping the memory (which is even worse).

Another possibility is still a source leak, where 4K content gets lifted off
Netflix's own internal content storage.

~~~
jandrese
Or a simple HDMI defeat and re-encode. It only takes one guy to put it out on
the net. DRM is an inherently flawed concept.

~~~
sounds
The content is watermarked by the time it is available on HDMI. The guy who
re-encoded it would get a knock on the door.

~~~
kevin_thibedeau
XOR frames captured via two different accounts.

~~~
herogreen
Unless the watermark is terribly designed, that will not work. There is a
_lot_ of information you can hide from the eye in a video.
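To illustrate both points: XOR-ing the "same" frame from two accounts does localize the bytes where a naive per-user watermark differs (toy data below), which is exactly why robust watermarks spread their payload across the frame so that the diff doesn't cleanly expose it.

```python
# Toy illustration of the XOR-two-copies idea against a naive watermark.
frame_user_a = bytes([10, 20, 30, 40, 50, 60])
frame_user_b = bytes([10, 21, 30, 40, 51, 60])  # per-user bits in bytes 1, 4

diff = bytes(a ^ b for a, b in zip(frame_user_a, frame_user_b))
marked = [i for i, d in enumerate(diff) if d]   # positions that differ
assert marked == [1, 4]  # a naive mark is exposed; a spread one would not be
```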

------
bogomipz
The post states:

>"This vulnerability jeopardizes everything Intel has done to build the root
of trust and lay a solid security foundation on the company's platforms. The
problem is not only that it is impossible to fix firmware errors that are
hard-coded in the Mask ROM of microprocessors and chipsets."

Could someone explain what a Mask ROM is? This is the first I've heard of
the term. How is it different from a regular ROM?

~~~
consp
A mask ROM is read-only memory built directly from the silicon masks, which in
very simplistic terms are the 'images' of the chip that get printed onto the
wafer to make the CPU. These masks give the mask ROM its name. Such ROMs are
cheap, simple, and small, but they can in no way be changed, since that would
require changing the chip itself. Sometimes a chip also includes non-mask ROM
(e.g. one-time programmable) that likewise cannot be altered.

Regular ROM can be many things: rewritable chips like (E)EPROM or flash, discs
like CD-ROM, or chips like PROM (which is write once, read many).

------
stevefan1999
There is no trust. Apple used to trust their phones' boot ROM. And guess what:
a buffer overflow and use-after-free bug called checkm8 was eventually found,
and it is almost universal across all iDevices.

------
pjf
Can someone please explain the implications in layman terms?

------
baybal2
As I pointed out before, lifting any "secret" key off any chip is quite
trivial for a semiconductor professional.

It's part of the job of an IC engineer to be able to tap an arbitrary metal
layer on the device with microprobes to "debug" it; this is quite routine in
the process of microchip development.

Any such measures can only deter people without access to an IC development
lab.

~~~
yjftsjthsd-h
> Any such measures can only deter people without access to an IC development
> lab.

That's a pretty tiny group, isn't it?

~~~
rrix2
There are at least five eyes in that group, though.

[https://en.wikipedia.org/wiki/Five_Eyes](https://en.wikipedia.org/wiki/Five_Eyes)

------
fps_doug
From a practical standpoint, how easy would it now be to e.g. create a
"signed" bootloader (say, custom GRUB) that will boot on those affected chips
with the default secure boot configuration? Or is this just for information
exfiltration?

------
holtalanm
im guessing not, but does this affect AMD CPUs/chipsets?

~~~
morpheuskafka
No. The ultimate potential of this attack is the complete compromise of all
Intel signing authorities over affected models. Naturally, that signing key
does not have any value on AMD systems, nor can this vulnerability in itself
be used on them.

~~~
holtalanm
I figured as much. thanks!

~~~
morpheuskafka
Update: it has now been confirmed that the T2 chip is vulnerable to the
checkm8 vulnerability, which made version-agnostic jailbreaks available for
all iOS devices up to A11 CPUs. So it would seem that Apple is only in a
slightly better position.

AFAIK, the Secure Enclave stores the actual disk encryption keys and Touch ID
data, so that should be safe. But the secure boot validation, firmware
password, startup security policy, etc. can now be bypassed (once a full
exploit to do so is written). Also, it is quite possible that the Intel ME and
UEFI firmware validation can be bypassed by simply disabling that part of the
T2's bridgeOS code.

------
vzaliva
In my opinion, critical code like this must be formally verified.

~~~
vardump
Not sure how formal verification would have helped here. DMA access is allowed
at boot up, game over.

~~~
AnimalMuppet
In principle, you could consider validating the _system_, not just the
software. It might reveal such a gap.

Note well: I am not claiming that the tools exist currently to do this.

------
shiblukhan
> A vulnerability has been found in the ROM of the Intel Converged Security
> and Management Engine (CSME).

A reference to the specific vulnerability would be nice. CVE? Conference
presentation? El Reg? Sketchy blogspam? Maybe I've been living under a rock,
but it would still help the reader out.

~~~
notpeter
The article mentions CVE-2019-0090, and Intel acknowledges the author (Mark
Ermolov of Positive Technologies) in their advisory. You haven't been living
under a rock; this is a primary source and the first public suggestion of the
grave severity of the vulnerability.

"CVE-2019-0090 was initially found externally by an Intel partner and
subsequently reported by Positive Technologies researchers. Intel would like
to thank Mark Ermolov, Dmitry Sklyarov and Maxim Goryachy from Positive
Technologies for reporting this issue."

[https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0090](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-0090)

[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00213.html](https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00213.html)

