
When Lightning Strikes Thrice: Breaking Thunderbolt 3 Security - dafrankenstein2
https://thunderspy.io/
======
tptacek
I skimmed the paper and while the research looks solid, just in terms of the
digging they did and the documentation they're providing, this website
_really_ buries its lede: if you've got a Macbook running macOS, the Macbook
IOMMU breaks the DMA attack, which is the thing you're actually worried about
here.

Additionally, regardless of the OS you run, Macbooks aren't affected by the
Security Level/SPI flash hacks they came up with to disable Thunderbolt
security.

~~~
AceJohnny2
Last time Thunderbolt was broken (Thunderclap [1]), it was found that the Linux
driver didn't activate the IOMMU. I assume that's since been fixed.

[1] [https://lwn.net/Articles/782381/](https://lwn.net/Articles/782381/)

~~~
danieldk
It seems to do that now:

[https://christian.kellner.me/2019/07/09/bolt-0-8-with-suppor...](https://christian.kellner.me/2019/07/09/bolt-0-8-with-support-for-iommu-protection/)

~~~
fulafel
What's the relationship of the "bolt" project with the default driver support
in Linux?

------
mehrdadn
> there is no malicious piece of hardware that the attacker tricks you into
> using

> All the attacker needs is 5 minutes alone with the computer, a screwdriver,
> and some easily portable hardware.

Just started reading, but the comparison is already a little bizarre. It
almost seems like the digital version of "This murderer is on the loose and
you're in danger! He doesn't need to inject poison into your food. All he
needs is just 5 minutes in front of you with a knife!"

~~~
ashtonkem
As a general rule, anyone with physical access to your machine already owns
it. Physical security matters, a lot.

That being said, malicious hardware is a problem. A hacked phone charging
terminal at the airport could certainly be a serious problem if there are
enough vulnerabilities in the USB stack.

~~~
mjg59
> As a general rule, anyone with physical access to your machine already owns
> it.

People frequently say this, but never really explain it. As far as I can tell,
it translates to "Nobody cares about physical security," except it's clear
that people /do/. Things like Boot Guard are only really relevant to physical
attacks. DMA protection in firmware is only really relevant to physical
attacks. It's extremely obvious that the industry is attempting to avoid short
term physical access to a device being sufficient to compromise it, and
research that demonstrates that it's still possible is valuable.

~~~
maxbond
Physical access is just such a rich attack surface that keeping your computer
away from malicious actors is the right and proper solution.

An extreme example a pentester imparted to me once was, if someone could spend
sufficient time alone with my laptop, they could remove my hard drive and
insert it into an identical laptop with a hardware or firmware backdoor
preinstalled. We were discussing nation-state adversaries, but the general
principle applies.

Another example is attacks on encrypted drives (so-called "evil maid"
attacks). If a computer is booted and the drive is decrypted, an attacker with
physical access could open the computer, remove the RAM, and dump its
contents, thereby stealing the encryption key. If the computer is powered
down, it's still vulnerable to other attacks; encrypted drives necessarily have
cleartext code for accepting the password & decrypting the drive. You could
modify this code to log the decryption key, or broadcast it over your device's
radios.

There's also the classic Windows "sticky key" exploit, where you replace the
sticky key binary with a program that gives you administrator access, reboot
the computer, and then activate sticky keys.

You could install a keystroke logger. You could install a device to record
monitor output. You could log network traffic.

I've yet to find a kiosk environment that I couldn't break out of. Once I was
able to break out of a scanning kiosk environment, and into a Windows desktop,
by turning the quality settings all the way up and crashing the kiosk. That
was one of the more difficult examples; most of the time all you need is to
find a way to right-click. (I had the proper authority to investigate these
kiosks.)

The point is that the list goes on.

It is true, as you say, that there has been progress in implementing
mitigations, and that there are people who care deeply about these issues. A
counterexample might be SIM cards, TPMs, and other HSMs. These systems are
able to provide better guarantees by encapsulating their peripherals and being
willing to self-destruct. But that could describe a cell phone, tablet, or
laptop, too.

Maybe in the future this "law" won't be so hard and fast.

~~~
mjg59
> Physical access is just such a rich attack surface that keeping your
> computer away from malicious actors is the right and proper solution.

Keeping attackers away from your computer is certainly the best solution, just
as keeping your computer off the network is the simplest answer to avoiding
network security issues. But that's not always an option, so we still need to
care about it.

> An extreme example a pentester imparted to me once was, if someone could
> spend sufficient time alone with my laptop, they could remove my hard drive
> and insert it into an identical laptop with a hardware or firmware backdoor
> preinstalled.

That'll be detected with any properly implemented remote attestation solution
(switching the machine will change the endorsement key, so attestation will
fail)
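
For intuition, here is a minimal Python sketch (HMAC standing in for a real TPM's quote signature; `TPM`, `verify`, and the enrollment step are hypothetical names, not a real attestation API) of why swapping the machine is detectable: the quote is keyed by a per-chip secret the verifier enrolled, so a backdoored twin laptop answers with a key the verifier has never seen.

```python
import hashlib
import hmac
import os

class TPM:
    """Toy stand-in for a TPM's attestation identity."""
    def __init__(self):
        # Burned in at manufacture; a real TPM never exposes this key.
        self.key = os.urandom(32)

    def quote(self, nonce):
        # A real quote is an asymmetric signature over PCRs + nonce;
        # HMAC is used here only to keep the sketch self-contained.
        return hmac.new(self.key, nonce, hashlib.sha256).digest()

original, imposter = TPM(), TPM()

# Enrollment: the verifier records the original machine's attestation key.
enrolled_key = original.key

def verify(tpm):
    nonce = os.urandom(16)  # fresh nonce prevents replay
    expected = hmac.new(enrolled_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tpm.quote(nonce))

assert verify(original)        # same machine: attestation passes
assert not verify(imposter)    # swapped hardware: attestation fails
```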

> If a computer is booted and the drive is decrypted, an attacker with
> physical access could open the computer, remove the RAM, and download it's
> contents, thereby stealing the encryption key.

Removing soldered-on RAM from a motherboard fast enough to maintain the
contents is not a straightforward attack. Not theoretically impossible, but
you're not going to have a good time of it.

> If the computer is powered down, it's still vulnerable to other attacks;
> encrypted drives necessarily have cleartext code for accepting the password &
> decrypting the drive. You could modify this code to log the decryption key,
> or broadcast it over your device's radios.

Will be detected via remote attestation.

> There's also the classic Windows "sticky key" exploit, where you replace the
> sticky key binary with a program that gives you administrator access, reboot
> the computer, and then activate sticky keys.

How do you do that with an encrypted drive? Look, yes, it's not _easy_ to
guard against physical attacks. But some organisations that genuinely _do_
have to deal with state level attackers care about physical security and care
about mitigating it, and we have moved well beyond the "physical access means
you've lost" state of affairs. Finding new cases that allow attackers with
physical access to subvert our understanding of the security boundaries of a
machine is of significant interest.

~~~
maxbond
You raise some interesting points, and have forced me to question my
assumption that this is simply a lost cause.

------
vvanders
Looks like most of these require physical access to the SPI flash and not just
the thunderbolt port unless I'm reading the disclosure wrong.

------
osy
This is the kind of garbage that the infosec community often memes about. A
marketing website, a domain name, a cute logo for a vanity project
masquerading as security research. Basically every one of the "seven"
vulnerabilities boils down to "if someone can flash the SPI of the thunderbolt
controller then xxx" but if they can flash the TB SPI, then they can also
flash the BIOS SPI which has a lot of the same "vulnerabilities" but arguably
is more impactful. The reason they only mentioned TB is because the BIOS stuff
is well known and you can't put your name on it.

Let's break down each of the "vulnerabilities".

1\. "However, we have found authenticity is not verified at boot time, upon
connecting the device, or at any later point." This is actually false. Like,
the author either didn't experiment properly or is lying/purposely misleading
you. The firmware IS verified at boot for Alpine Ridge and Titan Ridge
(Intel's TB3 controllers). It isn't for older controllers, which do NOT
support TB3. When verification fails, the controller falls back into a "safe
mode" which does NOT run the firmware code for any of the ARC processors in
the Ridge controller (there are a handful of processors where the firmware
contains compressed code for). I'm willing to bet the author did not manage to
reverse engineer the proprietary Huffman compression the firmware uses and
therefore couldn't have loaded their own firmware. Because if they did, it
wouldn't have worked. Now the RSA signature verification scheme they use to
verify the firmware does suffer from some weaknesses but afaik doesn't lead to
arbitrary code execution (on any of the Ridge ARC processors). I would love to
be proven wrong here with real evidence though ;)

2\. Basically the string identifiers inside the firmware aren't
signed/verified. This has no security implications beyond you can spoof
identifiers and make the string "pwned" appear in system details when you plug
the device in and authenticate it. Basically if you've ever developed custom
USB devices you can see how silly this is as a "vulnerability."

3\. This is literally the same as #2.

4\. Yes, TB2 is vulnerable to many DMA attacks as demonstrated in the past.
Yes, TB3 has a TB2 compatibility mode. Yes, that means the same
vulnerabilities exist in compatibility mode which is why you can disable it.

5\. This one is technically true. If you open the case up, and flash the SPI
chip containing the TB3 firmware, you can patch the security level set in BIOS
and do stuff like re-enable TB2 if the user disabled it. But if I were the
attacker, I would instead look at the SPI chip right next to it containing the
UEFI firmware and NVRAM variables (most of which aren't signed/encrypted in
any modern PC).

6\. SPI chips have interfaces for writing, erasing, and locking. If you have
direct access to the chip you can abuse these pins to permanently brick the
device. Here's another way: take your screwdriver and jam it into the
computer.

7\. Apple does not enable TB3 security features on Boot Camp. I guess this one
is vaguely the only real "vulnerability" although it's well known and Apple
doesn't care much about Windows security anyways (they don't enable Intel Boot
Guard or BIOS Guard or TPM or any other Intel/Microsoft security feature).

Not that it matters but my personal experience with TB3 is that I've done
significant reverse engineering of the Ridge controllers for the Hackintosh
community.

~~~
mjg59
> they can also flash the BIOS SPI

Boot Guard makes that impractical in most cases. The point here is that on
machines that don't implement kernel DMA protection, you're able to drop the
Thunderbolt config to the lowest security level and then write-protect the
Thunderbolt SPI so the system firmware can't re-enable it, making it easier to
perform a DMA attack over Thunderbolt and sidestep the Boot Guard protections.

This isn't a world-ending vulnerability, but it's of interest to anyone who
has physical attacks as part of their threat model.

~~~
osy
Boot Guard is not implemented on most (all?) self-built machines and a lot of
pre-builts as well. But even if it is enabled, UEFI variables are not
protected at all. You can disable Secure Boot just by overwriting UEFI
variables and then boot any arbitrary code from USB.

~~~
mjg59
Which will change the measurements in PCR7, which is a detectable event that
will break BitLocker unsealing.
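
A toy model in Python (hashlib only; the event names and `unseal` are illustrative, not real TPM commands) of why that change is detectable: boot components are hash-chained into a PCR, and a key sealed to the expected PCR value only unseals when every measurement, including the Secure Boot configuration, matches.

```python
import hashlib

def extend(pcr, measurement):
    # TPM PCR extend: pcr = H(pcr || H(measurement)); order-dependent
    # chaining means any changed measurement changes the final value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def boot_pcr(secure_boot_enabled):
    pcr = b"\x00" * 32  # PCRs reset to zero at power-on
    for event in (b"firmware", b"bootloader",
                  b"secure-boot=" + str(secure_boot_enabled).encode()):
        pcr = extend(pcr, event)
    return pcr

# Seal-time policy: the disk key is bound to the PCR of an untampered boot.
sealed_to = boot_pcr(secure_boot_enabled=True)

def unseal(current_pcr):
    # A real TPM releases the key only if the PCR matches the sealed policy.
    return current_pcr == sealed_to

assert unseal(boot_pcr(secure_boot_enabled=True))       # normal boot
assert not unseal(boot_pcr(secure_boot_enabled=False))  # tampered: key stays sealed
```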

------
justaguyonline
What would it take to have a Thunderbolt/USB-C condom? You know, like those
standard USB adapters that just drop the data leads on a USB charger to make
attacks like this impossible. Maybe we would have to implement a hardware
switch on the device itself?

I'm not going to feel safe charging with a public use charger until I find
some way to ensure only power and not data is making it to my device. Even PoE
feels like it's safer than modern peripheral standards right now.

(I admit this might not be perfectly linked to the article, it's just a need
I've felt for a while but I can't seem to buy a solution for.)

~~~
dannyw
How about a SSH-like “trust on first use” prompt for all data connections?
Each USB/TB device has its own pub/private keypair.

If you ever plug in a charging cable and get the prompt, you know something is
wrong.

~~~
zokier
That is exactly what TB has. The problem is that the device private key (in
many(/all?) devices) sits in the flash memory completely unprotected so anyone
can clone it.

~~~
redactions
It is not like ssh at all. It is a problem that secrets are kept in the flash
and it is also a problem that those secrets are sent over the untrusted
channel.

~~~
zokier
The key is transferred only on the initial connection; after that a
challenge/response mechanism is used. So from a UX point of view it achieves
similar TOFU, even if the technical details vary a bit. Sure, it's a bit
worse, but it is still very much trust on first use.
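
To make the contrast concrete, here is a minimal Python sketch (a hypothetical protocol shape, not Intel's actual wire format) of symmetric TOFU: the host has to learn the device's long-term secret on first connect, so anyone who reads the secret out of flash, or any host the device is ever enrolled on, can produce a perfect clone.

```python
import hashlib
import hmac
import os

class Device:
    def __init__(self):
        # In the TB case the analogous secret sits in unprotected SPI flash.
        self.secret = os.urandom(32)

    def enroll(self):
        # First connection: the raw secret crosses the (untrusted) link.
        return self.secret

    def respond(self, challenge):
        return hmac.new(self.secret, challenge, hashlib.sha256).digest()

class Host:
    def __init__(self):
        self.trusted = {}  # device_id -> secret learned on first use

    def connect(self, device_id, device):
        if device_id not in self.trusted:
            self.trusted[device_id] = device.enroll()  # TOFU enrollment
        challenge = os.urandom(16)
        expected = hmac.new(self.trusted[device_id], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, device.respond(challenge))

host, dev = Host(), Device()
assert host.connect("dock-1", dev)   # enroll, then authenticate

# Anyone who learns the secret gets a clone the host cannot distinguish.
clone = Device()
clone.secret = dev.secret
assert host.connect("dock-1", clone)
```

With SSH-style asymmetric keys, enrollment would transfer only a public key, so neither the first connection nor a flash dump of the host side would yield anything clonable.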

~~~
redactions
After the device is connected, use looks like a key-consistency-aware system
such as an ssh client. It is, as you note, very different in the first
protocol run.

To extract the device secret value, an attacker needs to connect the target
device to an attacker device. As you note, the thunderbolt device leaks the
secret value over the untrusted channel. Impersonation of that device after
that moment is trivial as a result.

The entire cryptographic protocol is broken from the start.

~~~
zokier
> To extract the device secret value, an attacker needs to connect the target
> device to an attacker device. As you note, the thunderbolt device leaks the
> secret value over the untrusted channel.

If the victim device is connected to an attacker host, then only responses to
challenges are potentially leaked. That might allow active mitm, but not
cloning the key. That's the whole reason TFA needed to go poking around in
flash to get the keys.

Not saying that TB is the best security protocol in the universe, but as far
as I can tell the vulnerabilities exposed here are mostly implementation flaws
rather than protocol level issues.

~~~
redactions
ssh uses asymmetric keys, and the cache on the client has a three-tuple
(host, IP, public key) which allows a client to notice a difference in any of
the three elements. By comparison, Thunderbolt leaks the entire secret as the
first step and subsequent steps use derived values. ssh is secure if the key
doesn't change and isn't compromised through other means. Thunderbolt is not
secure: it fails under a passive surveillance adversary, and it also fails for
active adversaries.

I take your point that subsequent secret use in the n+1 protocol run isn't as
bad as the very first run, and as you note, that probably doesn't matter in
the face of an active attacker.

If Thunderbolt had used asymmetric cryptography, I would probably agree with
you that the protocol has the same semantics as ssh. The reason that I
disagree is that it appears to have the same semantics for the user interface
but the underlying protocol differences are what make the protocol unsuitable
for use. It's at least part of why Intel has now retired Security Levels and
is leaning so strongly on kDMA. Security Levels as a protocol is simply not
cryptographically secure for any meaningful definition of secure as the first
step exposes the base secret value.

Note: the attack doesn't require the use of a flash clip, that's just a simple
way to demonstrate device specific state extraction.

------
graton
I wonder if that could be used by sellers of used MacBooks to get into the
computers.

[https://www.vice.com/en_us/article/akw558/apples-t2-security...](https://www.vice.com/en_us/article/akw558/apples-t2-security-chip-has-created-a-nightmare-for-macbook-refurbishers)

I guess MacBook resellers sometimes get computers where the password has been
set and they can't get into the computers. I imagine they would be motivated
to find any way they can to unlock the computers.

~~~
tptacek
No; for Macbooks, this work reduces to BadUSB.

------
oicat
There is a nice write-up about this on AttackerKB. If you're not familiar with
it, it's a community that provides assessments of vulnerabilities and points out
which are worth stopping everything to patch and which are mostly harmless.
It's currently in open beta. Main site:
[https://attackerkb.com/](https://attackerkb.com/) Thunderspy assessment:
[https://attackerkb.com/topics/mPaHZgsUvk/thunderspy](https://attackerkb.com/topics/mPaHZgsUvk/thunderspy)

------
zerof1l
There was news some time ago that Microsoft did not include Thunderbolt in
their Surface 3 because it was insecure. I wonder if that's related to this
and whether Microsoft knew about this for a while.

------
mschuster91
> Contrary to USB, Thunderbolt is a proprietary connectivity standard. Device
> vendors are required to apply for Intel’s Thunderbolt developer program, in
> order to obtain access to protocol specifications and the Thunderbolt
> hardware supply chain. In addition, devices are subject to certification
> procedures before being admitted to the Thunderbolt ecosystem.

I thought that this had changed with USB-C?!

------
dafrankenstein2
An easy read from Wired magazine: [https://www.wired.com/story/thunderspy-thunderbolt-evil-maid...](https://www.wired.com/story/thunderspy-thunderbolt-evil-maid-hacking/)

------
dafrankenstein2
This video shows the PoC demo:
[https://www.youtube.com/watch?v=7uvSZA1F9os](https://www.youtube.com/watch?v=7uvSZA1F9os)

------
person_of_color
Really though, if an attacker has unencumbered access to one’s device, all
security goes flying out the window.

The website is highly self-promoting.

~~~
mappu
_> if an attacker has unencumbered access to one’s device, all security goes
flying out the window_

This is rapidly starting to become less true - full disk encryption is
everywhere, backed by hardware TPMs; the Lockdown LSM prevents root from owning
the boot chain; devices with soldered RAM are functionally immune to cold boot
attacks.

There are still things an attacker can do - put a hardware keylogger on the
keyboard wires, a skimmer on the fingerprint reader - but that requires future
input from the victim. It is feasible today to defend against a physical
attacker if you have the right hardware upfront and don't use it after the
attack.

~~~
userbinator
_This is rapidly starting to become less true_

Unfortunately, both for right-to-repair and actually owning the hardware you
bought.

~~~
gruez
TPMs don't impede your ability to repair anything. Soldered RAM is a hassle,
but it's not any more malicious than soldered CPUs. It's a design choice, and
tradeoffs had to be made.

~~~
mappu
_> TPMs don't impede your ability to repair anything_

There are some stories like this:
[https://www.vice.com/en_us/article/akw558/apples-t2-security...](https://www.vice.com/en_us/article/akw558/apples-t2-security-chip-has-created-a-nightmare-for-macbook-refurbishers)

It's suggested that many such devices might be stolen. But there will also be
devices where the user forgot to wipe their data (or didn't know how); or
devices that are only just damaged enough that you can't wipe the user data.

Probably an official Apple store can refurbish them somehow, but that is the
NOBUS / EARN IT argument.

~~~
p_l
Well, that's more an explicit T2 issue that goes beyond what is known as
"industry standard" TPM. Apple just hates you a (big) bit extra.

