
A Full Break of the Bitstream Encryption of Xilinx 7-Series FPGAs - Nokinside
https://www.usenix.org/conference/usenixsecurity20/presentation/ender
======
alexdowad
Once I worked at a place which was very interested in protecting the IP
inherent in their firmware. They gave me a research assignment to get an idea
of how difficult it would be for an attacker to extract it as a binary given
unlimited physical access to a sample device.

Since I read and write Chinese, I did some searching on Chinese-language sites
and found a company advertising their ability to do just that... for about
$10,000 per job. They listed the chips they knew how to crack, but the one my
employer was using was not on the list...

I feel that trying to prevent reverse-engineering by adversaries with
unlimited physical access is a fool's errand. So this break of Xilinx FPGAs is
interesting... But kind of a shoulder shrug.

~~~
userbinator
There are quite a few Chinese (and Russian, not surprisingly...) companies who
will do "MCU breaks". $10k (USD) is near the high end of the price range;
price depends on complexity and newness --- less than $1k for some of the
common and older parts. Mikatech is one of the better-known and older ones.

They are great for maintaining legacy equipment where the original company has
either discontinued support or disappeared completely.

~~~
thr0w__4w4y
I've often wondered -- and perhaps you don't know the answer, or there is no
one typical answer -- but, do these companies in Russia, China, etc., that one
pays to break the chip, promise to not re-sell the extracted chip contents?
(Or maybe we should just remember the saying "no honor amongst thieves")

I think the last time I looked into a service like this, they required that
you send them multiple targets to attack because normally the attack is
invasive / destructive, and they might need to "burn through a few" before
they succeed. If you have any experience in this area, do you know if that
still holds true?

Thanks. Sorry to punish your useful answer with more questions.

~~~
userbinator
_but, do these companies in Russia, China, etc., that one pays to break the
chip, promise to not re-sell the extracted chip contents?_

I don't know --- I imagine a lot of the time they wouldn't even know what
equipment the chip came out of, and/or it's something extremely obscure, so it
would be very hard to sell the contents.

 _I think the last time I looked into a service like this, they required that
you send them multiple targets to attack because normally the attack is
invasive / destructive, and they might need to "burn through a few" before
they succeed. If you have any experience in this area, do you know if that
still holds true?_

If it's a new MCU they've not done before, they might ask for that --- to find
where the protection fuses are. It doesn't have to be the actual one you want,
just an example of a protected one along with the code it contains in order to
"check their answers". That will cost quite a lot more than known-extractions,
however.

------
LeifCarrotson
While this does open up the code (sans descriptive text) to external attacks,
I rather feel that once you have a logic analyzer or compromised
microcontroller on the logic bus of your secure device you've got the attacker
on the wrong side of the airtight hatchway.

I'm personally much more interested in what it means for 'attackers' who wish
to use it to open up their own hardware. Perhaps that might not align with the
goals of Xilinx or the OEM, but it's great for their customers!

~~~
Nokinside
An attacker needs physical access to just one device in the whole product line
that uses the same encryption key.

After that, all you need is to get the device to update using a bitstream you
made. In some cases this could be a remote attack.

~~~
IshKebab
I'm not sure this is true. This attack allows reading the encrypted bitstream,
but it doesn't say anything about allowing you to sign modified bitstreams.

~~~
teraflop
As the paper explains, there is no "signing" involved (in the sense of a
public-key cryptosystem).

Each encrypted bitstream includes an HMAC, but because the HMAC key is part of
the encrypted bitstream itself, it basically only acts like a non-
cryptographic checksum. An attacker who knows the encryption key can simply
choose an arbitrary HMAC key and generate a valid HMAC for arbitrary data.

EDIT: I should clarify that this attack doesn't appear to actually let someone
_extract_ the AES encryption key. But they can use an FPGA that has the key
programmed as a decryption oracle. And a weakness of CBC mode is that a
decryption oracle can be used for encryption as well.
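The CBC weakness mentioned above can be sketched in a few lines. In CBC, P_i = D(C_i) XOR C_{i-1}, so given only a raw block-decryption oracle you can construct a valid ciphertext for any plaintext of your choosing by working backwards from an arbitrary final block. A toy illustration - the XOR "block cipher" here is an insecure stand-in for AES, purely to show the CBC structure, and all names are invented:

```python
import hashlib

BLOCK = 16
DEVICE_KEY = b"fused-device-key"  # hypothetical; never leaves the chip

def _pad() -> bytes:
    # Toy stand-in for the AES block cipher: XOR with a key-derived pad.
    # Totally insecure, but it is an invertible keyed block function,
    # which is all the CBC construction below cares about.
    return hashlib.sha256(DEVICE_KEY).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def oracle_block_decrypt(c: bytes) -> bytes:
    # The "decryption oracle": a keyed FPGA performs this for the attacker.
    return xor(c, _pad())

def cbc_decrypt(iv: bytes, blocks: list) -> list:
    out, prev = [], iv
    for c in blocks:
        out.append(xor(oracle_block_decrypt(c), prev))
        prev = c
    return out

def forge_cbc_encrypt(plain_blocks: list):
    # Encrypt WITHOUT the key. Since P_i = D(C_i) XOR C_{i-1}, pick the
    # last ciphertext block freely and walk backwards:
    # C_{i-1} = D(C_i) XOR P_i; the final leftover block is the IV.
    cipher = [b"\x00" * BLOCK]
    for p in reversed(plain_blocks):
        cipher.insert(0, xor(oracle_block_decrypt(cipher[0]), p))
    return cipher[0], cipher[1:]  # (iv, ciphertext blocks)
```

Round-tripping `cbc_decrypt(*forge_cbc_encrypt(msg))` recovers any message, even though the forger only ever invoked the decryption direction of the cipher.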

~~~
IshKebab
Ah interesting, thanks!

------
eqvinox
This isn't bad for "security" or "secure microcontrollers." It is in fact good
for security. Designs running on these FPGAs can now be analyzed and inspected
for accidental or intentional security issues. Mind you: the security issues
are there whether you know about them or not. The function that the FPGA
implements can (and should) still be secure - since the security of its
algorithms should never rely on the secrecy thereof. (And to protect secrecy
of private key material, it comes down to physical security either way.)

What it's bad for is vendors relying on DRM to protect their assets. Which is
normally diametrically opposed to user freedom.

~~~
pmorici
This encryption is the only way you can ensure the integrity of the firmware
at the chip level, so anything relying on it as part of a chain of trust is
going to have to redesign the device now. Firmware is loaded from an external
EEPROM on these devices; DRM wasn't the sole use of this feature.

~~~
eqvinox
The chain of trust could already be attacked by replacing the entire FPGA chip
with an unkeyed/open one, and then loading your own malicious bitstream.

Also, encryption never ensures integrity; it ensures confidentiality.
Integrity would've come from the accompanying signature scheme, which
apparently was badly implemented and broken at the same time.

If anything, the encryption makes it impossible to conduct spot checks on a
batch of devices you receive, since it prevents the end user from verifying
bitstream integrity. (The keys are device specific AFAIK, so the bitstream is
device specific too, and signature public keys aren't known.) To establish
trust, you ideally need an unencrypted, verifiable, signed bitstream.

(An encrypted, signed bitstream with the keys available does not protect
against manufacturer collusion; they can cooperate in sending you a tampered
device. An unencrypted bitstream allows comparing a device you received
against other devices around the planet.)

~~~
pmorici
On these chips the encryption and integrity-checking features are one and the
same; you can't turn on one without the other.

Whether you use the same key over every device in a product line or a
per-device key is up to the OEM. So you can still verify firmware in the
former case.

Replacing the FPGA chip is a lot harder than re-flashing an EEPROM, and they
would also have to put a lot of effort into replicating your firmware just to
insert their change.

------
userbinator
I wonder if everyone who works on stuff like this is pro-DRM/anti-freedom,
because while I've seen plenty of DRM-breaking papers which paint a very
negative view of their findings (this one included), I can't recall seeing a
single one which takes the opposite view: that this is another step forward for
freedom and right-to-repair. Do the researchers really believe that this is
a bad thing, or are they afraid of taking that position since others could
disapprove and reject their paper?

~~~
bb88
So this will help:

1\. Security researchers, so they can see what malware may be lurking in FPGA
bitstreams.

2\. Open source developers working on FPGA bitstream compilers.

3\. People who want to steal proprietary IP cores.

It hurts:

1\. People who chose the part specifically because of the closed bitstream, in
part because they made security decisions assuming the bitstream wasn't open.

2\. Anyone who bought products based on their marketed security claims
(hospitals/DoD/etc.)

~~~
black_puppydog
Woah, from what I understand, the bitstream is the FPGA equivalent of compiler
output? Then selling something as "secure" because nobody can know what code
it is running would be security through obscurity, no? How would vendors get
away with _that_ sort of BS?

To be clear: just talking about the confidentiality requirement here.
Authenticity (ie code signing, right?) is obviously something very useful
_especially_ in these cases.

~~~
chapplap
This is about bitstream _encryption_, so there is an expectation of
confidentiality. The keys needed to decrypt the bitstream are stored in
nonvolatile memory on the FPGA itself. Assuming that it is implemented
correctly (evidently not in this case), it is impossible to decrypt the
bitstream without analyzing the FPGA die itself, using tools that are usually
beyond what a casual attacker might have. It probably won't stop a nation-
state from figuring out how to read out your FPGA design, but it will probably
slow down your competitors.

~~~
black_puppydog
Yes, for IP protection I get why that's interesting. But crucially, it's the
vendor's interest. For a hospital or such, the interest is actually opposed to
this. They should be looking for secure software that is as open as possible
to allow for audit and servicing if needed. So selling DRM as something that
somehow makes the _customer_ more secure is BS.

------
voxadam
Is there any way that the breaking of the Xilinx bitstream encryption opens
the door to documenting and reverse engineering that bitstream in the same way
that was done with Project IceStorm[0] for the Lattice iCE40 FPGAs?

[0] Project IceStorm -
[http://www.clifford.at/icestorm/](http://www.clifford.at/icestorm/)

~~~
amelius
It's such a sad situation. Why can't companies just provide all the necessary
hardware info in the datasheet?

~~~
ATsch
Vendor lock-in is the primary way in which these companies make money

~~~
amelius
But wouldn't an open specification be a much better value proposition for
engineers?

~~~
CamperBob2
Not directly. Much of the value in a modern FPGA lies in the specialized
proprietary hardware provided by the manufacturer -- transceivers, memory
controllers, clock management, dozens of other things -- and in the IP cores
that can either be inferred or generated through wizards.

So knowing the bitstream format by itself is only a small step forward, if
your goal is to take full advantage of the hardware and IP available. You'd
need to reverse-engineer all of the specialized hardware and IP support as
well. Opening the bitstream format would still be very worthwhile, but it's
not the game-changer that many believe it would be.

------
Nokinside
>3.5 Wrap-Up: What Went Wrong?

>These two attacks show again that nowadays, cryptographic primitives hold
their security assumptions, but their embedding in a real-world protocol is
often a pitfall. Two issues lead to the success of our attacks: First, the
decrypted data are interpreted by the configuration logic before the HMAC
validates them. Generally, a malicious bitstream crafted by the attacker is
checked at the end of the bitstream, which would prevent an altered bitstream
content from running on the fabric. Nevertheless, the attack runs only inside
the configuration logic, where the command execution is not secured by the
HMAC. Second, the HMAC key K_HMAC is stored inside the encrypted bitstream.
Hence, an attacker who can circumvent the encryption mechanism can read K_HMAC
and thus calculate the HMAC tag for a modified bitstream. Further, they can
change K_HMAC, as the security of the key depends solely on the
confidentiality of the bitstream. The HMAC key is not secured by other means.
Therefore, an attacker who can circumvent the encryption mechanism can also
bypass the HMAC validation

~~~
ATsch
This is another example of what Moxie Marlinspike calls the "cryptographic
doom principle". If you do anything, _anything_ with a ciphertext before
checking authenticity, doom is inevitable.
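For contrast, here is a minimal sketch of the ordering the doom principle demands: encrypt-then-MAC, with the MAC key held outside the message, and verification done before any byte of the payload is interpreted. Function names are invented and the encryption step itself is elided:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 output size

def seal(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers the ciphertext, and mac_key is
    # stored OUTSIDE the message (e.g. in eFUSEs), never inside it.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_sealed(mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # Verify BEFORE a single byte of the payload is interpreted.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed; do not touch the payload")
    return ciphertext  # only now hand this to the decryptor
```

The Xilinx design violated both halves of this: the configuration logic interpreted decrypted data before the HMAC check, and the HMAC key traveled inside the ciphertext it was supposed to authenticate.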

~~~
loeg
For others following along: [https://moxie.org/blog/the-cryptographic-doom-
principle/](https://moxie.org/blog/the-cryptographic-doom-principle/) (2011)

~~~
brokenmachine
That was really interesting, thanks for that.

------
lnsru
If I really cared about security, I would not pick an SRAM FPGA in the first
place. There are nice Flash-based FPGAs out there for projects with high
security requirements. They don't need configuration devices leaking the
bitstream all over the place.

On the other hand, it is somewhat sad that the popular 7-Series is
compromised. Though I never saw a company that cared about bitstream security;
at best it was a "nice to have" feature, usually completely ignored.

~~~
duskwuff
A lot of flash-based FPGAs are actually an SRAM FPGA with an internal flash
die bonded to the configuration pins. The bitstream is harder to get to, but
it's still available to a determined attacker.

~~~
lnsru
Actel/Microsemi parts are true Flash FPGAs, while the Altera/Intel MAX10 is an
SRAM FPGA with configuration Flash inside. Very nice and highly integrated
chip; comfy to develop with.

~~~
Ballas
I'd rather start from sand than use Microsemi again...

------
segfaultbuserr
It's not just an issue for big corporations and their proprietary software and
DRM, but also has serious implications for the free and open source hardware
community, especially the infosec hackers. To begin with: While it's not
realistic to make secure hardware (let's say, a OpenPGP/X.509/Bitcoin Wallet
security token) that can be 100% independently verified and free from all
backdoors, but still, relatively speaking, FPGAs are generally a better and
more secure option as a hardware platform than microcontrollers (for example,
see the talk [0] by Peter Todd on Bitcoin hardware wallet and pitfalls of
general-purpose microcontrolles), because of three advantages:

* It's possible to implement custom security protections at a lower level than accepting whatever is provided by a microcontroller or implementing it in more vulnerable software.

* Many microcontrollers can be copied easily, but FPGAs are often used to run sensitive bitstream that contains proprietary hardware/software, manufacturers generally provide better security protections, such as verification and encryption, against data extraction (read: OpenPGP private key) and manipulation attacks.

* Most "secure" microcontrollers are guarded under heavy NDAs, while they are commercially available (and widely used in DRM systems), but it's essentially useless for the FOSS community. On the other hand, because the extensive use of FPGA in commercial systems, security is NDA-free for many FPGAs. It's often the best (or the only option) that provides the maximum transparency - not everything can be audited, sure, but the other option is using a "secure" blackbox ASIC, which is a total blackbox.

Unfortunately, nothing is foolproof: manufacturers leave secret debug
interfaces, cryptographic implementations have vulnerabilities, etc. Hardware
security is a hard problem - 100% security and independent verification are
impossible, so making attacks harder is the objective. And it's worse than
software: once a bug is discovered and published, the cost of an attack
immediately drops to zero, and it cannot be patched. We can only hope that
increased independent verification, such as by the researchers behind this
paper, can somewhat reduce these problems systematically.

[0]
[https://www.youtube.com/watch?v=r1qBuj_sco4](https://www.youtube.com/watch?v=r1qBuj_sco4)

~~~
eqvinox
The way to protect secure hardware tokens is not bitstream encryption, it's
tamper protection. You store the key material in SRAM that is erased when the
device detects any attempt at manipulation.

If your Bitcoin Wallet or whatever token is affected by this, it was IMHO
badly designed to begin with, since apparently it was relying on an AES-CBC
bitstream encryption scheme. That should've been a red flag even if it wasn't
broken.

~~~
segfaultbuserr
> _The way to protect secure hardware tokens is not bitstream encryption, it
> 's tamper protection. You store the key material in SRAM that is erased when
> the device detects any attempt at manipulating._

You need both. First, make all external storage (holding keys, firmware,
configuration) unreadable to everything besides the main processor itself.
Then, in an ideal world, implement tamper detection - most HSMs have it - but
unfortunately the world is not ideal: in the FOSS world, I don't see anything
that uses tamper detection. Developing open source tamper detection would
have great value to the community, yet I don't see it happening any time
soon. Also, the majority of security tokens/hardware have no tamper detection
\- SIM cards, bank cards, OpenPGP cards (Yubikeys, Nitrokeys), smartphones.
They depend only on encrypting external storage and/or restricting access to
the secret inside a chip to maintain security. In practice, they still have
an above-average security level, which clearly shows tamper protection is not
the only way to protect the hardware, although it's less effective and
occasionally something is going to be broken, to be sure.

This specific FPGA bitstream encryption vulnerability may be a non-issue; as
the critics point out, relying on external storage is not a good idea to begin
with - better to burn everything inside the FPGA. My point is that FPGAs are
the only platform on which to implement FOSS security hardware in the
(relatively) most transparent and secure manner, yet the recent discoveries
of FPGA vulnerabilities indicate they are much less secure than expected, and
this is only the tip of the iceberg. If external bitstream encryption has
cryptographic vulnerabilities, what comes next? More broken crypto that
allows you to read an internal key?

~~~
eqvinox
Moving your data inside a device not easily accessible is tamper protection
too.

------
krilovsky
I don't know what the best practices are now, but it used to be best practice
to blow the CFG_AES_Only eFUSE when using bitstream protection, which prevents
the loading of a bitstream that isn't authenticated, and thus foils this
attack. If a manufacturer went to the trouble of encrypting the bitstream but
then allowed loading of plaintext bitstreams, they probably didn't really
understand what they were doing.

~~~
Nokinside
This attack breaks the encrypted and authenticated bitstream.

I thought that the title "A Full Break of the Bitstream Encryption of Xilinx
7-Series FPGAs" would give some information even for those who don't want to
read the article before commenting. :)

~~~
krilovsky
While I understand that without the proper context (knowing a bit about
bitstream protection in the Xilinx 7-Series FPGAs) my comment may seem a bit
obscure, I did read the paper.

As the sibling comment mentions, the attack requires programming a plaintext
bitstream in order to perform the readout of the WBSTAR register after the
automatic reset caused by the HMAC authentication failure. Blowing the
CFG_AES_Only eFUSE prevents the loading of that plaintext readout bitstream
and the first stage of the attack is thus foiled (preventing the second stage
of the attack from taking place as well).

~~~
Nokinside
That was the first attack. How about the second attack where they show how to
encrypt a bitstream?

~~~
krilovsky
See my reply in the sibling comment thread. Basically, the second attack is
not possible without the first succeeding.

------
jtaft
From the paper:

> On these devices, the bitstream encryption provides authenticity by using an
> SHA-256 based HMAC and also provides confidentiality by using CBC-AES-256
> for encryption

> We identified two roots leading to the attacks. First, the decrypted
> bitstream data are interpreted by the configuration logic before the HMAC
> validates them. Second, the HMAC key is stored inside the encrypted
> bitstream
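The second root cause can be illustrated directly: if the HMAC key travels inside the (now decryptable) bitstream, anyone who defeats the encryption can substitute their own key and recompute a matching tag. A sketch with an invented field layout - the real format places the key and tag at fixed offsets inside the encrypted stream, with confidentiality as the only thing protecting them:

```python
import hmac
import hashlib

KEY_LEN = 32  # HMAC-SHA256 key field (invented layout)
TAG_LEN = 32  # SHA-256 tag

def make_bitstream(payload: bytes, hmac_key: bytes) -> bytes:
    # Invented layout: [32-byte HMAC key][payload][32-byte tag].
    # Note the verification key rides along with the data it "protects".
    tag = hmac.new(hmac_key, payload, hashlib.sha256).digest()
    return hmac_key + payload + tag

def verify_bitstream(blob: bytes) -> bytes:
    key, payload, tag = blob[:KEY_LEN], blob[KEY_LEN:-TAG_LEN], blob[-TAG_LEN:]
    if not hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest()):
        raise ValueError("HMAC mismatch")
    return payload

# Once the encryption layer is defeated, the attacker forges freely
# by choosing their own HMAC key:
forged = make_bitstream(b"malicious config", b"\x00" * KEY_LEN)
```

`verify_bitstream(forged)` accepts the attacker's payload, which is why the paper says the HMAC acts only as a non-cryptographic checksum once confidentiality falls.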

------
Nokinside
This is not a small issue. Up to 10% of FPGAs on the market could be affected.

RAID, SATA, and NIC controllers, industrial control systems, mobile base
stations, data centers, devices like encrypted USB sticks and HDDs. In some
cases it's possible to carry out the attack remotely.

~~~
baybal2
Only those who need to keep firmware secret will be affected.

There are a few companies I know of that transitioned from MCUs to FPGAs
solely out of an obsession with keeping their "IP" from leaking, hoping that
an FPGA would provide more obscurity than simple encrypted MCU firmware.

~~~
Nokinside
If the FPGA can be updated, an attacker can take over the hardware and
reprogram it.

If an attacker gets access to the bitstream, they have complete control over
the FPGA.

~~~
alexdowad
But if an attacker is already "inside" your system and is able to access the
interface for configuring the FPGA, I think you have already lost...

~~~
segfaultbuserr
> _But if an attacker is already "inside" your system [...] I think you have
> already lost..._

It's not necessarily true. Protecting a system from physical attackers is a
legitimate requirement in cryptography.

1\. While all hardware secrets can be extracted given physical access, there
are vast differences in cost. A commercial HSM - used by CAs, businesses, and
banks to hold private keys - contains RF shields, tamper-detection switches,
sensors for X-rays, light, and temperature, battery-backed SRAM for self-
destruction, and so on. It's extremely unlikely that anyone has ever
succeeded in breaking into an HSM; possibly only a handful of people in the
three-letter agencies could do it, and even for them it would be a great
expense - launching a supply-chain attack, bribing the sysadmin, or stealing
the key are more reasonable options. It's certainly possible to break into
one, but the cost is prohibitive for most.

2\. You can make the hardware 100% secure against physical attackers if the
actual secret is not even on the hardware. If I steal your full-disk-encrypted
laptop while it's off, I cannot obtain any meaningful data because the
encryption key is in your brain, not in this laptop. This is a practical
threat model and often desirable. However, there's nothing to stop me from
bruteforcing the key because the hardware itself doesn't have protections.

3\. If we make some compromises and trust the hardware, we get another
security model, used by modern smartphones: an encryption key is buried
inside the chip. I can boot the phone, but it's impossible to physically
bypass any software-based access control like passwords, since all Flash data
is encrypted. All hardware can be broken with physical access, and opening
the chip to extract the key may be cheaper than breaking into an HSM, but
it's still expensive in terms of cost and expertise. It's difficult to bypass
without an additional software vulnerability; this is a good enough threat
model and often desirable.

We can combine (2) and (3): save the actual secret outside the hardware so it
cannot be stolen, and at the same time implement hardware-based protection
that forces the attacker to mount an expensive physical attack before they
can bruteforce the secret. That is defense-in-depth and the best of both
worlds. What we have here is actually an OpenPGP card (Yubikey, Nitrokey) or
a Bitcoin wallet, which uses both mechanisms to protect the user from
thieves. For example, Nitrokey's implementation first encrypts the on-chip
OpenPGP private key with a user-supplied passphrase, and it also sets the
on-chip flash to be externally unreadable (readable only by the firmware
itself), so that the private key cannot be extracted. Finally, it has the
standard OpenPGP card access control: if multiple wrong passphrases are
attempted, it locks itself. Of course, this feature requires an inaccessible
on-chip flash - either the flash itself is on-chip, or an encryption key to
the flash is.
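A toy model of those two combined layers - a passphrase-derived key encrypting the stored secret, plus a chip-enforced retry counter - might look like this. All names are invented, and XOR stands in for a real cipher:

```python
import hashlib
import secrets

class TokenSketch:
    """Toy model of the combined defenses: the stored secret is encrypted
    under a passphrase-derived key (defense 2), and the 'chip' enforces a
    wrong-passphrase lockout (defense 3)."""
    MAX_TRIES = 3

    def __init__(self, passphrase: bytes, private_key: bytes):
        self.salt = secrets.token_bytes(16)
        pad = self._kdf(passphrase)
        # XOR as a stand-in stream cipher; real devices use AES.
        # (private_key must be at most 32 bytes for this toy.)
        self.blob = bytes(a ^ b for a, b in zip(private_key, pad))
        # Verifier so the chip can recognize a wrong passphrase.
        self.check = hashlib.sha256(pad + b"check").digest()
        self.tries = 0
        self.locked = False

    def _kdf(self, passphrase: bytes) -> bytes:
        # Slow key derivation raises the cost of offline bruteforce.
        return hashlib.pbkdf2_hmac("sha256", passphrase, self.salt, 10_000)

    def unlock(self, passphrase: bytes) -> bytes:
        if self.locked:
            raise RuntimeError("token locked")
        pad = self._kdf(passphrase)
        if hashlib.sha256(pad + b"check").digest() != self.check:
            self.tries += 1
            if self.tries >= self.MAX_TRIES:
                self.locked = True  # lockout: only the chip can enforce this
            raise ValueError("wrong passphrase")
        self.tries = 0
        return bytes(a ^ b for a, b in zip(self.blob, pad))
```

The lockout in `unlock` is exactly the part that evaporates if an attacker can replace the firmware, which is the failure mode described next.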

If the firmware executed by the chip can be replaced, the attacker can disable
all the access restrictions (disable the wrong-passphrase lockout and read
back the key), totally eliminating the hardware layer of defense, which is
not what we want here. Unfortunately, Nitrokey is based on a standard STM32
microcontroller, and its flash protection has already been broken. The
Nitrokey Pro remains "secure" - the real crypto is performed on an externally
inserted OpenPGP smartcard, powered by a "secure" microcontroller, but that
card is a partial blackbox and cannot be audited. When Yubikey said it was
unable to release its source, many recommended Nitrokey since it's "open
source" - and it is, but it depends on a Yubikey-like blackbox. If you want
to implement something better and more trustworthy than a Nitrokey or
Yubikey, your option is to write a FOSS implementation of that blackbox,
making it a whitebox. Not that the underlying FPGA can be audited - it cannot
be - but it's still much better than a complete blackbox.

And now back to the original topic: if your FPGA's bitstream encryption has a
vulnerability, it's game over. This is a serious problem. A response may be
that relying on bitstream encryption is not the correct approach and one
shouldn't use external Flash at all. Well, yes, but that is not my argument
here. My argument is simply that securing your hardware against an attacker
with physical access is a legitimate requirement, and that even if everything
can be broken with physical access, doing so still has a point.

~~~
baybal2
> A commercial HSM - used by CAs, businesses, banks to hold private keys -
> contains RF shields, tamper-detection switches, sensors for X-ray, light,
> temperature, battery-backed SRAM for self-destruction, and so on, it's
> extremely unlikely that anyone has ever succeeded to break into a HSM

A service to lift firmware from the Gemalto chips used in SIMs and credit
cards costs at most $25k here.

~~~
segfaultbuserr
I think there's some confusion. Are you sure we are talking about the same
thing? What I mean by a real HSM is something similar to an IBM 4758 [0]
(which was once vulnerable, but only because it had buggy software), not a
SIM card or a credit card chip. Are you implying that many HSMs are based on
the same Gemalto chip?

[0]
[https://www.cl.cam.ac.uk/~rnc1/descrack/ibm4758.html](https://www.cl.cam.ac.uk/~rnc1/descrack/ibm4758.html)

------
nabla9
I don't know much about FPGAs, but I tried to read the paper. Maybe someone
can tell me if this is correct:

Getting your hands on the raw gate configuration helps with cloning. It's a
PITA to reverse engineer.

Any device that uses Xilinx 7-Series or Virtex-6 SPI or BPI Flash remote
update is potentially fucked. There is an HMAC in the bitstream and no other
authentication.

------
SlowRobotAhead
People are glossing over how hard it is to take a bitstream - which is the
gate configuration of the fabric - and read it back into human logic like
Verilog. Extremely few people can do that, and it always takes a lot of time.

This is a big issue for cloning, though.

Oh AES CBC, when will you stop disappointing!?

------
lallysingh
So can I use this hack to use open source tools on these FPGAs?

~~~
firmnoodle
No. It means people will be able to copy FPGAs like was possible in the
2000s. It also means that the design in an FPGA could be altered by an
unauthorized third party without having to physically replace the device.

~~~
londons_explore
Some FPGAs _require_ bitstream encryption. On those devices, breaking this
encryption is the first of many steps toward making an open-source toolchain.

~~~
eqvinox
This is not the case for the FPGAs targeted here; encryption is optional on
the Xilinx 7-Series. Also, there is already an open source toolchain coming
up for them.

[http://www.clifford.at/yosys/](http://www.clifford.at/yosys/)

