
Exceptional failure: breaking the STM32F1 read-out protection - bcantrill
https://blog.zapb.de/stm32f1-exceptional-failure/
======
userbinator
IMHO, contrary to the tone of the article, this is cause for celebration, like
every other time DRM is broken. Now all the proprietary firmware in those
otherwise useless/insecure IoT devices etc. can be more easily
reverse-engineered and replaced, possibly driving more hardware reuse and
reducing e-waste.

[http://www.gnu.org/philosophy/right-to-
read.en.html](http://www.gnu.org/philosophy/right-to-read.en.html)

~~~
the_pwner224
DRM can be used for good as well as ill; it is about control. Most DRM is used
by corporations to control you. But you can use things like this readout
protection and Restricted ("Secure") Boot to keep control over your own
devices. If the BIOS lets you enroll your own keys (many do), Restricted Boot
prevents attackers from booting unauthorized software on your computer.
Similarly, readout protection just hides the code on the device; this is
useful for anyone who wants added security (security through obscurity is not
perfect, but it is always helpful).

In that regard this news is bad since it means a security tool has been
broken. But it is good in that the security tool was very often used by
evildoers.

Of course, both of these examples rely on you trusting the BIOS & RDP
implementation - ideally they would be open source.

~~~
saagarjha
> Similarly readout protection just hides the code on the device; this is
> useful for anyone who wants added security (security through obscurity is
> not perfect but it is always helpful).

Trying to hide your code is a stupid thing to do. Trying to hide cryptographic
keys is more useful (though still often only used for DRM applications) but
preventing people from dumping your firmware is misguided.

~~~
zimmerfrei
> Trying to hide your code is a stupid thing to do

That's not what I hear from reputable reverse engineers, at least for IoT
devices.

Even though security by obscurity should be frowned upon, and with the
understanding that hiding code might give a false sense of security, most RE
workflows assume the firmware is available, and it is a giant pain to start
breaking platforms where the unknown firmware must first be extracted
manually, especially the boot loader.

~~~
maxbond
I agree.

I've said before on this site that obscurity is a totally valid tactic to
impose additional costs on attackers. It's best to think of it as a preemptive
strike rather than a defensive layer. Thinking of it as a defensive layer can
lead to complacency, but thinking of it as an opening gambit is totally fine.

One shouldn't be slavishly deferential to heuristics like "never use security
through obscurity"; one should understand the systems they're building and
make considered choices.

Additionally, preventing people from dumping your firmware is usually not
about security so much as about preventing some fly-by-night company from
reversing your product and selling it as their own. Why engineer a product
when you can steal someone else's IP?

~~~
zdkl
> obscurity is a totally valid tactic to impose additional costs on attackers

But there's the rub: you'll only impose additional costs on the least
sophisticated/determined adversaries. While that works to keep random script
kiddies/scans out, I'd argue it has little to no effect if you require serious
security guarantees.

~~~
maxbond
It imposes costs on all attackers. The value of that cost is skill dependent,
but no one has unlimited time on their hands. In other contexts, like hiding
an admin login page, shutting out low-skill attackers means your log files
have a better signal-to-noise ratio, and you can focus more resources on the
more significant threats.

The reason I say to think of it as a preemptive strike rather than a defense
is that you still do need strong defensive layers.

This is basically just setting a compiler flag. It's free for you and costs
something for the attacker.

------
ohazi
To be fair, "restricting" flash readout while allowing hardware debug access
always seemed like a minefield, and I would hope that anyone with a security
sensitive application would have seen this from a mile away.

You could have a completely bug-free, constant-time, constant-power
cryptographic library running on one of these microcontrollers, and debug
access would allow you to reliably extract encryption keys just by examining
the execution path.

The amount of processor and system state that you have access to with a
hardware ARM debugger is crazy, but that isn't really the problem -- you can
extract a ton of state with a minimal debugger too. Just a log of instruction
pointer values would get you 90% of the way there.

I think it's reasonable to assume that microcontrollers with exposed debug
interfaces simply cannot be made secure, just as people generally assume that
it's game over once someone has physical access to a computer.

~~~
remcob
Modern cryptography libraries make sure that the execution path (and memory
access patterns) do not depend on sensitive data. Usually this is what is
meant by 'constant-time'.

If a debugger can read out registers or memory, you can of course just read
out the sensitive material directly.

~~~
ohazi
Whoops... Yeah, you're right. You wouldn't expect to see instructions like
bne, bge, etc. that depend on key material directly, so you wouldn't be able
to rely on the instruction pointer alone.

Instead, you might see instructions like addlt, so you'd also need to inspect
the value before and after, which, as you correctly state, the debugger will
happily let you do.

------
kosma
I've talked to Johannes Obermaier in the past... very nice guy. It's not their
first bypass, and hopefully not the last either.

PS. I actually have yet another STM32F1 RDP bypass in my archive, waiting to
be published. It used a technique where the MCU writes its own debug
registers... pretty crazy stuff. If only I had some free time to write a
proper publication about it...

~~~
userbinator
_If only I had some free time to write a proper publication about it..._

You could just drop some hints on a hardware forum and let the community
figure out the rest.

------
leggomylibro
I think you can thank this sort of hack for the widespread availability of
cheap cloned "ST-Link" debuggers. They use STM32F103 or F102 chips inside,
with firmware that was probably lifted from the debuggers on ST's evaluation
boards.

As recently as a few years ago, it was unusual to see standalone debugging
hardware in the $2-20 range. Sometimes I wonder if ST bristled at
the...reuse...of their IP, but it probably did more to promote STM32s as a
learning platform than anything that ST did in that time period.

~~~
osamagirl69
I am not sure if it was always the case, but at least with ST and
NXP/Freescale you can download the firmware for their debuggers from the
website for free. I suspect it was a strategic decision by ST to release
their dev kits cheap (<$10 for an STM32 dev board with programmer!) to
drive developer/hobbyist/educational interest, in hopes of people using their
chips in production.

Come to think of it, I think it was actually TI and the MSP430 that started
the trend, with the $4.30 kits with a socketed MSP430 micro and onboard
programmer. ST was the first to try it with an ARM as far as I know...

~~~
nrp
Both were likely a response to Arduino increasing the adoption of AVR without
Atmel having to do anything. I recall the MSP430 kits being pitched that way
in any case.

------
ajross
tl;dr: The processor protects data accesses to the internal flash while the
hardware debugger is connected so people with hardware access can't read out
the code and config. But this protection only applies to the data side of the
Harvard architecture buses. The instruction bus is used by the hardware to
fetch the reset vector on a hardware reset. But the vector table is under
software control. So by changing the reset vector to point to an arbitrary
address in flash, then resetting the CPU under the debugger, you can get it to
load your desired word from memory into the PC.

Pretty clever.

------
rollulus
I'm curious how they managed to get in contact with STM. I once discovered a
silicon error in their STM32F0 but failed to get in touch at all.

