
Show HN: The SC4-HSM, a fully open USB hardware secure module - lisper
https://sc4.us/hsm/
======
mcpherrinm
Calling this an HSM is somewhat disingenuous, as it seems to contain none of
the usual defenses found in a "real" hardware security module, such as
tamper-triggered self-destruction of key material, and likely no side-channel
protection.

This is essentially just putting your keys on a dedicated computer, connected
over USB. That does have value, mind you. Keeping your keys isolated
definitely makes it harder for malware to exfiltrate them, and a "trusted UI"
makes it clearer to the end user what's happening.

~~~
captainmuon
At least for me, this "HSM" makes the right trade-offs. I don't need tamper
self-destruction, for example. My adversaries are either: 1) thieves of
opportunity, or 2) the "world conspiracy" / nation state intelligence agencies
who suddenly might decide to spy on me if the world takes a dystopian turn.

For 1), tamper-proofness is total overkill. For 2), I doubt that any civilian
technology would withstand a hardware attack by e.g. the NSA. Just looking at
the equipment we have at our lowly, underfunded physics laboratory - probe
stations, decapping facilities, wirebonders - an expert could already defeat
many older hardware countermeasures with that. And I know there are much more
sophisticated techniques out there, e.g. to image chips and extract secrets.
Also, a variation of the "wrench" argument: who says that the vendors of
closed HSMs haven't all been forced to place backdoors?

The biggest realistic threat to me (by probability times possible damage) is
by far that I might catch a Trojan. This device provides at least partial
protection, in that it makes it hard for a remote attacker to sign messages or
log on in my name.

~~~
akiselev
> _I doubt that any civilian technology would withstand a hardware attack e.g.
> by the NSA._

You'd probably be wrong. There are several consumer-accessible chips (e.g.
from Atmel, orderable on Digi-Key) that use tamper-proof silicon, where the
very act of opening the package to inspect the die destroys the private key
data. Even civilian tamper-proof tech is extremely robust, using physical
phenomena to protect the data instead of obfuscation.

That doesn't preclude someone with the resources of the NSA from figuring it
out or backdooring the design, but the resources needed to defeat tamper-proof
devices are orders of magnitude greater than those needed to use them, and
that gap is constantly growing.

~~~
captainmuon
Well, that's probably true, but my only source for that is what people wrote
on the internet. OTOH, I know e.g. the NSA has basically infinite resources.

Anecdotally, I've heard that there are ways to destructively image complete
chips, and not just the layout but also the internal charge state. I believe
the technique used femtosecond lasers. Of course that is also hearsay and I
can't validate it.

My point is, as an interested layman, I don't know enough to trust this
technology with my life - but it would be useful precisely when my life
depended on keeping my secrets. That contradiction kind of reduces its value
for me.

I'm sure people with the required knowledge can properly evaluate these
products, and for those people they are probably immensely useful.

~~~
lisper
I would not bet my life savings against the NSA's ability to compromise the
SC4-HSM if they had physical possession. On the other hand, there is a simple
defense against physical compromise that would probably hold up even against
the NSA: encrypt the keys with a strong pass phrase. That wouldn't hold up
against a monkey wrench attack
([https://xkcd.com/538/](https://xkcd.com/538/)), of course, but there are
limits to what you can do for <$100.

------
jaekwon
Awesome. One question... why STM/ARM? I'm not an expert in the available
choices, but I'm generally curious about RISC-V devices because they're fully
open. Are the designs of this chip at least available and auditable, or could
it contain something like Intel's ME?

~~~
pjc50
Are RISC-V devices actually _available_ at all?

I'm not too worried about backdoors in microcontrollers, they're too small to
hide much.

 _Are the designs of this chip at least available, and auditable_

I'm not aware of any processors you can buy from current production that meet
this criterion (because it would involve complete exposure of that company's
IP). I suppose the fully reverse-engineered 6502 meets it.

~~~
kabdib
> I'm not too worried about backdoors in microcontrollers, they're too small
> to hide much.

You can hide a LOT on a chip, though for embedded systems the attacks are not
likely to be generic or to scale.

If I were a spook agency, I'd definitely try to compromise USB handling at a
low level. Most controllers I know of use DMA, so sneaking in a few hundred
gates would likely open up a whole SoC. The USB controllers are just libraries
that people buy; I don't know how closely they are audited by customers.

Hook that up to a compromised PC (read about system management CPUs...) and
that's the ball game.

Silicon is pretty opaque.

------
ComodoHacker
>The display and user buttons protect against an attack launched from the host
machine. Before the unit performs a cryptographic operation it can display
information about what it is about to do on the display and wait for the user
to confirm the operation by pushing one of the user input buttons on the unit.

As the author is here, I'd suggest adding a beeper as a way to alert about
potentially unsolicited access to the module.

~~~
lisper
There is a tri-color LED that is intended to be used for that purpose. Red
means something bad has happened :-)

------
mrsteveman1
I've been watching this project since I saw it mentioned in an HN comment some
time ago, it's nice to see stuff like this being worked on. Doubly so because
I envisioned building something similar a few years ago using the Beaglebone
Black as the platform, but this is _much_ better all around :)

------
watersb
I am impressed and thankful for this as open-source. You learn a lot when you
attempt to secure the hardware.

A good starting place.

------
kabdib
Hmmm, it has a display. This is going to last about two days in my pocket.

For the last year I've been carrying around a couple Yubikeys on my keychain.
They've been through the wash at least once. They're well nigh indestructible.

I don't know how you make something capable of user interaction AND immune to
keys, pocket fuzz and soapy water. It's a tall order.

~~~
halomru
>I don't know how you make something capable of user interaction AND immune to
keys, pocket fuzz and soapy water. It's a tall order.

One consumer device that has consistently delivered that is the waterproof,
rugged feature phone. In fact, taking an old Samsung B2100 and writing a Java
app to serve as a Bluetooth HSM probably leads to a cheap, serviceable
solution.

------
mmlopes
I've been looking for something like this for a while. Would it be feasible to
make something like this but with a Bluetooth interface instead (I'm concerned
about MITM attacks)? I've been wondering about the possibility of writing a
mobile app that uses a Bluetooth HSM to sign documents.

Great work and nice project, by the way.

~~~
slinger
Looking for a Bluetooth solution as well

~~~
lisper
Why? That seems fraught with all manner of peril.

------
unwind
Neat!

There's an interesting typo in the STM32F415 link, the first one in the bullet
list: it has an attribute called "jref" which should very likely be href. In
Firefox it's underlined, but not clickable. Very confusing.

~~~
lisper
Fixed. Thanks!

------
franciscop
Funny how this Open Source hardware screen uses half of it to display the
copyright. I know they are not related, but it still feels _weird_.

~~~
lisper
Heh, that didn't even occur to me. Bad optics. I'll fix it.

------
sschueller
Where can I find the HW schematics and the PCB layout?

~~~
lisper
Heh, you're the first person to ask. Let me see what I can do.

------
nickpsecurity
Security assessment is a bottom-up process. Most of the security in an HSM is
in the hardware. This is clearly not using secure ICs, as illustrated in the
smartcard market, so it's already going to hit a ceiling there. Let's look at
the datasheet anyway to see what can be built on the foundation of the
STM32F415. Keep in mind I rarely deal with MCUs or embedded stuff; some
questions or concerns may stem from my memory loss or lack of experience.

[http://www.st.com/content/ccc/resource/technical/document/da...](http://www.st.com/content/ccc/resource/technical/document/datasheet/98/9f/89/73/01/b1/48/98/DM00035129.pdf/files/DM00035129.pdf/jcr:content/translations/en.DM00035129.pdf)

Embedded flash, 512KB or 1MB, for programs and data, checked with a CRC unit.
There's some kind of accelerator between the flash memory and the main bus;
its effects on security or robustness are unknown. The flash might be writable
by an attacker, especially since it's flash, not ROM. What protections are
here to prevent that?

Embedded SRAM. Up to 192KB of system SRAM with up to 64KB of CCM data RAM. Has
4KB of backup SRAM "only accessible from CPU... content protected against
unwanted write accesses, and retained in Standby or VBAT mode." First, what
does "protected against unwanted" mean specifically? Second, does the HSM
software properly overwrite memory after use, check pointers, do periods
processing, and so on to prevent leaks via requests from the host or
compromise of a subsystem?

Memory protection unit. A simplified MMU that splits memory into 8 regions
with 8 sub-regions each, with sizes ranging from 32 bytes to 4GB of
addressable memory. Access control is typically managed by an RTOS. Does the
HSM properly use the MPU to isolate keys, cryptographic functions, the kernel,
and non-security-critical software? What's the TCB size, and what assurance
activities went into the component that manages this?
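To make the MPU question concrete, here is a sketch of what locking down a key-holding SRAM region might look like on a Cortex-M4, using CMSIS-style register names. The region number, base address, and attribute choices are illustrative assumptions, not the SC4-HSM's actual memory layout:

```c
/* Sketch only; assumes a CMSIS device header (e.g. "stm32f4xx.h") that
 * defines MPU, __DSB(), and __ISB(). Region size is 2^(SIZE+1) bytes. */
#define MPU_RASR_ENABLE      (1UL << 0)
#define MPU_RASR_SIZE_32KB   (14UL << 1)   /* 2^(14+1) = 32 KB */
#define MPU_RASR_AP_PRIV_RW  (1UL << 24)   /* privileged RW, unprivileged no access */
#define MPU_RASR_XN          (1UL << 28)   /* execute-never: keys aren't code */

void mpu_protect_key_region(void)
{
    MPU->RNR  = 0;                         /* select region 0 */
    MPU->RBAR = 0x20000000UL;              /* hypothetical SRAM base holding keys */
    MPU->RASR = MPU_RASR_ENABLE | MPU_RASR_SIZE_32KB
              | MPU_RASR_AP_PRIV_RW | MPU_RASR_XN;
    MPU->CTRL = MPU_CTRL_ENABLE_Msk | MPU_CTRL_PRIVDEFENA_Msk;
    __DSB(); __ISB();                      /* ensure the new settings take effect */
}
```

With this in place, unprivileged code (the "non-security-critical software" above) faults on any access to the key region, which is exactly the compartmentalization being asked about.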

DMA controller. Two units do what DMA always does: bypass all checks for raw
speed. Configured by software; attacks on that code, or on anything that can
reference it, might lead to DMA attacks. Curious what effect an attack on the
host, especially a driver, could have as far as divulging the MCU's secrets to
a malicious program on the host.

Interrupts. Can a compromised host flood the device for denial of service, or
use interrupts selectively to extract keys via a glitch in interrupt handling?
Doing just the flooding, while otherwise doing nothing, could also be used to
trick users into thinking the device is unreliable; they might then fall back
to a less confidential method.

"The boot loader is located in system memory. It is used to reprogram the
Flash memory by using USART1 (PA9/PA10), USART3 (PC10/PC11 or PB10/PB11), CAN2
(PB5/PB13), USB OTG FS in Device mode (PA11/PA12) through DFU (device firmware
upgrade)." This is a pile of stuff embedded people probably recognize. All I
know is it needs exploration to help determine exposure to boot- or flash-
level attacks.

"The 4-Kbyte backup SRAM is an EEPROM-like memory area. It can be used to
store data which need to be retained in VBAT and standby mode." This could be
useful for long-term key storage. Flash filesystems are harder to overwrite
since they put data all over the place to reduce wear on individual cells;
overwriting this is... with my limited memory... more straightforward. It also
preserves availability. To prevent thieves extracting them, the keys might be
encrypted before storage using a function that combines onboard software,
something entered through the host, and/or something done via buttons on the
device. If the host app deletes its secrets after use, then a simple
snatch-and-grab won't work.

Watchdog timers. These are useful in high-security systems to deal with HW-
and SW-level failures in the "known unknowns" and "unknown unknowns"
categories. Maybe set a watchdog during important cryptographic operations
that should complete within a certain time range, and/or for system operation
in general. If it activates, any secrets are cleared immediately and the user
must recover the device. Basically, the "Fail Safe, Hard, and Noticeably"
principle in action, given that who knows what led to the watchdog activating.
A catch-all.
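The pattern above maps naturally onto the STM32's independent watchdog (IWDG), which runs from the internal ~32 kHz LSI and, once started, cannot be stopped by software. A hedged sketch, with the timeout math as an illustrative assumption (not the SC4-HSM's actual firmware):

```c
/* Sketch only; assumes a CMSIS device header defining IWDG. The IWDG
 * key register values (0x5555 / 0xCCCC / 0xAAAA) are the STM32's
 * documented unlock/start/reload commands. */
void watchdog_arm_for_crypto_op(void)
{
    IWDG->KR  = 0x5555;   /* unlock the PR/RLR registers */
    IWDG->PR  = 3;        /* prescaler /32: ~32 kHz LSI -> ~1 kHz tick */
    IWDG->RLR = 1000;     /* ~1 second budget for the operation */
    IWDG->KR  = 0xCCCC;   /* start counting; cannot be stopped after this */
}

void watchdog_kick(void)  /* call only after each step completes sanely */
{
    IWDG->KR = 0xAAAA;    /* reload; missing the deadline resets the MCU */
}
```

On the reset that follows a missed deadline, startup code would clear secrets before doing anything else, which is the "fail safe, hard, and noticeably" behavior being described.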

"Secure digital input/output interface" I got unnecessarily excited: it was
just SDcard support. Oh well. :)

"USB controller... Internal USB DMA." The USB section doesn't tell me enough
to know the risks it might have. That depends on what software directs it,
what access it has to what memory, and so on. Then we can at least see where
the checks have to be put, or what code has to be perfect. The DMA part was in
the "high-speed" section but not "full-speed," so add the DMA concerns above
to the USB subsystem and its code if the HSM uses high-speed - and possibly
even full-speed, if DMA stays on and accessible anyway because the designers
just assumed people wouldn't try to use it, rather than forcing that to be
true.

"embed a cryptographic accelerator" This always concerns me, as I have trust
issues with these. Let's say we trust them. I'd apply all the tests available
for these; past that, do security at the interface, design-by-contract style,
plus extra code review to make sure they're used correctly. Gutmann's security
kernel in cryptlib is a nice example. The datasheet also mentions it supports
DMA request generation; if that's always on, then add the DMA considerations
to the protocol engine that uses these.

"delivers 32-bit random numbers generated by an integrated analog circuit"
Have at least one unit doing nothing but outputting analog noise all day for
weeks. Run all randomness tests on the whole set, plus subsets of it, to see
if the statistical properties change significantly during specific periods of
time. This is good practice in general to assess effectiveness, but it's
especially important with analog in case they chose a shitty noise source that
varies with environmental factors. I recall the BitBabbler lead says they've
kept a whole pile of them doing this since the first prototypes, to keep the
sample size itself statistically significant. That approach is even better if
one has the resources to do it.

JTAG and embedded trace macrocell. Well, these are always interesting from an
attacker's perspective. :) Maxim says there's no security once a device is in
the attacker's physical possession. Let's not make it easy, though. A basic
mitigation here is requiring the user to activate debug access with their
password, PIN, or some button presses. Lacking that, anything the software
sees indicating unauthorized access can lead to nuking the secrets. That
should stop the majority of attackers, who are casual or script kiddies. Part
of that backup 4K SRAM might be used to keep track of anomalies that would
otherwise disappear as the attacker cycles power while experimenting.

So, that's the hardware. It has the potential, as smartcards illustrate, to
perform the job at hand, with numerous risks that must be assessed and
countered. Anything on the host that configures the device, makes requests
through it, or interrupts it is a security threat to be mitigated somehow.
Protocol engines need sanity checks and should be coded in a way amenable to
any and all forms of static/dynamic analysis. The MPU should compartmentalize
secrets and the kernel from the rest. Long-term keys can be stored in flash
and/or backup SRAM, with the SRAM having some advantages. Other SRAM should be
overwritten after use, including memory for intermediate steps. Any onboard
registers should be overwritten during a context switch (periods processing)
to untrusted code. Operations should take fixed time, with careful attention
to shared memory/resources to mitigate timing channels. Make damn sure they
can't corrupt the boot code. I still don't know exactly how that's installed
or protected from software attack; need help from MCU or embedded people here.
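The fixed-time requirement has a classic concrete instance: comparing a received MAC or tag against the expected one. A plain memcmp returns at the first differing byte, which a host-side attacker can time; the standard fix is a comparison with no data-dependent branches. A minimal sketch (hypothetical, not from the SC4-HSM sources):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time equality check: OR together the XOR of every byte
 * pair, so the loop always runs to completion and the time taken
 * leaks nothing about where the first mismatch occurs.
 * Returns 1 if equal, 0 otherwise. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}
```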

The system level I'm unclear on, as lisper hasn't given me a full use-case
list. The basics are to make sure keys are generated properly, used properly,
never touch the host, and come from a trustworthy TRNG. These can be met with
the above methods if the identified risks are knocked out or turn out not to
be a concern. Security-oriented ICs, as found in EAL5+ or EAL6+ smartcards,
are _much_ more suited to this stuff, as they put more effort into HW
mitigations and assurance of the TCB implementation. The Ledger Blue wallet
uses an EAL6+ IC, as an example. Consider using one of them in a future
version of the product. Additionally, pull up the MCU errata page to look at
any CPU errata in the specific product causing crashes or weird execution,
especially in the MPU. Malware and smart pentesters have used those in the
past.

Emanation or HW-level attacks will still work on this device, since it has no
mitigations. That means one use-case, using it on untrustworthy computers in
public, cannot be performed with this device in the event a person's secrets
are high-value to the Five Eyes, Israeli, or Russian intelligence services
known to use those attacks, plus any high-end contractors that can do the
same. Security researchers are also publishing more over time, which means
blackhats will eventually use them. An associate pointed out that one of the
designs for a card computer (Bitcoin?) used red-black separation in the PCB
itself, to a degree, to reduce the crypto zone's ability to electrically
affect the untrusted zone. Physical red-black separation was used by the
military for decades to reduce leaks. Cheaper solutions like those might be
used to reduce the impact a bit, but users should be warned this risk remains
for high-value targets.

So, that's my brief review of this product at just the hardware and system
levels. I haven't looked at the app level to determine whether how it's used
or implemented meets the criteria; there are risks there, too. Might look into
them. For now, I think lisper has plenty to think about on the hardware &
implementation side of things. He will be busy. :)

~~~
lisper
Wow, thanks for that extensive review! I'll need some time to digest all that.

------
sillysaurus3
Totally unrelated, but I was looking through their repo and discovered
tinyscheme:

[https://github.com/Spark-Innovations/sc4-hsm/blob/master/src...](https://github.com/Spark-Innovations/sc4-hsm/blob/master/src/tinyscheme/scheme.h)

[https://github.com/Spark-Innovations/sc4-hsm/blob/master/src...](https://github.com/Spark-Innovations/sc4-hsm/blob/master/src/tinyscheme/scheme.c)

Neat! Super-readable, self-contained, and from 1994. What's not to love?

~~~
lisper
TinyScheme is indeed very cool, but it's a bit of a parlor trick to run it on
the HSM. I don't think you'd actually want to use that in production. But it's
awfully neat to run Scheme on a gadget the size of a lipstick case. :-)

Here's the original TinyScheme site:

[http://tinyscheme.sourceforge.net/home.html](http://tinyscheme.sourceforge.net/home.html)

~~~
sillysaurus3
 _it's a bit of a parlor trick to run it on the HSM. I don't think you'd
actually want to use that in production._

Challenge accepted! I bet I could port Arc to it. Hmm...

It'd work. Eventually.

Thanks for the link. I'm intrigued by the idea of lisp-powered businesses. I
wonder if there's a survey of the state of non-Clojure lisps circa 2016?
Nothing against Clojure, of course. It powers many businesses. I'm just
curious how the rest are faring.

I think I'd like to work somewhere that intentionally uses a less-popular
language just because they feel like it. Or that makes Scheme-powered devices
the size of a lipstick case.

~~~
lisper
> It'd work. Eventually.

I have no doubt you could get it to run. Arc is just a thin layer on top of
Scheme. The question is whether or not it's prudent to run any Lisp on a
security device. I'm as big a Lisp fan as they come (look at my handle!) and
running Lisp on an HSM makes me a little queasy.

> I'm intrigued by the idea of lisp-powered businesses.

My back-end server is using Clozure Common Lisp, and I certainly believe in
using Lisp to cross-compile code for the HSM. There are a lot of ways to
leverage Lisp besides actually running it on the device itself. But I have a
lot of other Lispy tricks up my sleeve. Stay tuned!

~~~
sillysaurus3
Please write about this! It's awesome. Basically everything you're saying. If
you could just put all of that into about 9300 words sometime, that'd be
great. :)

~~~
lisper
OK, watch my blog

