Show HN: The SC4-HSM, a fully open USB hardware secure module (sc4.us)
137 points by lisper on July 8, 2016 | hide | past | favorite | 61 comments



Calling this an HSM is somewhat disingenuous, as it seems to contain none of the usual defenses found in a "real" hardware security module, like tamper self-destruction of key material, and likely no side-channel protection.

This is essentially just putting your keys on a dedicated computer, connected over USB. That does have value, mind you. Keeping your keys isolated definitely makes it harder for malware to exfiltrate them, and a "trusted UI" makes it clearer to the end user what's happening.


At least for me, this "HSM" makes the right trade-offs. I don't need tamper self-destruction, for example. My adversaries are either: 1) thieves of opportunity, or 2) the "world conspiracy" / nation state intelligence agencies who suddenly might decide to spy on me if the world takes a dystopian turn.

For 1) tamper-proofness is total overkill. For 2), I doubt that any civilian technology would withstand a hardware attack e.g. by the NSA. If I just look at the equipment we have at our lowly, underfunded physics laboratory - probe stations, decapping facilities, wirebonders, an expert can already defeat many older hardware countermeasures with that. And I know there are much more sophisticated techniques out there, e.g. to image chips and extract secrets. Also, a variation of the "wrench" argument: Who says that the vendors of closed HSMs have not all been just forced to place backdoors?

The biggest realistic threat to me (by probability times possible damage) is by far that I might catch a Trojan. This device provides at least partial protection, in that it makes it hard for a remote attacker to sign messages or log on in my name.


> I doubt that any civilian technology would withstand a hardware attack e.g. by the NSA.

You'd probably be wrong. There are several consumer-accessible chips (e.g. from Atmel, orderable on Digi-Key) that use tamper-proof silicon, where the very act of opening up the package to inspect the die destroys the private key data. Even civilian tamper-proof tech is extremely robust, using physical phenomena to protect the data instead of obfuscation.

That doesn't preclude someone with the resources of the NSA from figuring it out or backdooring the design, but the resources needed to defeat tamper-proof devices are orders of magnitude greater than those needed to use them, and that gap is constantly growing.


Well, that's probably true, but my only source for that is what people wrote on the internet. OTOH, I know e.g. the NSA has basically infinite resources.

Anecdotally, I've heard that there are ways to destructively image complete chips, and not just the layout but also the internal charge state. I believe the technique used femtosecond lasers. Of course that is also hearsay and I can't validate it.

My point is, as an interested layman, I don't know enough to trust this technology with my life - but it would be most useful precisely when my life depended on keeping my secrets. So that contradiction kind of reduces its value for me.

I'm sure people with the required knowledge can properly evaluate these products, and for those people they are probably immensely useful.


I would not bet my life savings against the NSA's ability to compromise the SC4-HSM if they had physical possession. On the other hand, there is a simple defense against physical compromise that would probably hold up even against the NSA: encrypt the keys with a strong pass phrase. That wouldn't hold up against a monkey wrench attack (https://xkcd.com/538/), of course, but there are limits to what you can do for <$100.
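
The passphrase defense described above can be sketched in a few lines. This is a hypothetical illustration, not the SC4-HSM's actual scheme: derive an encryption key from the passphrase with a slow KDF, and store only the encrypted blob on the device, so physical possession alone yields nothing.

```python
# Sketch of passphrase-protected key storage (assumption: illustrative only,
# not the SC4-HSM's real design). Only the encrypted blob is stored on the
# device; the passphrase never leaves the user's head.
import hashlib, hmac, os, secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Slow KDF so brute-forcing the passphrase is expensive.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy HMAC-counter keystream; a real design would use AES-GCM or similar.
    stream = bytearray()
    for block in range((len(data) + 31) // 32):
        stream += hmac.new(key, block.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

salt = os.urandom(16)
secret_key = secrets.token_bytes(32)  # e.g. a signing-key seed
blob = xor_keystream(derive_key(b"correct horse", salt), secret_key)

# The right passphrase recovers the key; a wrong one yields garbage.
assert xor_keystream(derive_key(b"correct horse", salt), blob) == secret_key
assert xor_keystream(derive_key(b"wrong phrase", salt), blob) != secret_key
```

As the comment notes, this holds up against physical theft but not against a wrench, and not against a pwned host that captures the passphrase as it's entered.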


Infinite resources don't exist.

Something I was told back when I was working around WikiLeaks is, "You can't make it impossible for them to get your goat, but you want to make it very, very expensive."


I see this as a lot like the "GPG is very secure but no one uses it" scenario. People who have certain needs can open up their wallets for certain products.

The adversary for the vast majority of practical scenarios is some remote attacker exploiting some unpatched vulnerability.

How much less of an issue would all the recent password dumps be if the hash algorithm was a simple HMAC on a device like this?
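
The idea is easy to sketch. In this hypothetical setup (names are illustrative), the server stores HMAC(device_secret, password) instead of a bare hash, where the secret lives only inside the HSM, so a leaked database is useless on its own:

```python
# Sketch: password hashing keyed by a secret held in hardware. The
# 'device_secret' is a stand-in for a key generated on and locked inside
# the HSM; on real hardware the HMAC would be computed on the device.
import hmac, hashlib

device_secret = b"\x00" * 32  # illustrative; never extractable in practice

def hash_password(password: str) -> bytes:
    return hmac.new(device_secret, password.encode(), hashlib.sha256).digest()

stored = hash_password("hunter2")

# Verification also round-trips through the device:
assert hmac.compare_digest(stored, hash_password("hunter2"))
assert not hmac.compare_digest(stored, hash_password("hunter3"))
```

An attacker with the dump alone can't run an offline dictionary attack; every guess would have to go through the device holding the key.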

And surely a bitcoin wallet has a better chance on this device, than sitting in a file on my desktop just hoping the password never gets key logged.

And, more so than any other point: at the current price, this device is actually plausible to experiment with.


> at the current price point, this device is actually plausible to experiment with

Yes, that's really the idea at this point. Neither the hardware nor the firmware has been audited, so it would be unwise to use it to protect anything mission-critical at the moment (which is why there's a big disclaimer to that effect on the order page). On the other hand, the approach does seem to be holding up well to initial scrutiny.


> Calling this an HSM is somewhat disingenuous

Well, I had to call it something. I'm open to suggestions.

> tamper self-destruction of key material

That's true, but there is readout protection built into the SoC.

> and likely no side-channel protection

Side-channel protection is provided by the software, which is a side-channel-resistant implementation.

But the principal attack model is a compromised client machine, not physical compromise of the device. There's only so much you can offer for <$100.


At least in my mind, the defining attribute of an HSM is protection against physical compromise of the device.

Maybe we disagree on that definition: I guess it is hardware, and it exists for security, and it's a module.

I admit it's unfair to compare this with a $10,000 HSM from Thales or Safenet, of course. But it seems like a smartcard based solution might be better on that front, as they're designed to withstand physical attacks.

You can defend against some side channels (timing from the host, for example) but I was thinking more like EM/RF emissions.

I want to be clear that I think this is still a great project, and I commend your work so far!


>> At least in my mind, the defining attribute of an HSM is protection against physical compromise of the device.

FIPS definitions help a bit here.

A hardware FIPS 140 Level 2 cryptographic module (HSM) is only required to show evidence of tampering. A $5 tamper-evident seal is often the solution. Note that applying a seal to the module in the link is not sufficient, because I can compromise the thing without opening it (just press the button and reflash with non-signed images).

Level 3 is a step up in that you must have some degree of tamper resistance, but in several cases, most of these security behemoths just drop a layer of epoxy on the board and call it a day. Then they sell it to you for $10,000.

With Level 4 things get serious, but then you can count your suppliers on one hand and pay accordingly (and Safenet is not one of them).

I agree that a smartcard-based solution would be ideal, but the specs, native firmware, and SDK are never open, so you cannot build an open-source product with them.


> because I can compromise the thing without opening it (just press the button and reflash with non-signed images).

I don't think so. With RDP enabled the only thing the bootloader lets you do is erase all flash. After that RDP is disabled and you can write your own image but at that point the keys are already gone.


> smartcard

Those are not at all secure against compromised hosts.

> EM/RF emissions

You probably don't want to use this to secure the nuclear launch codes. But I think you'd have to work pretty damn hard to get anything out of it. There are no external parts here, it's all one SoC, one that is specifically designed to keep proprietary code out of the hands of Chinese hackers. If it were easy to attack it would lose most of its market.

> I commend your work so far!

Thanks!


>> smartcard

> Those are not at all secure against compromised hosts.

How so? Certainly there are poorly designed applications on smartcards that are vulnerable, but the smartcard itself should be fine.


If an attacker pwns your host they can do anything you can do. If you can access the smartcard, then the attacker can too. As soon as you plug the card in, the card is just as compromised as the host.

This is the reason that the SC4-HSM has a built-in display and user inputs. These serve the same purpose as the display on a PoS terminal, except that because they are a part of the HSM itself, they can't be compromised by an attacker who pwns the host that the HSM is plugged in to.

An attacker can still attempt a phishing attack, but this will be much, much harder. And for really mission-critical applications, an attacker would have to control every possible means of computing a secure hash on the host machine.
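
The trusted-display flow described above can be modeled in a few lines. This is a toy model with invented names, not the SC4-HSM's actual API: the host can request a signature, but the device only signs after showing the request on its own display and getting a physical button press.

```python
# Toy model of the display-and-confirm flow (class and method names are
# illustrative, not the real SC4-HSM interface).
import hashlib
from typing import Optional

class ToyHSM:
    def __init__(self, key: bytes):
        self._key = key  # never leaves the device

    def sign(self, message: bytes, button_pressed: bool) -> Optional[bytes]:
        # The device's own display shows what is about to be signed,
        # so a pwned host can't silently substitute a different message.
        print(f"DISPLAY: sign {hashlib.sha256(message).hexdigest()[:16]}... ?")
        if not button_pressed:
            return None  # user declined; the attacker gets nothing
        # Stand-in for a real signature primitive:
        return hashlib.sha256(self._key + message).digest()

hsm = ToyHSM(b"k" * 32)
assert hsm.sign(b"pay mallory 100 BTC", button_pressed=False) is None
assert hsm.sign(b"pay alice 1 BTC", button_pressed=True) is not None
```

The point is that the confirmation path (display plus button) never passes through the compromised host, which is exactly what a bare smartcard lacks.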

I should probably write a paper about this.


I thought that the whole point of a smartcard is that your keys, once loaded, can never be extracted again, even by the host. Rather than passing secrets to the host directly, there's a miniature PKI that does request/signature, secured by a password, using a hardware pinpad if you're very serious.

Little window for error too, since screwing up the password some small number of times results in the card being blocked, and in some cases, destroyed.

Even if the host is running hostile code, the worst it could do is forge signing requests?


> screwing up the password

If the host machine is pwned then it's not hard to steal your password.

> the worst it could do is forge signing requests

Well yeah, that's true. But if the card is protecting a really sensitive asset (like your bitcoin account) then a single forged request can do an awful lot of damage.


The point being that the malware either has to horn in on a legitimate request (and I'm not sure if that's possible or not given how PKI works), or issue its own request, which the user has to authorize with a password, ideally out of band using a hardware pinpad.


> ideally out of band using a hardware pinpad

Not "ideally". Critically. Without an out-of-band PIN the card is easy to compromise. And most cards don't have input devices.

And even with a secure PIN entry, without a display you can't actually know what transaction you are authorizing. So yes, the malware would have to "horn in" on a legitimate request. But if your machine is pwned that's not hard.


just thinking out loud. definitely not disagreeing with your hardware choice.

a good idea for a pi zero?


You could certainly do something like this with a Pi, but it would be bulkier and probably less secure. My initial prototyping was done with a Teensy 3. It all depends on your threat model.


it's great that we are getting access to so many sweet tiny boards. just in case - pi zero: https://www.raspberrypi.org/products/pi-zero/

i was thinking more feature set than threat model. Namely secure at-rest storage and a password generator for secure comms. i believe the pi has a hardware RNG, which is probably the most essential thing for such devices.


We disagree on that definition. The defining attribute of an HSM is protection against software compromise of the main computer.

Having provided near-perfect defense against a software-only adversary, many HSM vendors then add layers and layers of protection for the hardware. But the real key is ensuring that someone who gets root on your server can use your keys but not copy them.


> there is readout protection built into the SoC

I have managed to bypass readout protection on STM32F1 with just a few nights of hacking and $2 in parts; I suspect F4 isn't much different. I wouldn't trust any general purpose MCU in this regard.


The RDP on the F4 is more capable than the F1. The F1 only has level 1 RDP, the F4 has level 2 as well. If you can break level 2 without decapping the chip that would be big news (and probably result in a class action lawsuit against STM). Can you describe how you did it? [EDIT: Never mind, I see you already answered that in the other branch of this thread.]


I had a look at the documentation and it seems they finally made it a JTAG fuse and not just a value in program memory. Nice move.


Nice, I'd wondered how hard that would be. Could you share details?


The FPB (flash patch and breakpoint) peripheral on Cortex-M3 allows live-patching the flash contents; this allows you to temporarily change the reset vector. That, plus careful supply-voltage glitching to reset some peripherals but not others, and the device is pwned.


Nice! But it sounds like it would not be effective against RDP level 2.


Awesome. One question... why STM/ARM? I'm not an expert in the available choices, but I'm generally curious about RISC-V devices because they're fully open. Are the designs of this chip at least available and auditable, or could it contain something like Intel's ME?


> Awesome.

Thanks!

> why STM/ARM?

The STM32F415 is specifically designed for crypto and security-sensitive applications. It has built-in support for AES and other crypto primitives (though I'm not actually using any of those at the moment), a hardware random number generator, and built-in readout protection modes that prevent you from externally accessing the flash or the RAM. And it has plenty of flash and RAM for this sort of application, and it's reasonably priced. If you designed a SoC from scratch for this application you couldn't do much better than this chip.

> Are the designs of this chip at least available and auditable, or could it contain something like Intel's ME?

No, I don't think so. And yes, it's possible there's a back door in there, but I think it's unlikely. The market for this chip is embedded devices with sensitive code. If it became known that there was a back door, that would destroy the market for the chip.

But if you want one based on a RISC-V I'd be happy to discuss doing a custom development.


It looks like a very nice piece of kit - I just might have to get one :-) But speaking of hardware, I was recently reminded of the new BBC:Micro[m] project thanks to an email by the PythonAnywhere[p]-team.

I wonder how much of the features could be implemented on that? I'm guessing that while the micro-usb might allow for power and data connection to the host - there'd probably not be a way to prevent compromise when connected (I'm guessing malware could reprogram the device without overwriting the stored keys). Thoughts?

[m] https://www.microbit.co.uk/device

[p] https://www.pythonanywhere.com/

[ed: As for adding (some) tamper-proofing to either device, remember the glitter-nailpolish-picture-trick: https://www.wired.com/2013/12/better-data-security-nail-poli...

I wonder if it would be possible to epoxy up the BBC:Micro's micro USB port, and use the connectors/headers for communicating, possibly emulating USB 1.1 or something, with an soldered connector. ]


> I just might have to get one

Better hurry. My stock of prototypes is very nearly sold out.

> BBC:Micro[m] project

I have no idea about that particular device, but very few things end up being secure by accident. If it's not designed for security like the STM32F415 is, then I'd say odds are good it's not secure.


Sorry (happy :) to hear that you're running out. Might actually be in a situation soon where it will make more sense to hold a workshop and build a good handful of the things.

Agreed on the "accidentally secure" part. Then again, if you could drench a full x86 pc in epoxy, it might be possible to make "reasonably secure" blob if all you left open was a serial port. Might.


Are RISC-V devices actually available at all?

I'm not too worried about backdoors in microcontrollers, they're too small to hide much.

> Are the designs of this chip at least available, and audit-able

I'm not aware of any processors you can buy from current production that meet this criterion (because it would involve complete exposure of that company's IP). I suppose the fully reverse-engineered 6502 meets it.


> I'm not too worried about backdoors in microcontrollers, they're too small to hide much.

You can hide a LOT on a chip, though for embedded systems the attacks are not likely to be generic, or scale.

If I were a spook agency, I'd definitely try to compromise USB handling, at a low level. Most controllers I know of use DMA, so it's likely that sneaking in a few hundred gates would open up a whole SoC. The USB controllers are just libraries that people buy; I don't know how closely they are audited by customers.

Hook that up to a compromised PC (read about system management CPUs...) and that's the ball game.

Silicon is pretty opaque.


> I'm not too worried about backdoors in microcontrollers, they're too small to hide much.

Have you read the article "A2: Analog Malicious Hardware" [1]? You really don't need much space to insert a backdoor in silicon.

[1] http://ieee-security.org/TC/SP2016/papers/0824a018.pdf


> The display and user buttons protect against an attack launched from the host machine. Before the unit performs a cryptographic operation it can display information about what it is about to do on the display and wait for the user to confirm the operation by pushing one of the user input buttons on the unit.

As the author is here, I'd suggest adding a beeper as a way to alert about potentially unsolicited access to the module.


There is a tri-color LED that is intended to be used for that purpose. Red means something bad has happened :-)


I've been watching this project since I saw it mentioned in an HN comment some time ago, it's nice to see stuff like this being worked on. Doubly so because I envisioned building something similar a few years ago using the Beaglebone Black as the platform, but this is much better all around :)


I am impressed and thankful that this is open source. You learn a lot when you attempt to secure hardware.

A good starting place.


Hmmm, it has a display. This is going to last about two days in my pocket.

For the last year I've been carrying around a couple Yubikeys on my keychain. They've been through the wash at least once. They're well nigh indestructible.

I don't know how you make something capable of user interaction AND immune to keys, pocket fuzz and soapy water. It's a tall order.


> I don't know how you make something capable of user interaction AND immune to keys, pocket fuzz and soapy water. It's a tall order.

One consumer device that has consistently delivered that is the waterproof, rugged feature phone. In fact, taking an old Samsung B2100 and writing a Java app to serve as a Bluetooth HSM probably leads to a cheap, serviceable solution.


I've been looking for something like this for a while. Would it be feasible to make something like this but with a Bluetooth interface instead? (I'm concerned about MITM attacks.) I've been wondering about the possibility of writing a mobile app that uses a Bluetooth HSM to sign documents.

Great work and nice project by the way


Looking for a Bluetooth solution as well


Why? That seems fraught with all manner of peril.


Thanks!


Neat!

There's an interesting typo in the STM32F415 link, the first one in the bullet list: it has an attribute called "jref" which should very likely be href. In Firefox it's underlined, but not clickable. Very confusing.


Fixed. Thanks!


Funny how this open-source hardware device uses half of its screen to display the copyright. I know they are not related, but it still feels weird.


Heh, that didn't even occur to me. Bad optics. I'll fix it.


Where can I find the HW schematics and the PCB layout?


Heh, you're the first person to ask. Let me see what I can do.


Security assessment is a bottom-up process. Most security in an HSM is in the hardware. This is clearly not using secure ICs, as the smartcard market illustrates, so it's already going to hit a ceiling there. Let's look at the datasheet anyway to see what can be built on the foundation of the STM32F415. Keep in mind I rarely deal with MCUs or embedded stuff. Some questions or concerns will come from my memory loss or lack of experience.

http://www.st.com/content/ccc/resource/technical/document/da...

Embedded flash, 512KB or 1MB, for programs and data. Checked with a CRC unit. There's some kind of accelerator between the flash memory and the main bus. Effects on security or robustness unknown. The flash might be writable by an attacker, especially since it's flash, not ROM. What protections are here to prevent that?

Embedded SRAM. Up to 192KB of system SRAM with up to 64KB of CCM data RAM. Has 4KB backup SRAM "only accessible from CPU... content protected against unwanted write accesses, and retained in Standby or VBAT mode." First, what does "protected against unwanted" mean specifically? Second, does the HSM software properly overwrite memory, check pointers, do periods processing, and so on to prevent leaks via requests from the host or compromise of a subsystem?

Memory protection unit. A simplified MMU splits memory into 8 areas with 8 sub-areas, ranging from 32 bytes to 4GB of addressable memory. Access control is typically managed by an RTOS. Does the HSM properly use the MPU to isolate keys, cryptographic functions, the kernel, and non-security-critical software? What's the TCB size, and what assurance activities went into the component that manages this?

DMA controller. Two units do what DMA always does: bypass all checks for raw speed. Configured by software. Attacks on that code, or anything that can reference it, might lead to DMA attacks. Curious what effect an attack on the host, especially a driver, could have as far as divulging the MCU's secrets to a malicious program on the host.

Interrupts. Can a compromised host flood the device for DoS, or use interrupts selectively to extract keys via a glitch in interrupt handling? Flooding while otherwise doing nothing could also be used to trick users into thinking the device is unreliable; they might then use a less confidential method as backup.

"The boot loader is located in system memory. It is used to reprogram the Flash memory by using USART1 (PA9/PA10), USART3 (PC10/PC11 or PB10/PB11), CAN2 (PB5/PB13), USB OTG FS in Device mode (PA11/PA12) through DFU (device firmware upgrade)." This is a pile of stuff embedded people probably recognize. All I know is it needs exploration to help determine exposure to boot- or flash-level attacks.

"The 4-Kbyte backup SRAM is an EEPROM-like memory area. It can be used to store data which need to be retained in VBAT and standby mode." This could be useful for long-term key storage. Flash filesystems are harder to overwrite since they put data all over the place to reduce wear on individual cells. Overwriting this is... with my limited memory... more straightforward. It also preserves availability. To prevent thieves extracting the keys, they might be encrypted before storage using a function that combines onboard software, something entered through the host, and/or something done via buttons on the device. If the host app deletes its secrets after use, then a simple snatch-and-grab won't work.

Watchdog timers. These are useful in high-security designs to deal with HW- and SW-level failures in the "known unknowns" and "unknown unknowns" categories. Maybe set a watchdog during important cryptographic operations that should complete within a certain time range, and/or for system operation in general. If it activates, any secrets are cleared immediately, then the user must recover the device. Basically, the "Fail Safe, Hard, and Noticeably" principle in action, given who knows what led to the watchdog activating. A catch-all.

"Secure digital input/output interface" I got unnecessarily excited: it was just SDcard support. Oh well. :)

"USB controller... Internal USB DMA." The USB section doesn't tell me enough to know the risks it might have. It depends on what software directs it, what access it has to what memory, and so on. Then we can at least see where the checks have to be put, or what code has to be perfect. The DMA part was in the "high-speed" section but not "full-speed." So, add the DMA concerns above to the USB subsystem and code if the HSM uses high-speed. Possibly also for full-speed, if DMA stays on and accessible anyway - where designers just assumed people wouldn't try to use it rather than forcing that to be true.

"embed a cryptographic accelerator" This always concerns me, as I have trust issues with these. Let's say we trust them. I'd apply all the tests available for these. Past that, do security at the interfaces, design-by-contract style, plus extra code review to make sure they're used correctly. Gutmann's security kernel in cryptlib is a nice example. The datasheet also mentions the accelerator supports DMA request generation. If it's always on, then add the DMA considerations to the protocol engine that uses these.

"delivers 32-bit random numbers generated by an integrated analog circuit" Have at least one unit doing nothing but outputting analog noise all day for weeks. Run all randomness tests on the whole set, plus subsets of it, to see if statistical properties change significantly during specific periods. This is good practice in general to assess effectiveness, but it's especially important with analog, in case they chose a shitty noise source that varies with environmental factors. I recall the BitBabbler lead says they keep a whole pile of units doing this, running since the first prototypes, to keep the sample size statistically significant. That approach is even better if one has the resources to do it.
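
The simplest of those randomness tests is easy to show. This sketch runs a monobit (frequency) check, with `os.urandom` standing in for the hardware RNG; a real test battery (dieharder, NIST SP 800-22) would go far beyond this:

```python
# Toy monobit test in the spirit described above; os.urandom is a stand-in
# for raw TRNG output read from the device.
import os

def monobit_ok(data: bytes, tolerance: float = 0.01) -> bool:
    # For a good source the fraction of 1-bits should sit very near 0.5.
    ones = sum(bin(b).count("1") for b in data)
    return abs(ones / (len(data) * 8) - 0.5) < tolerance

sample = os.urandom(1 << 20)  # 1 MiB of "TRNG" output
assert monobit_ok(sample)

# A stuck-at-zero source (e.g. a failed noise diode) fails immediately:
assert not monobit_ok(b"\x00" * 1024)
```

Running this continuously on subsets over time, as suggested, is what would catch a noise source drifting with temperature or voltage.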

JTAG and embedded trace macrocell. Well, these are always interesting from an attacker perspective. :) Maxim says there's no security once the device is in the attacker's physical possession. Let's not make it easy, though. A basic mitigation here is requiring the user to activate these with their password, PIN, or some button presses. Lacking that, anything the software sees indicating unauthorized access can lead to nuking the secrets. That should stop the majority of attackers, who are casual or script kiddies. Part of that backup 4K SRAM might be used to keep track of anomalies that would otherwise disappear as the attacker power-cycles the device while experimenting.

So, that's the hardware. It has the potential, as smartcards illustrate, to perform the job at hand, with numerous risks that must be assessed and countered. Anything on the host that configures, makes requests through, or interrupts it is a security threat to be mitigated somehow. Protocol engines need sanity checks and must be coded in a way amenable to any and all forms of static/dynamic analysis. The MPU should compartmentalize secrets and the kernel from the rest. Long-term keys can be stored in flash and/or backup SRAM, with the SRAM having some advantages. Other SRAM should be overwritten after use, including memory for intermediate steps. Any onboard registers should be overwritten during a context switch (periods processing) to untrusted code. Operations should take fixed time, with careful attention to shared memory/resources to mitigate timing channels. Make damn sure they can't corrupt the boot code. I still don't know exactly how that's installed or protected from software attack. Need help from MCU or embedded people here.
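
One concrete instance of the fixed-time requirement mentioned above is secret comparison (e.g. checking a PIN or MAC). A sketch of the difference, assuming nothing about the actual firmware:

```python
# Timing-channel illustration: the naive compare leaks how many leading
# bytes matched via its running time; the constant-time version does not.
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # BAD: returns at the first mismatch, so response time reveals
    # the length of the matching prefix to a remote attacker.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_compare(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where the first mismatch is.
    return hmac.compare_digest(a, b)

assert leaky_compare(b"secret", b"secret") and ct_compare(b"secret", b"secret")
assert not ct_compare(b"secret", b"secreX")
```

The same principle extends to the crypto primitives themselves: no secret-dependent branches or table lookups anywhere on the request path.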

The system level I'm unclear on, as lisper hasn't given me a full use-case list. The basics are to make sure keys are generated properly, used properly, never see the host, and come from a trustworthy TRNG. These can be met with the above methods if the identified risks are knocked out or turn out not to be a concern. Security-oriented ICs, as found in EAL5+ or EAL6+ smartcards, are much better suited to this stuff, as they put more effort into HW mitigations and assurance of the TCB implementation. The Ledger Blue wallet uses an EAL6+ IC, as an example. Consider using one of them in a future version of the product. Additionally, pull up the MCU errata page to look for any CPU errata in this specific product causing crashes or weird execution, especially in the MPU. Malware and smart pentesters have used those in the past.

Emanation or HW-level attacks will still work on this device due to the lack of mitigations. That means one use case - using it on untrustworthy computers in public - cannot be performed with this device if the person's secrets are high-value to the Five Eyes, Israeli, or Russian intelligence services known to use those attacks, plus any high-end contractors that can do the same. Security researchers are also publishing more over time, which means blackhats will eventually use them. An associate pointed out that one of the designs for a card computer (Bitcoin?) used red-black separation in the PCB itself, to a degree, to reduce the crypto zone's ability to electrically affect the untrusted zone. Physical red-black separation was used by the military for decades to reduce leaks. Cheaper solutions like those might reduce the impact a bit, but users should be warned this risk remains for high-value targets.

So, that's my brief review of this product at just the hardware and system levels. I haven't looked at the app level to determine whether how it's used or implemented meets the criteria. There are risks there, too. Might look into them. For now, I think lisper has plenty to think about on the hardware & implementation side of things. He will be busy. :)


Wow, thanks for that extensive review! I'll need some time to digest all that.


Totally unrelated, but I was looking through their repo and discovered tinyscheme:

https://github.com/Spark-Innovations/sc4-hsm/blob/master/src...

https://github.com/Spark-Innovations/sc4-hsm/blob/master/src...

Neat! Super-readable, self-contained, and from 1994. What's not to love?


TinyScheme is indeed very cool, but it's a bit of a parlor trick to run it on the HSM. I don't think you'd actually want to use that in production. But it's awfully neat to run Scheme on a gadget the size of a lipstick case. :-)

Here's the original TinyScheme site:

http://tinyscheme.sourceforge.net/home.html


> it's a bit of a parlor trick to run it on the HSM. I don't think you'd actually want to use that in production.

Challenge accepted! I bet I could port Arc to it. Hmm...

It'd work. Eventually.

Thanks for the link. I'm intrigued by the idea of lisp-powered businesses. I wonder if there's a survey of the state of non-Clojure lisps circa 2016? Nothing against Clojure, of course. It powers many businesses. I'm just curious how the rest are faring.

I think I'd like to work somewhere that intentionally uses a less-popular language just because they feel like it. Or that makes Scheme-powered devices the size of a lipstick case.


> It'd work. Eventually.

I have no doubt you could get it to run. Arc is just a thin layer on top of Scheme. The question is whether or not it's prudent to run any Lisp on a security device. I'm as big a Lisp fan as they come (look at my handle!) and running Lisp on an HSM makes me a little queasy.

> I'm intrigued by the idea of lisp-powered businesses.

My back-end server is using Clozure Common Lisp, and I certainly believe in using Lisp to cross-compile code for the HSM. There are a lot of ways to leverage Lisp besides actually running it on the device itself. But I have a lot of other Lispy tricks up my sleeve. Stay tuned!


Please write about this! It's awesome. Basically everything you're saying. If you could just put all of that into about 9300 words sometime, that'd be great. :)


OK, watch my blog.



