In a previous role we used a major vendor's HSM to protect our private keys. VERY expensive kit, more expensive than the load balancers and servers combined.
We needed to use Elliptic Curve keys for a particular customer - so it got even more expensive as we had to buy:
1. A license from the LB vendor to use the HSM
2. A license from the HSM vendor to use EC with the LB.
... even though they trumpeted these announcements of how radically great they were together we found:
1. The integration didn't work, full stop.
2. The version of OpenSSL we had to use (supplied) was about 18 months out of date
3. The specially b0rked version of OpenSSL supplied didn't support EC via a HSM
Even better - when Heartbleed came out I had a patch from Red Hat on day 1. The load balancer?
Nope - nothing on their website. I had to create a ticket, got a reply saying 'we are aware of the issue', and then the ticket was closed. When I questioned this I was told they couldn't keep it open, and that I had to create a new ticket every few weeks to find out whether they'd actually deigned to assign a bug ID to the issue.
The HSM vendor just said nothing, zero, until a new version of the firmware was silently released 4 months later.
The whole industry is shit. I'd rather have a farm of Yubikeys than one of those HSMs.
We ended up using the Nitrokey HSM platform and have been very pleased with the results.
I'd love to get hold of one of my old units from that vendor and try to attack it. I assume it hasn't been done as they're so expensive. It's basically the ultimate in security through obscurity, as it's too expensive to get one you're prepared to break.
That, and it's very important to choose the right vendor. I've worked with pretty much all of them over the years, and some of them can be a nightmare to deal with. Some of the devices themselves are INCREDIBLY frustrating and should never have seen the light of day (much like some of their sales staff!)
In the case where an HSM is used for e.g. database encryption, one absolutely still cares.
The attacker can politely ask the HSM to decrypt all the data. On the bright side, you didn't lose the key, and your HSM should at least securely log the fact that all your data just walked out the door.
For SSL offload the situation is less dire, but it is totally realistic that heartbleed might cause issues for a system with HSM assisted offload (e.g. leaking plaintext) because the protocol processing happens in software. Any sane architecture should protect the actual private key though.
In general, the right way to think about HSMs is that they protect your keys, not your data. If you care more about the data than the keys, you absolutely need to care about the security of your host.
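The "keys, not data" point can be illustrated with a toy sketch (a hypothetical `ToyHSM` class, with XOR standing in for a real cipher - this is not how any actual HSM works internally): a compromised host can never read the key, but it can still ask for decryptions all day long.

```python
import secrets

class ToyHSM:
    """Toy illustration: the key never leaves the object, but any caller
    with access can still request decryptions (XOR stands in for a real
    cipher)."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # never exported
        self.audit_log = []                  # the one consolation prize

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self._key[i % 32] for i, b in enumerate(data))

    def encrypt(self, plaintext: bytes) -> bytes:
        self.audit_log.append("encrypt")
        return self._xor(plaintext)

    def decrypt(self, ciphertext: bytes) -> bytes:
        self.audit_log.append("decrypt")
        return self._xor(ciphertext)

hsm = ToyHSM()
ct = hsm.encrypt(b"4111-1111-1111-1111")
# A rooted host can't extract hsm._key from real secure hardware,
# but it can simply ask for every record to be decrypted:
stolen = hsm.decrypt(ct)
assert stolen == b"4111-1111-1111-1111"   # the data still walks out
assert hsm.audit_log == ["encrypt", "decrypt"]  # ...but it's logged
```

The key survives the breach; the data doesn't. That asymmetry is exactly why the host's own security still matters.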
If state actors, universal MITM, or coffee-shop attacks are primary threats, then loss of the private key is a very bad outcome for SSL offload. And this is what SSL is meant to help with.
For many (most?) sites though, the issue was loss of session plaintext (e.g. API creds, basic-auth creds, PII or credit cards). For a lot of applications where just hitting the webserver remotely is a lot easier than MITM, this was actually the more severe problem.
However, you'd still be vulnerable to someone gaining root access on your Vault server. A true HSM uses physically secure memory and a dedicated cryptoprocessor to make it impossible for the secret to ever be leaked, even if the host OS is compromised. This isn't really something that's possible in software alone.
(In fact, the Vault documentation even mentions that you might still want to use a dedicated HSM for security, since they implement a pluggable mechanism for secret storage.)
That said... you could use a TPM chip or ARM TrustZone to get similar HSM-like functionality. It's still a hardware-based solution, but it's hardware that's much more affordable and possibly already inside your computer.
In contrast, while you could think of, say, an ssh key agent as a sort of "software security module", such things are inherently vulnerable to low-level attacks that side-step the intended interface. Say, a kernel exploit that grabs the decrypted key straight from the agent's memory.
So if you need to keep your secrets safe even if you unwittingly use them on a compromised OS, you can only pull that off with a hardware module.
conclusion: "The whole industry is shit."
You had a bad experience with one vendor, not with the industry.
There is an open-source toolchain for generating code for the card which works great from OS X or Linux. Contactless writers are available on eBay for like twenty bucks. And they will even work (via NFC) with Android phones.
It's a great time to be playing with contactless general-purpose smartcards.
I think I did buy one "$5" JavaCard but the shipping was like $50.
As much as I admire GlobalPlatformPro, staking your small company's security infrastructure on 1) a hobby project maintained by one guy (Martin Paljak), and 2) materials with "call us" pricing that the suppliers really don't want to sell in sub-thousands volume seems worse than just using Yubikey.
Oh, also, I couldn't find an actual working open source PKCS#11 applet that wasn't abandonware. I did get something working with GPG (though it felt like duct tape and prayer), but I couldn't get the PKI stuff to work at all.
EU might be the differentiator. I do remember seeing a lot of cards that would only ship to EU, or shipping to US was prohibitively expensive.
Keep up the awesome work.
I have been thinking for a long time about opening a webshop for "pre-verified cards" that would come with an open source SDK and as much docs as possible. Maybe that would help (but would still only ship to "western world").
Only the most persistent and knowledgeable about the ring will go through the trouble to get the data off it.
I believe you would still have to write most of the software involved yourself. A discussion of smart card login with SSH as of five years ago:
And a useful discussion on SSH and 2FA last year here on HN: https://news.ycombinator.com/item?id=10885654
Hi Jeff, good to hear from you.
There's really two parts to our signing server; the first is the
separation of signing to a separate machine with the associated
client/server and ACL controls, and the second is the interface to the
nCipher HSM. The first part we've not made open because it's quite
specific to Red Hat internal build systems and our Kerberos setup.
The second part is mostly straightforward use of nCipher utilities but
includes a patch to GnuPG which I was originally going to make public
but came into difficulty because it requires headers from the nCipher
developer kit, and linking to it, and it's under a very non-compatible
license. Given the cost of nCipher HSM units we didn't think other
projects would want that solution either.
So I'd actually prefer to point you to the work that has been done on
a signing server for Fedora, which is open. See
The Fedora folks looked into various hardware solutions too which were
cheaper and didn't have the proprietary API issues, I can't find a
link to that at the moment but Jesse Keating
should be able to give you more info.
Hope that's a good starting point...
An HSM is typically a 1-2U server that is designed to provide high throughput of cryptographic operations. It is ultimately a collection of a few high-performance servers networked together, with some custom ICs - not just a small chip. As a result, you pay up to tens of thousands of dollars for one, because it's a piece of critical infrastructure that is made to high tolerances. It's akin to buying hardware load balancers or firewall appliances.
In addition to this, the validation process of an HSM is long. An HSM company will likely have teams of hardware engineers, software engineers, and specialised cryptography teams. There are audits for things like FIPS compliance, as well as extensive pentesting by external companies. All of this is expensive, to create a device that will never be mass market.
Many HSMs also add advanced authentication capabilities, such as M-of-N access control and/or hardware authenticators (e.g. you need 3 of 5 smart cards to use the HSM). The other key feature usually found in HSMs but not smart cards is backup/cloning without exporting the key (in PKCS#11 terms). This means that the key can be moved between HSMs with all the protections in place. I've yet to see a smart card that does this.
How does this work? Can an attacker buy an identical HSM, back up the key, and restore it onto the new HSM?
The NitroKey HSM requires all previous setup (DKEK?) passwords and PINs. Anecdotal random unnamed vendor? Not so much:
They did, with undocumented commands, export the key from the device in an unencrypted format and loaded it into the other model
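The all-shares-required behaviour can be sketched as an n-of-n XOR split - a simplified illustration of the idea behind DKEK key shares, not Nitrokey's actual implementation (real HSM M-of-N schemes typically use threshold secret sharing instead):

```python
import secrets

def split_key(key: bytes, n: int) -> list:
    """n-of-n XOR sharing: every single share is needed to rebuild the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR the key with each random share; the residue is the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def join_key(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

kek = secrets.token_bytes(32)
shares = split_key(kek, 3)
assert join_key(shares) == kek       # all three shares: key restored
assert join_key(shares[:2]) != kek   # any share missing: only noise
```

With this construction an attacker who buys an identical device gains nothing: without every share (or password-derived component), the backup blob is indistinguishable from random.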
And this is exactly my point, it's all functionality which the manufacturers have decided some customer might need. I don't want that, I want a secure general-purpose Turing-complete execution environment which gives me full flexibility in what crypto, mechanisms and policy to implement. This entire industry approach is silly.
Using them from code is like using a remote API: you can't execute code, you can only make requests and receive responses, and a lot of thought is put into the interface to ensure it's as secure as possible.
As far as the physical interface, they are more like an appliance than a server, in that you don't get the normal ports you would on any other machine, the actual casing is typically alarmed so that keys are wiped when the device is opened, important circuitry is embedded in epoxy so that it's essentially impossible to attach debugging instruments without destroying it, etc.
I can't really emphasise enough how these machines are designed for security.
For instance the YubiCo HSM is just a USB device that plugs into a server.
The stuff about the NDA I do find alarming. In order to write "secure" programs for the chip on the Yubikey, you must have an NDA with the manufacturer. In fact the open source pgpcard app for the Yubikey is different than what ships with the Yubikey because they can't open source the secure bits. Which is a bit upsetting. So uploading the open source version weakens your security.
That said, having my keys there still gives me a much higher degree of security than an encrypted file on my computer. Malware may be able to get my PIN, but not my keys.
I've seen some references to anti-side-channel techniques that are trade secret or patented or both. I think you can only get them in a Windows-based "solution" deployed by consultants who are authorized resellers of these companies.
Yubikey has done an incredible thing in democratizing the technology as far as they have for smaller-scale, Linux and Mac-centric users. It's shitty, but don't blame Yubikey.
It's a fully open USB HSM based on an STM32F405 SoC. Includes an HWRNG, 1MB Flash, and 196k of RAM. Currently runs TweetNaCl and also functions as a FIDO U2F token. Technical details are here:
Currently out of stock but we will be shipping again in early January.
However, I'm wondering if there's a way I can also use it as a TOTP token to replace my Google Authenticator app for several accounts. I don't really like having it on the phone because when I lose it I need to reset everything, and it's a pain.
Is it possible with SC4 to achieve this functionality?
My use case is this. I have several business accounts (Heroku, Cloudinary, etc) that I like to enable 2FA on but which only support TOTP. I usually add those accounts to my Google Authenticator App and all is fine.
Except when I lose/wipe my phone, which happened recently. Then I have to go around resetting the 2FA setting, then re-enabling it with the new phone.
After a quick search, I found almost no solutions that fit this. Except something like . I would love to buy let's say, 2 of those for each account I want to enable 2FA on, I keep one in the safe and the other one for regular use.
So then I remembered about SC4 and thought that if I can do TOTP on it, I can just buy 2 and put all my seeds there (or so I was thinking) and use that instead of the phone.
That was my reasoning. Maybe I'm doing something wrong here, but I would love to be able to de-couple 2FA from my phone for my business accounts... I guess it's not a common use case?
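For what it's worth, the TOTP algorithm itself (RFC 6238: HMAC-SHA1 over a 30-second time-step counter) is tiny - the hard part for a USB token is knowing the current time, not the math. A minimal Python sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) applied to a time-step counter."""
    counter = for_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 appendix B test vector: key "12345678901234567890", t=59
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Which is why a device with no clock can't do TOTP on its own - it has to be told the time by the host, as the serial-port idea below suggests.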
Adding a RTC to the hardware is probably not viable. It would increase the cost too much. (If I were selling millions of units that would be different, but that hasn't happened yet.)
Another possible solution is to lobby the sites you care about to implement U2F. It's not particularly difficult. I wrote some minimalist reference code that you/they can find here:
And of course you can always point them in my direction if they want to hire a consultant :-)
date > /dev/cu.usbmodem*
and the OTP would appear on the HSM display. Would that work for you?
Just an idea, is there any USB device class that could be used to get the date and time? such that if you configure the SC4 to work with TOTP, when you plug it in a USB driver is installed that allows it to query the date and time?
Not that I know of. But it's a good idea if it turns out that such a class does exist. If you find one please let me know!
Do you guys think something like this could be patented and/or commercialized?
Bad memories of how touchy these babies were when they first came out:
I would definitely be interested if it's general purpose and open/verifiable.
I think it would make sense if you teamed up with an insurance company. That way, you could prove your technology once, to the insurance company, and sell devices with insurance against compromise. So all your potential customers wouldn't have to audit your technology, since this has been taken care of by the insurance company.
I'm hoping to focus my PhD on trying to come up with a solution to address the issue above. In other words, how can you design chips that can be verified (at all levels) without exposing your IP to a third-party? Furthermore, can this be done at runtime; e.g., could there be a syscall that queries the state of the hardware your software is running on? Cisco is one company that is particularly interested in solutions to both of these problems and is funding multiple research groups to explore these issues.
> The issue of trust is solved if you can find a trustworthy intermediary
No it isn't solved at all, because that assumption breaks down very easily, especially now that we know for a fact how invasive surveillance and backdoors have become.
For example, a Chinese company who would like to use such a product would reject a certification by a US or European insurance company, and rightly so. The same applies to a US company with Chinese insurance. The requirements for trust become exceedingly more difficult to meet once you start dealing with military contractors, law enforcement, etc. So where do you propose insuring the hardware module? The US? What if China proves to be a larger market? How about if you want to sell the tech in the EU? It's a rabbit hole of "trust" imo.
This is why an objective verification function would make things much more straightforward for chip designers and fabless semiconductor IP companies. And if you can objectively verify the hardware at runtime, you get even more useful guarantees.
I completely understand that the use of a trustworthy third-party is sometimes necessary, such as in X.509, but when it comes to circuit design, I think we need to and can do better than that.
I hope not.
I hope so.
I think Intel's SGX is a better solution for hardware-supported software isolation, but it still isn't widespread and has a number of weaknesses.
It's not really usable at all right now, but I'm cautiously optimistic that a version of Linux coming soon will support it on a CPU coming eventually.
(The relevant CPU feature is IA32_SGXLEPUBKEYHASH. Until that feature is available, SGX is every bit as worthlessly locked down as the worst GlobalPlatform gadgets IMO.)
Kaby Lake does not appear to have this feature.
Show HN thread: https://news.ycombinator.com/item?id=12053181
As I understand it, the u2f zero acts as an HID device and not as a smartcard provider, but could one modify the firmware to do that? Isn't this basically an open source yubikey you can make yourself for < $25?
The feature table also lists various supported applications, demonstrating the interest of the manufacturer in programming the device for specific applications, rather than providing a platform for others to do so. (Imagine if manufacturers of USB drives made USB drives for text files and USB drives for image files and USB drives for MP3 files and so on, and the idea of selling a USB block device was alien to these people. If you wanted to store a new kind of file on a USB drive, you had to convince the manufacturer to implement support for it.) The draw of the Nitrokey then is the possibility that the manufacturer merely incidentally allows alternate firmware to be flashed, rather than the manufacturer explicitly capitalising on the premise of an HSM as a general-purpose platform.
After musing on the comments here I wrote a followup about improv HSMs. These aren't tamperproof and as such are suitable for use in secure datacentres only. https://www.devever.net/~hl/improvhsm
There is no question that there would be value in having a hardware platform that has certain security features, but that alone doesn't meet the requirements of most users of HSMs and Smart cards. The primary use cases I've seen are allowing a third party to have assurance of protection of data stored in the device and assurance of the rules for accessing the data. In most cases this assurance comes from a combination of the hardware itself and the software/firmware running on the hardware. A hardware platform only solves half the problem that most purchasers of HSMs and smart cards are asking vendors to solve.
A hardware platform alone solves less than half the problem for many HSM buyers, myself included. There's also software, then there's support. HSMs, at least the kind we use, are niche products because relatively few people have reasons to operate them (they're not cheap either). There aren't a lot of people who know how to use them well. We need to be able to get support on the phone 24/7/365 to deal with problems that come up because like most people who own HSMs, they are critical to the functioning of our systems.
Just throwing this out there to remind people interested in open HSMs (a fine idea) that at least when it comes to most people buying HSMs today, there needs to be an organization backing the product with good support. Otherwise it's probably a non-starter for critical systems.
Fulfilling this market, with the need for the outsourcing of liability, etc. is not mutually exclusive with providing general-purpose HSMs.
The use case for the smart card is different than a HSM with FIPS 140-2 level 3 or 4 validation. The whole point is to operate in a tested, known valid state while resisting tampering. The higher level devices are filled with epoxy and have other anti-tampering features.
A smartcard is most often a form of MFA. It can be used as an HSM of sorts, but offers limited benefit for that purpose.
And if you think a secure tamperproof general-purpose Turing-complete execution environment in a compact form factor with a contactless induction-powered interface isn't an interesting opportunity for innovation, I really don't know what to say.
1) you can read/write flash via JTAG
2) you can only write flash, but not read the existing contents
3) you can't rewrite flash, neither can you read it.
You will still have to implement USB communication, but there is already a lib from STM for it. Some models also have generous flash (in MB ranges).
You can use internal SRAM, which is more than enough, and use the AES acceleration peripheral. One can attach an SD card and use SPI + DMA + the AES peripheral to shuffle data along if one needs a lot of storage.
OK, you've got it. Where should I have it sent?
I know someone who uses one of these services to get the firmware for old industrial equipment whose manufacturer had long gone out of business or stopped providing support and replacing would cost far more. Total cost was <$1000USD, although that was an older MCU.
TPMs in most cases are based on smart cards but can also be implemented in a TEE (Trusted Execution Environment). Security does not always mandate tamperproof hardware.
var article = document.querySelector('article'); article.style['max-width'] = '650px'; article.style['margin'] = '0 auto';
"Smartcards and HSMs are essentially two “brands” for the same thing: a chip which guards access to the data stored within it, and will only allow that data to be accessed in certain ways or under certain conditions. HSMs are the “enterprise” label for such devices, whereas smartcards are essentially the same thing, only cheaper."
Yubikey (mentioned in the title) is a TOTP card that works with the HSM on the far end though. They serve different purposes. You load the tokens into the HSM device.
They aren't the same thing. What am I missing?
A Yubikey consists of some secure memory to store a secret and a program, and a processor to run the program to perform computations using the secret.
The programs are different but they are basically the same thing. The author wonders why there isn't a simple general purpose gadget you can load your own program on to. As long as the action of loading a program clears existing secrets, the device could be secure.
Or to put it another way, consider a Raspberry Pi acting as a router and as a Raspberry Pi acting as a media streamer. They have completely different purposes, but they are the same thing.
I use a Yubikey that's configured both as a U2F token for Google, as well as a smartcard for PGP/SSH use.
At their core, the construction of a Yubikey, a smartcard, and an HSM are very similar. The core idea is that you have physically secure memory guarded by a dedicated cryptoprocessor, so that your secrets never have to be exposed to the host OS when performing a signing operation.
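In API terms the pattern looks like this sketch (a hypothetical `ToyToken` class, with HMAC standing in for a real signature algorithm like ECDSA): the host submits a challenge and gets back a signature, and the key itself has no accessor at all.

```python
import hashlib
import hmac
import secrets

class ToyToken:
    """Sketch of the Yubikey/smartcard/HSM pattern: sign inside the
    device, never export the key. HMAC-SHA256 stands in for e.g. ECDSA."""
    def __init__(self):
        self._key = secrets.token_bytes(32)  # no getter; never leaves

    def sign(self, challenge: bytes) -> bytes:
        # Only the signature crosses the device boundary
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, challenge: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(challenge), tag)

token = ToyToken()
tag = token.sign(b"ssh login challenge")
assert token.verify(b"ssh login challenge", tag)
assert not token.verify(b"forged challenge", tag)
```

The U2F and PGP/SSH modes on a real Yubikey are both instances of this same challenge-sign-respond shape, just with different protocols wrapped around it.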
The irony is that smart cards and even the SIM cards in your phone are already general secure computers. The problem is that only by spending a lot of money and signing your life away on an NDA can you get access to them. The result: inefficiency beyond belief.
Try looking at nRF52. It has NFC, Bluetooth radio, and hardware RNG. I'm pretty sure it has the features he asks for (firmware can lock down and block reading/writing from debug port. but debug can always do a complete erase/reset of the chip)
A future SKU will probably have USB as well.
The only problem is it is probably too power-hungry to be powered by the NFC radio waves themselves. And that is probably true for anything with a powerful ARM microcontroller.
Maybe it'd be best to use a microcontroller with ARM TrustZone as well though. That should help bring the security of the device up to a more acceptable level.
 - https://news.ycombinator.com/item?id=13031484
Please expand your acronyms.