Why I don't like smartcards, HSMs, YubiKeys, etc. (devever.net)
194 points by philipn on Nov 24, 2016 | 132 comments

HSMs are shit.

In a previous role we used a major vendor's HSM to protect our private keys. VERY expensive kit, more expensive than the load balancers and servers combined.

We needed to use Elliptic Curve keys for a particular customer - so it got even more expensive as we had to buy:

1. A license from the LB vendor to use the HSM

2. A license from the HSM vendor to use EC with the LB.

... even though they trumpeted announcements of how radically great the two were together, we found:

1. The integration didn't work, full stop.

2. The version of OpenSSL we had to use (supplied) was about 18 months out of date

3. The specially b0rked version of OpenSSL supplied didn't support EC via a HSM

Even better - when Heartbleed came out I had a patch from RedHat on day 1. The load balancer?

Nope - nothing on their website. I had to create a ticket, which got the reply 'we are aware of the issue', at which point the ticket was closed. I questioned this and was told they couldn't keep it open; I had to create a new ticket every few weeks to find out whether they'd actually deigned to assign a bug ID to the issue.

The HSM vendor just said nothing, zero, until a new version of the firmware was silently released 4 months later.

The whole industry is shit. I'd rather have a farm of Yubikeys than one of those HSMs.

We started looking into an HSM solution for a customer and quickly found out that the entrenched "big" HSM manufacturers are all about lock-in, questionable documentation (and strong insistence that we use their very expensive services rather than attempt our own integration), absurdly granular feature licensing, and _insane_ prices (on a 3-5 year product replacement 'treadmill').

We ended up using the Nitrokey HSM platform and have been very pleased with the results.

Don't forget broken HA features. HSMs are a great idea in theory, ruined by incompetent companies making them. They soon become a gun your org holds to its own head, with the outcome either a data-loss incident or a hack via their janky OpenSSL integrations.

That was exactly my problem. Also, the consultant they sent to help us with it didn't know anything about Linux when they knew that was our platform.

I'd love to get hold of one of my old units from that vendor and try to attack it. I assume it hasn't been done as they're so expensive. It's basically the ultimate in security through obscurity, as it's too expensive to get one you're prepared to break.

We have a bunch of HSMs and we had to sign a quazillion pages that forbid us, and anyone we hire, from tampering with the machines. Also, most of them you cannot buy; they remain owned by the vendor.

They're not (necessarily) shit. They're just appropriate for a VERY specific purpose. A lot of companies that use them aren't using them right, or are using them in a scenario in which they are overkill. My company (medium-sized financial firm) uses them to protect the keys for our internal PKI. For these types of high-security situations, an HSM is ideal (if configured and managed properly). But I've seen places that use them to protect the SSL keys for webservers. While capable of doing that, an HSM is really overkill for that purpose, and it often results in them being mismanaged in the long run because the wrong teams end up responsible for them.

That, and it's very important to choose the right vendor. I've worked with pretty much all of them over the years, and some of them can be a nightmare to deal with. Some of the devices themselves are INCREDIBLY frustrating and should never have seen the light of day (much like some of their sales staff!)

So how about doing a blogpost/pastebin with vendors, products and your experience?

Want to (please) name the vendor?

Nope, they'll know who they are and I'm not dealing with these things anymore.

Why do you have your HSM where Heartbleed matters?

This exactly. You use a HSM so it doesn't matter if your server gets popped.

I think one should be really careful making statements like this.

In the case where an HSM is used for e.g. database encryption, one absolutely still cares.

The attacker can politely ask the HSM to decrypt all the data. On the bright side, you didn't lose the key, and your HSM should at least securely log the fact that all your data just walked out the door.

For SSL offload the situation is less dire, but it is totally realistic that heartbleed might cause issues for a system with HSM assisted offload (e.g. leaking plaintext) because the protocol processing happens in software. Any sane architecture should protect the actual private key though.

In general, the right way to think about HSMs is that they protect your keys, not your data. If you care more about the data than the keys, you absolutely need to care about the security of your host.
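The "politely ask the HSM to decrypt" failure mode above can be sketched with a toy model (all names hypothetical, and the "cipher" is a deliberately insecure stand-in for illustration only):

```python
import hashlib

class ToyHsm:
    """Stands in for a real HSM: the key never crosses the interface."""
    def __init__(self):
        self._key = b"never-leaves-the-device"  # private to the "device"

    def decrypt(self, ciphertext: bytes) -> bytes:
        # Toy XOR stream (NOT real crypto): keystream = SHA-256(key)
        stream = hashlib.sha256(self._key).digest()
        return bytes(c ^ stream[i % 32] for i, c in enumerate(ciphertext))

    encrypt = decrypt  # XOR is its own inverse

hsm = ToyHsm()
ct = hsm.encrypt(b"card number 4111 1111")

# An attacker who pops the application server never sees hsm._key,
# but can simply submit every ciphertext for decryption:
stolen = hsm.decrypt(ct)
assert stolen == b"card number 4111 1111"
```

The key survives the compromise; the data does not. That is the whole distinction the comment above is drawing.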

I used HSMs for a code signing project, and once someone can borrow your key to sign malicious payloads, it really doesn't matter if they have your signing key or not. You still have to get a new key, and clean up the mess.

As far as I understand heartbleed, the core issue was that you could search memory for the private key and then do whatever you wanted with it. With an HSM, that wouldn't be an issue. However, it's true that you could search for other data as well and that data could be sensitive, just not quite as sensitive (depending on your industry and architecture, of course).

I agree with you -- it depends on your threat model.

If state actors or universal MITM or coffee shop attacks are primary threats, then loss of the private key is a very bad outcome for SSL offload. And this is what SSL is meant to help with.

For many (more?) sites though, the issue was loss of session plaintext (e.g api creds, basic auth creds, pii or credit cards). For a lot of applications where just hitting the webserver remotely is a lot easier than MITM, this was actually the more severe problem.

I have nation states as part of my threat model so stealing keys enables passive and active monitoring vs popping a box and getting some limited subset of connections intercepted.

OT, but I wonder how many of these LBs are/were TLS version or extension intolerant: http://www.bus.umich.edu/KresgePublic/Journals/Gartner/resea...

Is there any way to do what a HSM does, in software? (Maybe with OS support)

If your definition of an HSM is just "hides my key material from my application server", then sure. This is basically the idea behind HashiCorp's Vault.

However, you'd still be vulnerable to someone gaining root access on your Vault server. A true HSM uses physically secure memory and a dedicated cryptoprocessor to make it impossible for the secret to ever be leaked, even if the host OS is compromised. This isn't really something that's possible in software alone.

(In fact, the Vault documentation even mentions that you might still want to use a dedicated HSM for security, since they implement a pluggable mechanism for secret storage.)

That said... you could use a TPM chip or ARM TrustZone to implement true TPM functionality. It's still a hardware based solution, but it's hardware that's much more affordable and possibly already inside your computer.

take a look at my post from earlier: https://news.ycombinator.com/item?id=13031870 . I'd be interested to hear your (and others') thoughts.

Not really, no. AIUI, the whole point of having a hardware security module is that it only leaves a very narrow attack surface (ie, the protocol for communicating with the device) for malicious software to try to exfiltrate the sensitive data through, with no way to get around it - so if that interface is sufficiently hardened, you should be secure regardless of any weaknesses elsewhere in the system.

In contrast, while you could think of, say, an ssh key agent as a sort of "software security module", such things are inherently vulnerable to low-level attacks that side-step the intended interface. Say, a kernel exploit that grabs the decrypted key straight from the agent's memory.

So if you need to keep your secrets safe even if you unwittingly use them on a compromised OS, you can only pull that off with a hardware module.
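The "software security module" idea above can be sketched as a separate agent process that exposes only a sign/verify interface over a pipe. This is a toy in Python (names and protocol invented for illustration); as the comment notes, a kernel-level attacker could still read the agent's memory, which is exactly the gap a hardware module closes:

```python
import hashlib
import hmac
import os
from multiprocessing import Pipe, Process

def agent(conn):
    # The key is generated inside the agent and never sent over the pipe;
    # clients can only request operations, never the key itself.
    key = os.urandom(32)
    while True:
        req = conn.recv()
        if req is None:  # shutdown sentinel
            break
        op, data, tag = req
        mac = hmac.new(key, data, hashlib.sha256).digest()
        if op == "sign":
            conn.send(mac)
        elif op == "verify":
            conn.send(hmac.compare_digest(mac, tag))
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=agent, args=(child,))
    p.start()
    parent.send(("sign", b"payload", None))
    sig = parent.recv()
    parent.send(("verify", b"payload", sig))
    assert parent.recv() is True
    parent.send(None)
    p.join()
```

The narrow interface is the point: only requests and responses cross the pipe, much like an ssh-agent socket, but nothing here resists a root/kernel-level memory read.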

Yes, but then you lose the whole benefit of a HSM -- that keys generated in the HSM are physically protected from being extracted. Aside from that, a HSM is just an implementation of regular crypto algorithms.

Yes you can, and I know at least two companies that do, certified to the same standards as hardware HSMs. So you can certify at least as an issuing bank (and because of that, probably for a lot more).

This is an area of cryptography called whitebox cryptography. Tl;dr is that most solutions nowadays are proprietary, and all open sourced and documented whitebox cryptosystems are broken.

introduction: "we used a major vendor's HSM to protect our private keys"

conclusion: "The whole industry is shit."

You had a bad experience with one vendor, not with the industry.

Odd -- JavaCard smartcards are available for under $5, have crypto co-processors, and certainly support general-purpose code. See for example my project for KeePass, http://code.lardcave.net/2016/08/06/1/ . After programming, you can choose to lock down the card (which means you can only erase the card, not modify it). I'm using NXP chips and although I haven't investigated completely I would be highly surprised if it was not possible to get the tamper-resistant and cryptographic properties the author is after.

There is an open-source toolchain for generating code for the card which works great from OS X or Linux. Contactless writers are available on eBay for like twenty bucks. And they will even work (via NFC) with Android phones.

It's a great time to be playing with contactless general-purpose smartcards.

I tried to do something like this last summer. I found little to zero intersection between the cards that GlobalPlatformPro can personalize, and the cards that you can just buy in low volume at reasonable cost.

I think I did buy one "$5" JavaCard but the shipping was like $50.

As much as I admire GlobalPlatformPro, staking your small company's security infrastructure on 1) a hobby project maintained by one guy (Martin Paljak), and 2) materials with "call us" pricing that the suppliers really don't want to sell in sub-thousands volume seems worse than just using Yubikey.

Oh, also, I couldn't find an actual working open source PKCS#11 applet that wasn't abandonware. I did get something working with GPG (though it felt like duct tape and prayer), but I couldn't get the PKI stuff to work at all.

I've used http://www.smartcardfocus.us/shop/ilp/se~any/so~5/p/index.sh... without any trouble - the Yubico GPG applet runs on them happily, and we're using https://github.com/philipWendland/IsoApplet for PKCS#11 based signing.

This "one guy" here. I would say that the availability of open-access cards is almost excellent these days, at least in the EU. The quality of the easily available (Chinese) cards is questionable, though. But there are cards available from other vendors as well (most notably NXP).

Haha I love HN.

EU might be the differentiator. I do remember seeing a lot of cards that would only ship to EU, or shipping to US was prohibitively expensive.

Keep up the awesome work.

HN is nice indeed, but the awesome work would be more effective with less HN ;)

I have been thinking for a long time about opening a webshop for "pre-verified cards" that would come with an open source SDK and as much docs as possible. Maybe that would help (but would still only ship to "western world").

I even have a ring that runs JavaCard software on its JVM. They are sold for cheap; I bought one just for fun and history. There is an eBay listing here: http://www.ebay.co.uk/itm/JAVA-RING-RARE-Sun-Microsystems-JA...

That seems like a great way to distribute some secret. Create an Arduino reader for the ring, save the secret in a format that requires 3 out of 5 rings to reconstruct and store them apart.

Only the most persistent and knowledgeable about the ring will go through the trouble to get the data off it.
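The "3 out of 5 rings" idea is Shamir secret sharing. A minimal sketch over a prime field (illustrative only; `P`, `split`, and `reconstruct` are names I've invented here, and a real deployment would encode the secret bytes and handle share serialization):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for a 128-bit secret

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct([shares[0], shares[2], shares[4]]) == 123456789
```

Any three rings recover the secret; any two reveal nothing about it, which is what makes the scheme attractive for distributing a key across tokens.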

That is the weirdest eBay listing I've seen in a while. Also, I want to have one super-badly now. Thank you :)

I remember the JavaOne where they were given out. Oh, getting old!

I think I still have one somewhere from that JavaOne.

What can you actually do with it?

Hey Nick! I did a smartcard module for the old Cobalt Raq/Qube systems back in ~1999/2000, perl. Worked fine but few customers IIRC. Since then only used a bunch of bank ones. Can probably help you find cheapies from China, eg. try a search like https://s.taobao.com/search?q=智能卡 (92% < USD$3) with results like https://item.taobao.com/item.htm?spm=a230r.1.14.201.VcCJvD&i... (100 cards + USB programmer for ~USD$70). We are flying back to Sydney tonight... if you are back before xmas drop me a line. (NXPs only: https://s.taobao.com/search?q=智能卡+nxp)

What I don't get is why they're all virtual machine based, though I think someone posted a brief explanation for that in response to me last time this came up. Moreover it doesn't help that I am violently allergic to Java. But when it comes down to it, the smartcard industry giving people JavaCard is the same as not giving them actual, full access to the chip. It's probably the best thing going with regard to my article, but there's still something amiss there.

Why are you violently allergic to Java?

Can I get something that would allow me to unlock an SSH key by tapping a card to my phone?


I believe you would still have to write most of the software involved yourself. A discussion of smart card login with SSH as of five years ago:


And a useful discussion on SSH and 2FA last year here on HN: https://news.ycombinator.com/item?id=10885654

Did it get easier recently? I tried getting into this about six years or so ago and found it really hard to get started. Finding appropriate applets (MUSCLE applet I guess was the common crypto one), incompatible cards and readers, etc.

So FWIW, I asked about how Redhat signs their packages some time ago (about 6-7 years ago!) and was introduced to Fedora's "Signing Server" service, which is entirely open source. The email in full is:

    Hi Jeff, good to hear from you.
    There's really two parts to our signing server; the first is the
    separation of signing to a separate machine with the associated
    client/server and ACL controls, and the second is the interface to the
    nCipher HSM. The first part we've not made open because it's quite
    specific to Red Hat internal build systems and our kerberos setup.
    The second part is mostly straightforward use of nCipher utilities but
    includes a patch to GNUpg which I was originally going to make public
    but came into difficulty because it requires headers from the nCipher
    developer kit, and linking to it, and it's under a very non-compatible
    license. Given the cost of nCipher HSM units we didn't think other
    projects would want that solution either.

    So I'd actually prefer to point you to the work that has been done on
    a signing server for Fedora, which is open. See
    The Fedora folks looked into various hardware solutions too which were
    cheaper and didn't have the proprietary API issues, I can't find a
    link to that at the moment but Jesse Keating 
    should be able to give you more info.

    Hope that's a good starting point...
If anyone is interested, the project is actually named Sigul and is located at:


A blog post by Mozilla on the topic of package signing with an HSM:


I'd like to address the difference between a SmartCard and an HSM as I feel like the author doesn't acknowledge some of the practical differences. While at the core they are both "hardware security", i.e. a physical chip that implements security, an "HSM" as I have commonly seen the term used is a completely different thing in most other ways.

An HSM is typically a 1-2U server that is designed to provide high throughput of cryptographic operations. It is ultimately a collection of a few high-performance servers networked together, with some custom ICs - not just a small chip. As a result, you pay up to tens of thousands of dollars for one, because it's a piece of critical infrastructure that is made to high tolerances. It's akin to buying hardware load balancer or firewall appliances.

In addition to this, the validation process of an HSM is long. An HSM company will likely have teams of hardware engineers, software engineers, and specialised cryptography teams. There are audits for things like FIPS compliance, as well as extensive pentesting by external companies. All of this is expensive, to create a device that will never be mass market.

Both smart cards and HSMs can (and frequently do) contain FIPS-validated cryptographic modules, and both can be USB devices. What seems to set them apart is capacity and speed. An HSM can usually store dozens, hundreds, or even tens of thousands of keys and can do numerous cryptographic operations per second. Most smart cards can only store a few keys and frequently manage only 1-2 operations per second.

Many HSMs also add advanced authentication capabilities, such as M-of-N access control and/or hardware authenticators (e.g. you need 3 of 5 smart cards to use the HSM). The other key feature usually found in HSMs but not smart cards is backup/cloning without exporting the key (in PKCS#11 terms). This means that the key can be moved between HSMs with all the protections in place. I've yet to see a smart card that does this.

> The other key feature usually found in HSMs but not smart cards is backup/cloning without exporting the key (in PKCS#11 terms). This means that the key can be moved between HSMs with all the protections in place. I've yet to see a smart card that does this.

How does this work? Can an attacker buy an identical HSM, back up the key, and restore it onto the new HSM?

Theoretically yes, though you would have to explicitly switch the original HSM into backup mode, an operation that requires one or more admins to be present and strongly authenticated, most often with smart cards.

It sounds like that's exactly how it works (two months ago):


The NitroKey HSM requires all previous setup (DKEK?) passwords and PINs. Anecdotal random unnamed vendor? Not so much:

They did, with undocumented commands, export the key from the device in an unencrypted format and loaded it into the other model

When you initialize your HSM, a key is created which is owned by you and stored in a smartcard/token. If you buy an identical HSM, you should need the token to restore keys into it.

>Many HSMs also add advanced authentication capabilities, such as M-of-N access control

And this is exactly my point, it's all functionality which the manufacturers have decided some customer might need. I don't want that, I want a secure general-purpose Turing-complete execution environment which gives me full flexibility in what crypto, mechanisms and policy to implement. This entire industry approach is silly.

Any old server gives you full flexibility. One key point is, what do you mean by secure? If they give this Turing-complete machine to their customers, and then the customer uses or is given the wrong software (Murphy's Law), how is it going to be assured that this is still "secure"? And are many other paying customers looking for this Turing-complete thing they can write their own software for, or don't a lot more customers want turnkey solutions to specific problems?

While these devices are Turing-complete machines under the hood, the interface to them is incredibly strict.

Using them from code is like using a remote API: you can't execute code, you can only make requests and receive responses, and a lot of thought is put into the interface to ensure it's as secure as possible.

As far as the physical interface, they are more like an appliance than a server, in that you don't get the normal ports you would on any other machine, the actual casing is typically alarmed so that keys are wiped when the device is opened, important circuitry is embedded in epoxy so that it's essentially impossible to attach debugging instruments without destroying it, etc.

I can't really emphasise enough how these machines are designed for security.

I'm not sure there's any requirement for how much rack space an HSM takes up, or what form factor it uses.

For instance the YubiCo HSM is just a USB device that plugs into a server.

Sure, I did say "typically". Obviously the form factor does not define an HSM, but in general the term is used to refer to dedicated machines designed for high performance at scale with a large number of users, rather than a SmartCard/USB device which is, typically, for a single user.

I see, that makes sense. Cheers.

He mentions yubikey in the title, but then nowhere else. The Yubikey Neo seems to be pretty close to his target device. The Yubikey 4 removed the ability to write new apps.

The stuff about the NDA I do find alarming. In order to write "secure" programs for the chip on the Yubikey, you must have an NDA with the manufacturer. In fact the open source pgpcard app for the Yubikey is different than what ships with the Yubikey because they can't open source the secure bits. Which is a bit upsetting. So uploading the open source version weakens your security.

That said, having my keys there still gives me a much higher degree of security than an encrypted file on my computer. Malware may be able to get my PIN, but not my keys.

When I dug into the smart card scene, it was clearly a world meant for enterprises that make deals on golf courses. The official specs, SDKs, etc. for the hardware is sold for many thousands under "call us" licensing and NDA from NXP. There's some working GPG applets but the open source PKI applets are all abandonware; you're meant to license one (Windows only, of course) from Gemalto, also a "call us" deal.

I've seen some references to anti-side-channel techniques that are trade secret or patented or both. I think you can only get them in a Windows-based "solution" deployed by consultants who are authorized resellers of these companies.

Yubikey has done an incredible thing in democratizing the technology as far as they have for smaller-scale, Linux and Mac-centric users. It's shitty, but don't blame Yubikey.

Only the dev version of the Neo supported altering apps on the device. Later versions were locked -- causing Yubico to issue replacement devices when it was discovered their GPG applet contained a vulnerability a couple years ago.

This site is down so I was not able to read the original article, but I would like to take this opportunity to draw HN's attention to my current project:


It's a fully open USB HSM based on an STM32F405 SoC. Includes an HWRNG, 1MB Flash, and 196k of RAM. Currently runs TweetNaCl and also functions as a FIDO U2F token. Technical details are here:


Currently out of stock but we will be shipping again in early January.

I'm interested in buying a couple of these.

However, I'm wondering if there's a way I can also use it as a TOTP token to replace my Google Authenticator app for several accounts. I don't really like having it on the phone, because when I lose it I need to reset everything, and it's a pain.

Is it possible with SC4 to achieve this functionality?

TOTP is tricky because that requires a real time clock, which the SC4-HSM does not have. That would require adding a battery and associated power management circuitry, which would considerably increase the cost. However, you could run a driver on the host machine that provided the time to the HSM, which would still securely store your secret key. But then you could only use it on a machine that had this driver installed. So yes, it's possible, but it's not straightforward.
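Since the host supplies the time anyway, the device-side computation is just RFC 6238 TOTP over a counter derived from that timestamp. A minimal sketch of the SHA-1 variant (the function name and signature are mine, but the algorithm and the test vector come from RFC 6238):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP (HMAC-SHA1) for a caller-supplied Unix timestamp."""
    counter = unix_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# T = 59s, 8 digits, SHA-1 -> "94287082"
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

The secret stays on the HSM; only the timestamp crosses the interface, which is why a host-side driver (or even a shell one-liner) is enough to make TOTP work without an RTC.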

Also... why do you want TOTP rather than U2F? U2F is better in every way (unless you don't have access to a USB port).

Well, mostly because of those services that only implement TOTP, in particular those that I can use with my Google Authenticator App.

My use case is this. I have several business accounts (Heroku, Cloudinary, etc) that I like to enable 2FA on but which only support TOTP. I usually add those accounts to my Google Authenticator App and all is fine.

Except when I lose/wipe my phone, which happened recently. Then I have to go around resetting the 2FA setting, then re-enabling it with the new phone.

After a quick search, I found almost no solutions that fit this. Except something like [0]. I would love to buy let's say, 2 of those for each account I want to enable 2FA on, I keep one in the safe and the other one for regular use.

So then I remembered about SC4 and thought that if I can do TOTP on it, I can just buy 2 and put all my seeds there (or so I was thinking) and use that instead of the phone.

That was my reasoning. Maybe I'm doing something wrong here, but I would love to be able to de-couple 2FA from my phone for my business accounts... I guess it's not a common use case?

[0] https://www.protectimus.com/protectimus-slim-mini

I see. Yes, you are right that one SC4-HSM can store multiple keys. The only missing piece is the time. The simplest solution is to feed the time into the HSM from some external source, like a custom driver or a command line. Would that work for you?

Adding a RTC to the hardware is probably not viable. It would increase the cost too much. (If I were selling millions of units that would be different, but that hasn't happened yet.)

Another possible solution is to lobby the sites you care about to implement U2F. It's not particularly difficult. I wrote some minimalist reference code that you/they can find here:


And of course you can always point them in my direction if they want to hire a consultant :-)

Many more services implement TOTP and not U2F, so this would tend to be seen as a useful thing for end users even if the technology is unattractive.

Actually, I just thought of a way you could make this work with a terminal command instead of a driver. You'd have to type something like:

  date > /dev/cu.usbmodem*

and the OTP would appear on the HSM display. Would that work for you?

Yeah, I guess that could work since I usually don't log in to my business accounts from anywhere else but my own computer, so even if it needs a bit of configuration it's not that bad.

Just an idea, is there any USB device class that could be used to get the date and time? such that if you configure the SC4 to work with TOTP, when you plug it in a USB driver is installed that allows it to query the date and time?

> is there any USB device class that could be used to get the date and time?

Not that I know of. But it's a good idea if it turns out that such a class does exist. If you find one please let me know!

The issue of affordable HSM/TPM for general purpose use is something my research group is trying to solve. We have most of the theory down, but the implementation is a work in progress. The key point is trying to maintain full physical isolation from the CPU and OS, while also providing general low-level computing capabilities.

Do you guys think something like this could be patented and/or commercialized?

IBM's Citadel project and Doug Tygar's group at CMU researched crypto co-processors about 20 years ago. You can still find Bennet Yee's PhD thesis online. It and its bibliographical references gave a pretty good overview of the lay-of-the-land at that time. You'd think that the theory might have progressed some since then, but theory doesn't progress as fast as a front-end development framework...

Bad memories of how touchy these babies were when they first came out:

  - http://www-03.ibm.com/security/cryptocards/pciecc2/overview.shtml
EDIT: What the hell. Here's Bennet Yee's PhD thesis. So you don't have to convert it from PostScript. (That said - this is a nasty image scan - you might want to do that anyway!)

  - http://www.dtic.mil/dtic/tr/fulltext/u2/a281255.pdf
I don't think I'm going to be able to dig around for the extant Usenix papers from that era on the topic right now.

I read through Tygar and Yee's paper on secure boot during my research. Their work was very good to be frank. They foresaw most of the recent developments in TPM design, which I thought was quite impressive.

Oh - cool. It wasn't clear to me from your top-level comment how far back you'd gone. Even though it's old, I still think it's pretty good, and thought you should be made aware if you weren't already. Good to see that you're "on it"!

Thanks for mentioning it regardless! I guess you are bound to miss something when sifting through past work on a topic.

> Do you guys think something like this could be patented and/or commercialized?

I would definitely be interested if it's general purpose and open/verifiable.

I think it would make sense if you teamed up with an insurance company. That way, you could prove your technology once, to the insurance company, and sell devices with insurance against compromise. So all your potential customers wouldn't have to audit your technology, since this has been taken care of by the insurance company.

Yes, that is the key issue: companies won't use such a module unless it is verifiable and does what it claims to do. I guess the first step would be to propose an open standard and a sample implementation of such a module. I don't think we're there yet though.

I'm hoping to focus my PhD on trying to come up with a solution to address the issue above. In other words, how can you design chips that can be verified (at all levels) without exposing your IP to a third-party? Furthermore, can this be done at runtime; e.g., could there be a syscall that queries the state of the hardware your software is running on? Cisco is one company that is particularly interested in solutions to both of these problems and is funding multiple research groups to explore these issues.

The issue of trust is solved if you can find a trustworthy intermediary, like an insurance company, to financially guarantee your products against compromise. Insurance is the tried and true method for transferring risk from one party to another. I'm certain there are plenty of customers who would find insurance by, e.g., AIG sufficient to do business, without needing to know the internals of the hardware you produce. I know I would.

Edit: I now see that you are referring to someone attacking the module, rather than the module having a backdoor. I agree that insurance is a good way to avoid financial loss, but it doesn't at all address the backdoor issue.

> The issue of trust is solved if you can find a trustworthy intermediary

No it isn't solved at all, because that assumption breaks down very easily, especially now that we know for a fact how invasive surveillance and backdoors have become.

For example, a Chinese company who would like to use such a product would reject a certification by a US or European insurance company, and rightly so. The same applies to a US company with Chinese insurance. The requirements for trust become exceedingly more difficult to meet once you start dealing with military contractors, law enforcement, etc. So where do you propose insuring the hardware module? The US? What if China proves to be a larger market? How about if you want to sell the tech in the EU? It's a rabbit hole of "trust" imo.

This is why an objective verification function would make things much more straightforward for chip designers and fabless semiconductor IP companies. And if you can objectively verify the hardware at runtime, you get even more useful guarantees.

I completely understand that the use of a trustworthy third-party is sometimes necessary, such as in X.509, but when it comes to circuit design, I think we need to and can do better than that.

Since you're doing research in this area, have you taken a look at something like https://www.dyadicsec.com/ at all? I looked at the whitepaper a while ago, and it seemed to make sense, though I was way out of my depth.

No, I have not actually. I will definitely check the whitepaper out. Thanks!

> patented

I hope not.

> commercialized

I hope so.

I think Intel has done something similar to what you are looking for: http://www.intel.com/content/www/us/en/architecture-and-tech...

No, that's different. The crypto is done in hardware, yes, but keys and plaintext are still seen in software, and software is inherently untrusted. Our aim is to support use cases where you do not want software to handle anything.

I think Intel's SGX is a better solution for hardware-supported software isolation, but it still isn't widespread and has a number of weaknesses.

> I think Intel's SGX is a better solution for hardware-supported software isolation, but it still isn't widespread and has a number of weaknesses.

It's not really usable at all right now, but I'm cautiously optimistic that a version of Linux coming soon will support it on a CPU coming eventually.

(The relevant CPU feature is IA32_SGXLEPUBKEYHASH. Until that feature is available, SGX is every bit as worthlessly locked down as the worst GlobalPlatform gadgets IMO.)

Kaby Lake does not appear to have this feature.

Has the author seen the SC4-HSM I wonder? https://sc4.us/hsm/

Show HN thread: https://news.ycombinator.com/item?id=12053181

How does something like the U2F Zero[1] compare?

As I understand it, the U2F Zero acts as an HID device and not as a smartcard provider, but could one modify the firmware to do that? Isn't this basically an open-source Yubikey you can make yourself for < $25?

1. https://github.com/conorpp/u2f-zero

This is the key quote:

  The feature table also lists various supported 
  applications, demonstrating the interest of the 
  manufacturer in programming the device for specific 
  applications, rather than providing a platform for others 
  to do so. (Imagine if manufacturers of USB drives made USB 
  drives for text files and USB drives for image files and 
  USB drives for MP3 files and so on, and the idea of selling 
  a USB block device was alien to these people. If you wanted
  to store a new kind of file on a USB drive, you had to 
  convince the manufacturer to implement support for it.) The 
  draw of the Nitrokey then is the possibility the 
  manufacturer merely incidentally allows alternate firmware 
  to be flashed, rather than the manufacturer explicitly 
  capitalising on the premise of an HSM as a general-purpose 
  computing platform.

Great point, and completely lost on manufacturers.

I'm the author of the article.

After musing on the comments here I wrote a followup about improv HSMs. These aren't tamperproof and as such are suitable for use in secure datacentres only. https://www.devever.net/~hl/improvhsm

The author brings up many reasonable points but seems to mix issues of HSMs & Smart Cards not providing a generic open hardware platform with possible security problems of a platform.

There is no question that there would be value in having a hardware platform that has certain security features, but that alone doesn't meet the requirements of most users of HSMs and Smart cards. The primary use cases I've seen are allowing a third party to have assurance of protection of data stored in the device and assurance of the rules for accessing the data. In most cases this assurance comes from a combination of the hardware itself and the software/firmware running on the hardware. A hardware platform only solves half the problem that most purchasers of HSMs and smart cards are asking vendors to solve.

"A hardware platform only solves half the problem that most purchasers of HSMs and smart cards are asking vendors to solve."

A hardware platform alone solves less than half the problem for many HSM buyers, myself included. There's also software, then there's support. HSMs, at least the kind we use, are niche products because relatively few people have reasons to operate them (they're not cheap either). There aren't a lot of people who know how to use them well. We need to be able to get support on the phone 24/7/365 to deal with problems that come up because like most people who own HSMs, they are critical to the functioning of our systems.

Just throwing this out there to remind people interested in open HSMs (a fine idea) that at least when it comes to most people buying HSMs today, there needs to be an organization backing the product with good support. Otherwise it's probably a non-starter for critical systems.

These are valid concerns, but they're not good reasons not to provide general-purpose compute HSMs. If you want an external company to have certified the software (the policy) which goes on a general-purpose HSM, that may be entirely sensible from a business perspective; I'm sure if general-purpose HSMs were a thing, with standard HSM platforms, such certified programs would be available on the market. Many would probably be made by the manufacturers of the HSMs themselves, providing a vertically integrated solution, support-wise.

Fulfilling this market, with the need for the outsourcing of liability, etc. is not mutually exclusive with providing general-purpose HSMs.

The author is not thinking about why these things are built and marketed as they are.

The use case for the smart card is different than a HSM with FIPS 140-2 level 3 or 4 validation. The whole point is to operate in a tested, known valid state while resisting tampering. The higher level devices are filled with epoxy and have other anti-tampering features.

A smartcard is most often a form of MFA. It can be used as an HSM of sorts, but offers limited benefit for that purpose.

Yes, I know. I want those anti-tamper features, and I want to be able to take advantage of them to secure cryptographic policies designed and coded by myself or other people in the open source community. And of course I would be free to audit that code before making use of it.

And if you don't think a secure tamperproof general-purpose Turing-complete execution environment in a compact form factor with a contactless induction-powered interface is an interesting opportunity for innovation, I really don't know what to say.

What's the problem with taking a $10 STM32F discovery board and using it as a TPM? There are different flash protection levels:

1) you can read/write flash via JTAG

2) you can only write flash, but not read the old one

3) you can neither rewrite flash nor read it.

You will still have to implement USB communication, but there is already a lib from ST for it. Some models also have generous amounts of flash (in the MB range).

You can use internal SRAM, which is more than enough, and use the AES acceleration peripheral. One can attach an SD card and use SPI + DMA + the AES peripheral to shuffle data along if one needs a lot of storage.
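The three levels above can be pictured as a simple access-rule table. Here is a toy JavaScript model of those semantics; it is illustrative only (on real STM32 parts this is controlled by the readout-protection option bytes, and the class and method names here are invented for the sketch):

```javascript
// Toy model of the three flash-protection levels listed above.
// Illustration only: it simulates just the access rules, not hardware.
class FlashModel {
  constructor(level) {
    this.level = level;   // 1, 2 or 3, as in the list above
    this.contents = null;
  }
  jtagRead() {
    if (this.level >= 2) throw new Error('readout protected');
    return this.contents;
  }
  jtagWrite(firmware) {
    if (this.level >= 3) throw new Error('flash locked');
    this.contents = firmware; // level 2: write allowed, read-back not
  }
}

const unprotected = new FlashModel(1);
unprotected.jtagWrite('fw-v1');
console.log(unprotected.jtagRead()); // "fw-v1": fully open

const writeOnly = new FlashModel(2);
writeOnly.jtagWrite('fw-v2');        // updating firmware still works
// writeOnly.jtagRead() would throw: the old image cannot be dumped

const locked = new FlashModel(3);
// locked.jtagWrite(...) and locked.jtagRead() both throw
```

Whether the real silicon actually enforces these rules against glitching and decapping is a separate question, as the replies below point out.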

Send me a locked STM32F1 chip and I'll send you back the binary contained in it. Not kidding. The problem with general purpose MCUs is that they are trivial to break.

Can you do that with an F4? If so, I'd like to take you up on your offer. I'll even pay you if you tell me how you did it.

Security on F4 is way better. I haven't found a way to circumvent the JTAG fuse - but to be honest, I never really tried as I don't have access to proper glitching hardware.

What would you need?

A ChipWhisperer and some boring winter weekends, probably. I don't break these for cash or fame; it's just for fun.

> A ChipWhisperer

OK, you've got it. Where should I have it sent?

I would be very interested in information on how you can do this :) We already use a Cypress PSoC 5LP MCU with a flash protection feature as a license manager (essentially a TPM with encryption keys in locked flash) because we had a surplus of those. I am just trying to estimate whether it is dangerous and should be changed. Our use case is to prevent customers (mostly industrial) from pirating firmware for new devices.

Various companies in China and Russia will read locked MCUs for a fee. Search for "MCU Break".

I know someone who uses one of these services to get the firmware for old industrial equipment whose manufacturer had long gone out of business or stopped providing support and replacing would cost far more. Total cost was <$1000USD, although that was an older MCU.

I had to look up TPM, I didn't realize there was an open standard for crypto processors. Is it safe to say that most HSMs are TPMs?

They are quite different beasts. HSMs offer tamper protection through various physical means, like wrapping all components in resin or implementing self-destruction mechanisms. Smart cards protect their secrets against all kinds of side-channel attacks, for example attempts to read keys from a thermal signature.

TPMs in most cases are based on smart cards but can also be implemented in a TEE (Trusted Execution Environment). Security does not always mandate tamperproof hardware.

Thanks for the clarification.

Since zooming won't fix the line width, here's a quick fix - paste into the console:

    var article = document.querySelector('article');
    article.style['max-width'] = '650px';
    article.style['margin'] = '0 auto';

The OP states:

"Smartcards and HSMs are essentially two “brands” for the same thing: a chip which guards access to the data stored within it, and will only allow that data to be accessed in certain ways or under certain conditions. HSMs are the “enterprise” label for such devices, whereas smartcards are essentially the same thing, only cheaper."

Yubikey (mentioned in the title) is a TOTP card that works with the HSM on the far end, though. They serve different purposes. You load the tokens into the HSM device.

They aren't the same thing. What am I missing?

An HSM consists of some secure memory to store a secret and a program, and a processor to run the program to perform computations using the secret.

A Yubikey consists of some secure memory to store a secret and a program, and a processor to run the program to perform computations using the secret.

The programs are different but they are basically the same thing. The author wonders why there isn't a simple general purpose gadget you can load your own program on to. As long as the action of loading a program clears existing secrets, the device could be secure.

Or to put it another way, consider a Raspberry Pi acting as a router and as a Raspberry Pi acting as a media streamer. They have completely different purposes, but they are the same thing.

I see, thanks for the clarification. That makes sense.

A Yubikey can be run either in TOTP mode, in U2F mode, or as a generic smartcard. (Or a combination of the above.) It's configurable.

I use a Yubikey that's configured both as a U2F token for Google, as well as a smartcard for PGP/SSH use.

At their core, the construction of a Yubikey, a smartcard, and an HSM is very similar. The core idea is that you have physically secure memory guarded by a dedicated cryptoprocessor, so that your secrets never have to be exposed to the host OS when performing a signing operation.

This is a somewhat older rant (at least 2015, I think). And the title is misleading. It is really "Why I wish there were a product similar to but different than smartcards, HSMs, YubiKeys, etc." Because there isn't much in there that argues why smartcards (or yubikeys, etc.) are not good at what they do. The author just wants a different thing, and doesn't understand why this fantasy product doesn't exist.

you missed the point, which is easy because the author is mostly rambling :)

the irony is that smart cards and even the SIM cards in your phone are already general-purpose secure computers. the problem is that only by spending a lot of money and signing your life away on an NDA can you get access to them. the result: inefficiency beyond belief.

How is a title "Why I don't like X" misleading if the article explains what properties of all existing examples of X the author doesn't like?

As with many things, the business decision that makes sense usually overrides the technical decision that makes sense.

It's not exactly in small card form, but someone looking for a general-purpose programmable tamper-proof computer might be interested in the ORWL: https://www.crowdsupply.com/design-shift/orwl

For the microchip itself, I'm pretty sure this already exists.

Try looking at nRF52. It has NFC, Bluetooth radio, and hardware RNG. I'm pretty sure it has the features he asks for (firmware can lock down and block reading/writing from debug port. but debug can always do a complete erase/reset of the chip)

A future SKU will probably have USB as well.

The only problem is that it is probably too power-hungry to be powered by the NFC radio waves alone. And that is probably true for anything with a powerful ARM microcontroller.

Maybe it'd be best to use a microcontroller with ARM TrustZone as well though. That should help bring the security of the device up to a more acceptable level.

What about the FST-01? It's what I use and it works pretty well in my experience. http://wiki.seeedstudio.com/wiki/FST-01

FST-01 is an STM32F1-based board, which according to [1] can be dumped fairly cheaply.

[1] - https://news.ycombinator.com/item?id=13031484

HSM: Hardware security module

Please expand your acronyms.

Isn't the Fidesmo card what he wants? You can write JavaCard applications for it and run them within the secure element, as far as I know.

Speaking of which, whatever happened to Google's Project Vault? Did it die after Mudge quit Google? It looked so promising.


Would that answer OP's needs: https://www.ledgerwallet.com/products/9-ledger-blue ?

Could something like this be built using ARM's TrustZone features?

Nope, TrustZone is not tamperproof and cannot resist determined attackers who have access to the hardware. HSMs and smart cards are designed precisely to cover this use case.

I am curious to hear why the device you are looking for should be a compact and portable device. You listed it as your very first requirement so it must be a must-have.

The statement that "all HSMs and smartcards are the same" shows limited understanding. High-end HSMs can handle thousands of operations per second; a smartcard only a few.

why would this distinction result in a new name? my consumer switch and an enterprise switch are both switches, because that's what they do. or e.g. all cars, it doesn't matter if it's a ferrari or a lada, it's still a car.

Does this guy realize you can buy PCI HSM devices that fit what he describes?

Post first, UPDATE later.

Do you also not like CSS?

I'm quite a big fan of the OP's work and I think that if they take some time with JavaScript they will change their "Let me be clear about this: JavaScript sucks. It’s not the worst, but it’s also not by any means good" opinion.

Check out JavaScript the Good Parts. It's a great language hidden under a layer of horrible horrible design choices.
