That's really neat, but I'm curious about what interdiction threat models this does or doesn't help with. For example, could the people performing the interdiction have sophisticated enough hardware-tampering capabilities that they could modify the key or extract its secrets and then send it on its way? Could they have a small enough chip that they could place in the USB connector itself to do some kind of malicious thing later on?
Are we to assume the CIA and its ilk don't have operatives inside these organisations?
Of course, if you're sufficiently organised, you'd have low level grunts take on all the risk.
Or did staying up for way too long selling methamphetamine for the better part of a decade make me way too paranoid?
edit: the hardware design is nearly identical, but not the firmware. See my follow-up post.
> One of the most exciting opportunities the Librem Key opens up to us is in integrating with our tamper-evident Heads BIOS to provide cutting-edge tamper-evident security but in a convenient package that doesn’t exist anywhere else.
> We have worked with Nitrokey to add a custom feature to our Librem Key firmware specifically for Heads. This custom firmware along with a userspace application allows us to store the shared secret from the TPM on the Librem Key instead of on a phone app. Then when Heads boots, if the BIOS hasn’t been tampered with the TPM will unlock its copy of the shared secret, and Heads will send the 6-digit code over to the Librem Key. If the code matches what the Librem Key itself generated, it flashes a green light. If the codes don’t match, it flashes a red light.
I've been working on something about Heads (a minimal Linux-based secure bootloader) since January, too. And I can say the boot verification used by Heads is sound and solid. The implementation is basically a verified/measured boot scheme with TOTP.
During initialization, you generate a random TOTP key, add the key to your TOTP authentication device (e.g. Google Authenticator on a mobile phone), and "seal" the key in your TPM, along with your boot "measurements". During the boot process, these measurements, i.e. SHA hashes of various information about the hardware, software, and firmware configuration, are passed to the TPM. If the configuration has changed, the TPM refuses to release the TOTP secret; otherwise, the key is released and used by a shell script in Heads to calculate the one-time code.
If the number on your mobile phone matches the number on the screen, that proves the system has not been tampered with.
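The code Heads compares is a standard RFC 6238 TOTP value (HMAC-SHA1 with a 30-second step), which its shell scripts compute after the TPM releases the secret. Here is a minimal Python sketch of that computation, my own illustration rather than the actual Heads code; the secret below is the RFC test vector, not a real sealed key:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, t: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1) for Unix time t."""
    counter = struct.pack(">Q", t // step)           # 8-byte moving factor
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", 59))  # -> 287082
```

The TPM sealing/unsealing around this is the interesting part; the arithmetic itself is the same one your phone's authenticator app performs, which is why the two numbers can be compared at all.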
Read the code here:
(yes, it's all shell scripts... I'm not sure whether this is a security issue, but the design was probably inspired by the initrd/sysvinit shell scripts)
Obviously, this means that every time you boot your computer, you have to check the 6-digit code against your phone before booting the actual Linux kernel or entering your full-disk encryption key. To me, the Librem Key improves on this by automating the verification (the Nitrokey already has TOTP functionality) with a simple challenge-response protocol.
If you want to learn more, make sure to read about Heads first.
This presentation is a good start.
1. The trustworthiness and security of the TPM. The Free Software community has historically rejected TPMs because of the DRM aspect of "trusted" computing. But to this day, the complete DRM dystopia (where all proprietary software runs inside the Intel Management Engine and performs DRM-related cryptography in TPM black boxes; was it called Microsoft Palladium?) fortunately never materialized. So now even RMS acknowledges that there is no actual reason not to implement free software security tools on top of the TPM.
Another concern is a potential backdoor, but even if there is one, using the TPM still improves security compared to a completely unprotected machine. Perhaps there can eventually be a Free Hardware TPM, though not in the foreseeable future, and Heads's use of the TPM is still a big step forward for security.
2. Completeness of measurements. If some software/hardware changes are not measured, or can be replayed by the attacker, the attack will not be detected. But the measurements are done collaterally, by coreboot early in boot and by Heads later, and to me they look fairly extensive. Maybe there is still room for attacks, but it would be difficult; pentesters are always welcome. BTW, a man-in-the-middle attack on the entire verification process is possible, but it is mostly of theoretical interest, as the attacker has to sit between you and your screen.
3. Another general issue is the security of the TOTP seed, e.g. if your Google Authenticator is hacked. The problem is somewhat mitigated by using a Nitrokey/Librem Key, but the TOTP code still runs on a generic STM32F1 MCU, not on the OpenPGP card, and the STM32F1 is known for its lack of tamper resistance. Because of NDAs, though, there are currently no good alternatives. Still, just like the TPM, I think it greatly improves the current situation, so let's use it; it still has problems, and in the future we may do better.
4. Automation. The Librem Key automates the challenge-response, unlike the original Heads, which prints the code on the screen. In the original Heads, if Heads itself is tampered with, the user will notice an incorrect or missing code. But with automation, perhaps the attacker now has a way to trick the user? This needs checking.
> and even then i'm an amateur at cryptography stuff
Me too. I'm also working on a similar security token in my spare time. Hopefully I can submit a Show HN before the New Year; you may find it interesting to read.
Finally, all of my descriptions are based on my first-hand impressions, not necessarily facts, and totally unverified. Make sure to check the primary sources!
That's optimistic. It's reasonable to assume that Intel's Management Engine has been penetrated by NSA, the CIA, the FSB, and the PLA's Third Department. It mostly relies on security through obscurity, which can be overcome with money.
What I was addressing there is a different issue: where the general objection to TPMs in the FOSS community came from. In the original vision of "Trusted Computing" around 2006, it was expected that TPM- and ME-based DRM would prevail on proprietary systems and lock down every piece of media, software, and files.
You can read Lucky Green's presentation from 2002 to understand more about the situation of that time. https://web.archive.org/web/20180416211840/https://cypherpun...
> You could create Word documents that could be read only in the next week
- Steven Levy
> Fritz Hollings Bill: S. 2048: Plug “analog hole” with 2048-bit RSA: Monitor out, Video out, Audio out. Microsoft: Additionally encrypt keyboard input to PC. S. 2048 makes it illegal to sell non-TCPA compliant computers: A $500,000 fine and 5 years in prison for the first offense; double that for each subsequent offense.
But fortunately, THEY were way too optimistic...
> As of 2015, treacherous computing has been implemented for PCs in the form of the “Trusted Platform Module”; however, for practical reasons, the TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. Thus, companies implement DRM using other methods. At present, “Trusted Platform Modules” are not being used for DRM at all, and there are reasons to think that it will not be feasible to use them for DRM. Ironically, this means that the only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer.
> Therefore, we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.
I actually don't know whether he is or he isn't an expert, but the likely basis for the claim that he is --- maintainer of GnuPG --- isn't a valid one. People have funny ideas of where crypto expertise comes from.
(Disclaimer, or something: I consistently and reliably tell clients that while I'm comfortable testing certain cryptosystems, I am not qualified to design them. ['lvh, one of my partners, is, but he has formal training and I don't.])
edit: okay, found. https://puri.sm/posts/purism-and-nitrokey-partner-to-build-p...
This parent comment looks like a refutation. But the thread never claimed Werner Koch was an "amateur", and in fact didn't even mention him, so I don't know what the point of the parent comment is. If it's not a refutation, but a point about the product's security, it would be clearer to phrase it positively, "Werner Koch is an expert", to make the point explicit.
If it doesn't, I don't understand the product. It's more expensive than a Yubikey 4, and bigger, and the Y4 does U2F along with the smart card RSA stuff.
We use Y4s in our practice, and my experience is that for every time I use the smart card RSA stuff (for instance, to SSH into something with a long-term RSA key), I use the U2F feature 10 times.
I’d also be curious about the level of certification, if any, that they can achieve by being open.
Disclaimer: I’m working on Solo (https://solokeys.com), an open source fido2 security key. Cost will be $15 and less during Kickstarter, and we’ll apply for FIDO2 certification at the next round in mid November.
PIV applet also supports P-256: https://developers.yubico.com/PIV/Introduction/YubiKey_and_P...
> ... for every time I use the smart card RSA stuff (for instance, to SSH into something with a long-term RSA key), I use the U2F feature 10 times.
Does that mean you use SSH rarely, login to U2F sites frequently or use different keys for SSH (e.g. not on Yubikey)?
The thread is actually better than the linked post because it tells you about ways you can (relatively easily) set up systems without long-term ssh keys.
Philosophy 1: Move to hardware SSH keys. That's what Y4s and Nitrokeys represent: keys where, if your machine is compromised, the SSH key itself can't easily be stolen.
Philosophy 2: Move to SSH CAs that issue short-lived certificates, and use U2F to authenticate issuance. This is roughly how you'd do a modern integration of an SSO system (like Okta or GSuite) with your SSH services.
SSH CAs have more steam behind them, and are desirable for a bunch of reasons that hardware SSH keys don't really address.
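As an illustration of the SSH CA philosophy, issuance of a short-lived certificate can be sketched roughly like this (a hypothetical sketch shelling out to OpenSSH's ssh-keygen; the principal name, the one-hour validity, and the idea that a U2F/SSO check gates the call are my assumptions, not anything from the thread):

```python
import subprocess

def issue_short_lived_cert(ca_key: str, user_pubkey: str, principal: str) -> str:
    """Sign user_pubkey with the CA key; the certificate expires in one hour.

    In a real deployment this call would sit behind an SSO/U2F check.
    """
    subprocess.run(
        ["ssh-keygen",
         "-s", ca_key,                  # CA private key
         "-I", f"{principal}-session",  # certificate identity (shows up in logs)
         "-n", principal,               # login name the cert is valid for
         "-V", "+1h",                   # short lifetime: the whole point
         user_pubkey],
        check=True, capture_output=True)
    # ssh-keygen writes the certificate next to the public key
    return user_pubkey[:-len(".pub")] + "-cert.pub"
```

Servers then only need a `TrustedUserCAKeys` line in sshd_config rather than per-user authorized_keys entries, and a stolen certificate expires on its own, which is one of the reasons CAs address things hardware keys don't.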
Since the primary function (beyond U2F, which these keys don't do) of a hardware key is to protect your SSH key, I guess it's worth considering how they fit into the modern SSH worldview or whatever you want to call it.
U2F is "just" a hardware P-256 key (or rather a set of keys, one per origin, but ignoring privacy issues they are comparable; of course I'm assuming the same setup, i.e. touch-to-use set to required).
(I've set up SSH CAs manually and I'm familiar with low level U2F for that matter).
If this does U2F, I'd be interested in knowing. I'm not saying it's impossible.
It'd be really cool to have a similarly priced "open" solution which comes in a minimally invasive package.
For reference: https://i1.wp.com/vaultumllc.com/wp-content/uploads/2017/03/...
I haven't carried a thumb drive in a while, but I personally have lost a couple before...
- For use as a key storage device / GPG smartcard, you should have the usual contingencies in place (e.g. backups of decryption keys, alternative signing/auth keys). Only GPG nerds are likely to use this feature.
- For MFA use, you can register an additional device as another acceptable factor, e.g. a second key or an authenticator app on your phone.
The Heads boot validation stuff is non-blocking; you can still boot into a system without verifying the boot partition/BIOS. Alternatively, there’s no reason you couldn’t fall back to TOTP on a phone, though I’m not sure if the interface supports that currently.
Source: I put everything on a YubiKey, then lost it.
* You buy multiple devices and configure your systems to honor both, as a backup plan.
* You back up your keys or their artifacts to paper or a small drive and keep that somewhere safe.
If you do neither of those things, and you lose the hardware key, you are either (a) boned or (b) using an insecure system where the key is just theater.
Some services allow you to configure multiple separate keys, so different private keys. Lastpass for example.
- I often log in to web services on my phone, and I don't have a USB adapter cable with me to use the key, and
- because always having to carry a device with me sounds cumbersome. My phone is with me anyway.
Is there some other use case I am overlooking? I am eagerly awaiting browser vendors to implement these use cases: https://w3c.github.io/webauthn/#use-cases
U2F uses a challenge-response protocol that should (in theory) make it impossible to MITM. Google has said that they have had zero successful phishing attacks since they switched to using U2F devices (I think Yubikeys) for all employees.
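The phishing resistance comes from origin binding: the browser tells the token which origin it is actually talking to, and that origin is covered by the signature, so a response captured at a look-alike domain fails verification at the real site. A toy model of just that property (plain HMAC standing in for the real per-origin P-256 signature; origins and key sizes are illustrative):

```python
import hashlib
import hmac
import os

def token_sign(key: bytes, origin: str, challenge: bytes) -> bytes:
    """Toy stand-in for the U2F authenticator: 'sign' origin || challenge."""
    return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(key: bytes, origin: str, challenge: bytes, sig: bytes) -> bool:
    """The real site only accepts signatures over its own origin."""
    return hmac.compare_digest(sig, token_sign(key, origin, challenge))

key = os.urandom(32)
challenge = os.urandom(16)

# Honest login: the browser reports the real origin to the token.
sig = token_sign(key, "https://example.com", challenge)
assert server_verify(key, "https://example.com", challenge, sig)

# Phishing MITM: the browser reports the phisher's origin, so the
# relayed response fails verification at the real site.
phished = token_sign(key, "https://examp1e.com", challenge)
assert not server_verify(key, "https://example.com", challenge, phished)
```

A password typed into the phishing page transfers intact; the signed origin does not, which is what makes the relay attack fail.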
To be sure, I'm not sure if it's that important. Mostly, I'm trying to understand where the market is going to settle and who is making devices for that market.
The problem is just that we don't live in a world where most people can use Heads.
You either have to buy older hardware that is compatible with coreboot or buy a Librem laptop.