You can use TPM chips to RSA sign arbitrary data, and use that to authenticate SSH:
Even under Windows:
The secret is using TSS_HASH_OTHER as the hash algorithm, which tells the TPM "it's already hashed". Whether it actually is hashed, or just raw input, is up to you.
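Conceptually, "it's already hashed" just means the device performs the raw RSA operation over whatever digest the caller hands it. A toy sketch with textbook RSA (tiny insecure key, no PKCS#1 padding, and nothing like the real TSS API) to show the caller-hashes / signer-signs split:

```python
import hashlib
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(bits):
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

# Toy RSA key -- far too small for real use, illustration only.
e = 65537
while True:
    p, q = gen_prime(256), gen_prime(256)
    phi = (p - 1) * (q - 1)
    if p != q and phi % e != 0:  # e is prime, so this guarantees gcd(e, phi) == 1
        break
n, d = p * q, pow(e, -1, phi)

# The *caller* hashes; the signer treats the 20-byte digest as opaque input.
# That split is what TSS_HASH_OTHER expresses.
digest = hashlib.sha1(b"arbitrary data to authenticate").digest()
signature = pow(int.from_bytes(digest, "big"), d, n)  # raw-sign the digest
recovered = pow(signature, e, n).to_bytes(20, "big")  # verifier's side
print("signature verifies:", recovered == digest)
```

Since the signer never re-hashes, any 20-byte blob works, whether it is a real SHA-1 digest or raw data you chose yourself.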
Also relevant: https://developer.apple.com/documentation/security/certifica... and CryptoKit https://developer.apple.com/news/?id=3bwfq45y and https://developer.apple.com/documentation/cryptokit (Note that the Linux version of CryptoKit doesn't get any of the fancy hardware features, you'll need an Apple OS for them.)
The article has a couple of other weird faults, too:
1. I'm not sure why the author complains about FIDO2 being backwards compatible with U2F/CTAP1. The article even incorrectly claims FIDO2 is "a 3rd incompatible standard", only to counter its own point a few paragraphs later when it explains CTAP. People not having to throw away their perfectly fine old devices is a good thing in my book.
2. "All FIDO standards are web-centric and aren’t designed with any other client software in mind" the first part is true, the second part not so much. For example FIDO2 supports silent authentication (no user interaction) while WebAuthn explicitly does not. It also supports the hmac-secret extension which is used for offline authentication with Azure Active Directory and IIRC no WebAuthn browser implementation exposes this extension to web apps.
See e.g. the discussion on https://github.com/w3c/webauthn/issues/199
Specifically, you can get that Microchip HSM in a form factor that plugs into a click shield, then plug the click shield into a Raspberry Pi's GPIO pins. You now have a PKCS#11-usable HSM from a Pi. Including the click shield still puts the cost at <$20.
(I have a few such setups lying around because my $dayjob includes a PKCS#11-consuming application that runs on such setups.)
> Since the U2F device creates and stores asymmetric key pairs, and is able to sign arbitrary “challenges”, can I use it as a general-purpose hardware key store?
You can however do it "the other way round" and use a private key from which your U2F credentials are derived. And that same private key can be used for many other applications (or none). For example you can use the Ledger Nano S (originally a cryptocurrency hardware wallet), which has an HSM, with your "seed" (say a 256-bit secret, stored as 24 words you hide), to log in to sites using U2F.
Additionally, as long as you've got your secret, you can reinitialize your Nano S (or another one) as a new U2F device, and there's no need to reset your U2F credentials on the site: the newly initialized device will work exactly as if it were the old one.
Fun fact: the CTO of Ledger was part of the group working on the original FIDO specs.
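The derivation idea fits in a few lines. This is not Ledger's actual scheme (they use BIP32-style hierarchical derivation); it's a minimal sketch of how one master seed can deterministically yield an independent key per site:

```python
import hashlib
import hmac

def derive_site_key(seed: bytes, app_id: str) -> bytes:
    """Deterministically derive a per-site private key from the master seed."""
    return hmac.new(seed, app_id.encode(), hashlib.sha256).digest()

# Stand-in 256-bit seed (in reality: the secret behind your 24 recovery words).
seed = hashlib.sha256(b"24 recovery words go here").digest()

k1 = derive_site_key(seed, "https://github.com")
k2 = derive_site_key(seed, "https://accounts.google.com")

# The same seed on a freshly reinitialized device reproduces the same keys...
print("re-derived key matches:", derive_site_key(seed, "https://github.com") == k1)
# ...while each site still gets its own independent key.
print("sites isolated:", k1 != k2)
```

This is why reinitializing a replacement device from the same seed "just works": the per-site keys are recomputed, never stored.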
According to the Yubico explanation linked from the article, U2F includes cloning protection (an authentication counter, which the site should check has increased since its last known value), so that might not actually work if the site you are authenticating against is well implemented (unless the Nano S also lets you back up the counter value).
I'm using it on several sites and have already swapped / reinitialized my U2F devices... It works, including on GMail. As I understand it, the more recent WebAuthn standard is going to be supported by Ledger soon.
I don't think these are non-compliant or badly implemented websites, although I'm not sure what the specs say.
I do personally love that I can back up my "seed" and know that by going to pick my safe at the bank I'll always be able to reinitialize an U2F device and I'll also really love that it displays "Google" on the Ledger Nano S's tiny screen.
Pricey little thing to use as "only" an U2F device: about 60 USD but I like it a lot.
> (Unless the Nano S also lets you back up the counter value)
Late reply but... As I understand it, as long as the counter is monotonic it'll always work. What Ledger does (and apparently the Trezor too, from reading this thread; another device with an HSM) is, upon initializing the U2F app on your hardware device for the first time, set the counter to the current timestamp.
So basically: once you use another device to log in, you cannot use the old one, unless you reinitialize it (and then it's the other one you cannot use, unless you reinitialize that, etc.). These devices do not have a clock, which is why it works that way (in the case of the Ledger Nano S / Ledger Nano X at least).
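The timestamp trick can be sketched like this (hypothetical class and numbers, just illustrating the scheme described above):

```python
import time

class U2FCounter:
    """Sketch of the timestamp-seeded signature counter described above."""

    def __init__(self, init_time=None):
        # On (re)initialization, start from the current Unix time instead of 0.
        self.value = int(init_time if init_time is not None else time.time())

    def sign(self):
        self.value += 1  # strictly increasing for the lifetime of this init
        return self.value

old = U2FCounter(init_time=1_600_000_000)  # hypothetical first initialization
for _ in range(100):
    last_seen = old.sign()                 # the site records the latest value

# A device reinitialized at any later moment starts above anything the old
# device ever reported, so the site's monotonicity check still passes.
new = U2FCounter(init_time=1_650_000_000)
print("monotonic across reinit:", new.sign() > last_seen)
```

The trade-off is exactly what the comment describes: whichever device signed most recently "wins", and the other must be reinitialized before the site will accept it again.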
But isn't it the whole point that these devices never let you have the secret?
You may have thought (as I did at first) they meant a Yubikey; there is after all also a model called 'Nano', but not, I think, 'Nano S'.
This is a feature, not a bug. It's one of the core points: Everything is in hardware, so software can't attack it. In theory, a bug in Android/iOS could expose my authenticator app secrets, but even a full root compromise of the host operating system can't extract the secrets or trigger a signing from a U2F key without me physically touching it.
How do I prevent such a scenario from happening? Is there truly a foolproof way of doing hardware authentication?
By enrolling my spouse and cross-registering all keys, both of us are safe. We might lose our keychain, but we will always find each other, even when we are traveling.
This works for Google and GitHub, but not every service allows for multiple keys. But this should be a no-brainer imho.
I also don't use my personal key for work stuff and recovering my work key is my sysadmins problem :)
That said, when I had admin accounts at work, we used TOTP with a similar scheme: when we registered important (admin) accounts, we shared the second factor (the QR code) between two people, and sometimes I printed the QR code itself. This works for AWS, GSuite, GitHub, etc. I still receive calls from old colleagues for TOTP codes occasionally :)
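Sharing the QR code works because TOTP is a symmetric scheme: whoever holds the same secret computes the same codes. A minimal RFC 6238 sketch (SHA-1 flavor, as Google Authenticator uses), stdlib only:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Two people who scanned the same QR code hold the same secret,
# so at any given moment they generate identical codes.
shared_secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(shared_secret, for_time=59, digits=8))  # RFC 6238 vector: 94287082
```

This is also why a printed QR code is a full backup: the secret in it is all the algorithm needs.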
What about steganography in tattoos? That would be pretty interesting. Perhaps combining that data with a short code or seed that you memorize.
The tech exists. My neighbor works for a company that does encoding of data on packaging. Goal is that every cereal box on the grocery aisle can be individually tracked.
I think a good system should include some sort of revocation mechanism, like a master key you can keep in a safe to revoke other devices.
But I'm thinking of a revocation scenario, where a key is stolen. In that case the attacker can just remove your keys first.
If the hacker gets your key, and your password it's game over.
But cracking the password will hopefully take some time, and systems usually have retry limits etc., so if you discover your key is lost, you hopefully have some time to revoke it.
I personally would not use it for password-less login, as it is only good as a second factor.
If your threat model includes any real likelihood of people capable of stealing your keys and cracking your passwords, then 2FA is only a small part of the opsec you need.
Things like NSA's Zero Trust Security model comes to mind https://news.ycombinator.com/item?id=26549363
If you're at that level, you probably need specialist infra.
But potential compromises don't mean you're less secure than before; a Yubikey would still make you more difficult to hack.
> you can use the Ledger Nano S with your "seed" (say a 256-bit secret, stored as 24 words you hide), to log in sites using U2F.
> Additionally as long as you've got your secret, you can reinitialize your Nano S (or another one) as a new U2F device and there's no need to reset your U2F credentials on the site as the newly initialized device shall work exactly as if it was the old one.
If I read that right, some keys rather than having a hardcoded unique seed, will let you set your own. Which implies you can have multiple functionally-identical backup keys locked up securely somewhere. If true, that significantly reduces key loss anxiety, and increases my interest in hardware MFA.
Anyone know which keys support this (aside from the mentioned Nano S and Trezor)? What's the magic keyword to look for in specs?
Set your own, or it generates one for you using a hardware CSPRNG and then lets you write it down in a convenient way: 24 words out of a dictionary of 2048 words, so 264 bits in total (256 bits of entropy plus 8 bits of checksum). Heck, you can use 12 or 18 words too. I think the BIP39 / BIP44 standard even allows for 15 words, but Ledger only supports lists of 12, 18, and 24 words, although don't quote me on that.
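The word-count arithmetic checks out: BIP39 appends one checksum bit per 32 bits of entropy, then splits the result into 11-bit word indices, so 15-word lists are indeed valid (160-bit entropy):

```python
def bip39_word_count(entropy_bits: int) -> int:
    """BIP39: checksum is entropy/32 bits; each word encodes 11 bits."""
    checksum_bits = entropy_bits // 32
    total = entropy_bits + checksum_bits
    assert total % 11 == 0  # holds for all standard entropy sizes
    return total // 11

for ent in (128, 160, 192, 224, 256):
    print(ent, "bits ->", bip39_word_count(ent), "words")
```

For 256 bits: 256 + 8 = 264 bits, and 264 / 11 = 24 words, matching the figure above.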
> Which implies you can have multiple functionally-identical backup keys locked up securely somewhere.
I don't even bother having multiple devices ready to use. I simply store the seed (once again: as a list of 12/18 or 24 words) on a sheet of paper and I store this in vaults/safes.
> Anyone know which keys support this (aside from the mentioned Nano S and Trezor)? What's the magic keyword to look for in specs?
I've got a Ledger Nano S only for this. It's really a nice little device. I don't work for Ledger, btw! They're a bit pricey but, indeed, the anxiety of getting locked out of your account or having to go through crazy recovery procedures goes away.
It's a pricey little device but not that crazy expensive: about 60 USD I think.
> What's the magic keyword to look for in specs?
I don't know but the U2F "nano app" does work with Google and other sites and I know that Ledger is working on the more recent Webauthn support.
you can do a lot with a yubikey but idk if you can actually change the U2F secret
No, you can have a random new value (effectively like you bought a different one), but you won't learn what that is, so it doesn't help you and there's no way to "keep the secret hidden somewhere" except in the sense that it's hidden inside the device where it belongs.
The easy way: add both a security key and OTP (e.g. Google Authenticator). You have one rule to adhere to strictly: if you clicked on a link, you MUST use the security key, because that's the one that protects you against phishing. When you don't have your security key, you can use OTP, provided that you typed the URL rather than clicking on it from an email/text.
The strong way: enable 2+ security keys. You keep one in a safe at home. You always use a security key to sign in. On your Google account you can also enable Advanced Protection, which is essentially this plus some extra restrictions on API access.
It’ll be a weekend project to move all of my smart home devices and automation away from my primary email account and over to a home media account. Then I will turn on Google Advanced Protection. Advanced Protection also puts a delay on logging back in if you are locked out, plus some extra restrictions.
Until then I’m just using the key but not Advanced Protection. If I were starting fresh (a new smart home / Google services), I’d make an email just for that and use my original address (a clean name with no numbers, from 2004) strictly for mail / financial passwords / high-priority accounts. The divide would be: smart home / subscription services used by the family get the lower-security account, everything else gets the high-security one.
I’d be interested to hear how others deal with this problem.
0. This mostly matters for the accounts that need to be particularly secure (eg email, maybe GitHub or Facebook or Twitter depending on how much you care about them). Also accounts for money if they offer this kind of security.
1. Set up a yubikey. Try to only ever use the yubikey for logging in.
2. Set up some account recovery codes, print them out, and put them in a safe place (i.e. somewhere that you don’t live or work, though you could probably also keep copies there. If you have a folder of personal and account information ready in case you die unexpectedly, put it there too).
3. Set up Google Authenticator on an iPhone so you can get in if you don’t have access to your keys. You should treat these more like the recovery codes than the yubikey—be very careful about entering the name of the website and checking the certificate because they won’t protect you from phishing.
This is something that astonishes me. So many financial institutions still have:
- (low) max length limits on passwords
- restrictions like no special chars in passwords, or exactly this many numbers etc
- over reliance on a pin where a password would be more suitable
- no 2fa, or at best SMS based 2fa
- ridiculous security questions as if someone's favourite colour etc is drawn from a large pool of values
As a developer I can guess that most of these restrictions probably stem from a mountain of tech debt, but I would've expected this to be a priority for such companies. It makes me wonder if they are bound by some outdated regulation or something else preventing them from doing better.
As others have pointed out, account recovery should always be provided in some form. A common way seems to be the use of a one-time password that can be used to regain access.
What I'd really like is a clear best practice for this for developers to implement though. It doesn't help if your authentication method is strong when the recovery option has glaring security holes in it. It now looks like every site that uses WebAuthn just rolls its own recovery solution.
I've settled on OnlyKey (https://crp.to) because it's OSS and you can back it up and restore. Backup can be GPG encrypted.
24 slots, programmable to store URL, U/N, P/W, 2FA, with the ability to click the address bar in a browser, hit a key on the OnlyKey and watch as it types it all in without any other interaction.
It stores MANY types of auth.
Not affiliated, but a very happy user.
It's a device you'll learn more about as you use it, but YubiKey functionality is achieved in a couple of minutes.
The dev managed to answer everything put to them with alacrity.
The question of code quality was of no consequence as even ugly code can work, and it's got more eyes on it than closed source.
In my threat theatre this device is far more than adequate for securing GitHub, GOOG, and a couple of GPG/ssh keys.
Should I ever become a spy, I'll probably revert to speaking to strangers about how red sparrows fly at certain times of day again.
You can have other forms of authentication — PostIts with backup OAUTH codes, console passwords, root password etc — but those have to stay at home in the vault because a piece of paper in the outside world is too dangerous to lose.
You want both. You’ll lose your hardware keys eventually.
Most services let you register multiple devices. I typically use a Yubikey nano and a regular Yubikey. Then I have a backup on my keyring, but don't have to get it out every time. With WebAuthn becoming more popular, you can also use things like Windows Hello, Face ID, etc. Generally, I try to register all of those methods, and then if one device fails, I still have plenty of backups. But, some services don't let you register multiple devices (AWS comes to mind). In that case, you'll have to make sure you have a backup recovery method. (And those recovery methods obviously reduce the security of your account, SMS is notoriously weak.)
Most services and web sites also give you emergency login codes to print out, though.
...which people will still not do, or misplace/erase due to disuse, etc.
Security and availability are always at odds with each other. The question that you should always have when choosing a level of security is "does the risk of denying everyone access --- including myself --- outweigh the risk of someone other than myself gaining access?"
But I would suggest SoloKeys instead.
I use these to log into my Linux systems, in combination with a password. pam_u2f was pretty easy to setup.
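For reference, the setup is roughly two steps (a sketch based on pam_u2f's defaults; paths and module options may differ per distro, so check your packaging):

```shell
# 1. Register your key: pamu2fcfg emits a "user:keyhandle,pubkey,..." line --
#    touch the key when it flashes. Default authfile location shown.
mkdir -p ~/.config/Yubico
pamu2fcfg >> ~/.config/Yubico/u2f_keys

# 2. Require the key in the PAM stack, e.g. in /etc/pam.d/sudo, after the
#    password modules so it acts as a second factor:
#       auth    required    pam_u2f.so
```

Using `required` means a missing or failing key blocks login, so test in a second terminal before closing your root session.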
You could also get 2 Yubikeys, but might be out of luck if the website doesn't support that.
(Also, some Android phones can act as a hardware key.)
There's probably also a bit of psychological biases at play, like: "if your HSM is 10x cheaper than everyone else's, it must be crappy and insecure".
For example, a normal YubiKey is unrated, a YubiKey FIPS is level 2 rated, and a Thales HSM is level 3 rated with all sorts of zeroization hardware.
I think they mostly require an outside evaluator to run a sort of documentation process that costs somewhere around $500k for a new product, depending on complexity, and maybe $50k just for up-versioning.
It's generally hard to get that money back on a product since the market of organizations that need the certification is tiny and then the larger overall market for a security product is also usually small and not so happy to defray those costs.
Software was the same, hardware looked the same. The crypto module is validated only with the $$ hardware.
Sometimes the non FIPS devices will have other algorithms not on the FIPS list.
To use PKCS#11 for a particular device, you need a module (shared library) to translate between the C API and the actual hardware. This module is usually vendor-specific.
If I develop software with PKCS#11 support, I'm basically asking every user to find a PKCS#11 module from their device vendor and install it in the right place.
With U2F at least the hardware wire format is standardized: https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid...
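Concretely, "installing the module in the right place" means the application dlopens a vendor shared library and resolves the one entry point PKCS#11 does standardize: C_GetFunctionList. A sketch (the module path here is hypothetical; every vendor installs theirs somewhere different):

```python
import ctypes

# Hypothetical path -- OpenSC, SoftHSM, Yubico etc. each ship their own
# module and install it in a different, vendor-specific location.
MODULE_PATH = "/usr/lib/example-vendor/example-pkcs11.so"

def load_pkcs11(path):
    """Load a PKCS#11 module and resolve its one standardized entry point."""
    try:
        lib = ctypes.CDLL(path)
    except OSError:
        return None
    # Every conforming module exports C_GetFunctionList, which hands the
    # application a struct of function pointers (C_Initialize,
    # C_OpenSession, C_Sign, ...) through which it drives the hardware.
    return lib.C_GetFunctionList

entry = load_pkcs11(MODULE_PATH)
print("module loaded" if entry else "no module at that path")
```

This is exactly the burden described above: the standard fixes the C API, but the user still has to obtain and locate the right vendor library.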
The issue with existing smart card technology is not lack of standardization; it's too much standardization: too much flexibility and stacks that are too deep.
Vendors ship their own PKCS#11 drivers as a convenience. But PKCS#11 isn't the only high-level API. The other is PC/SC, which is actually simpler than PKCS#11, though it often requires more local support from the OS. But not necessarily. You can write PC/SC shims that talk directly to hardware, or even to Vault servers if you want, without OS support.

I have my own rapid driver framework that supports all of these. For example, I have a PKCS#11 and PC/SC client driver which can use the Apple T1 chip to authenticate to a Vault server for remote signing using Transit keys--the only engine that supports ad hoc remote key operations. This permits sharing GnuPG (via PC/SC) and OpenSSH (via PKCS#11) keys between users, without actually disclosing the keys, though Vault actually makes it difficult to do this securely, as you need to write ACLs to prevent transit keys from being exportable.
BTW, you don't need special drivers to use Yubikeys, either. They just provide them as a convenience because the FOSS ecosystem is confusing and... non-optimal.
I'm hoping to release a macOS product soon and as part that may release some of my framework as FOSS.
While the low level API is complex and the UI often isn't ideal, PKCS#11 has been a godsend for interoperability because it abstracts out the low level hardware interfaces and other implementation details. It lets your application seamlessly access hardware-backed keys whether the keystore is sitting on USB (Yubikey), ISO7816 (smartcard), I2C (TPM), or something else. On the application side, adding PKCS#11 support only takes about a dozen lines of code, after which the app can use hardware backed keys/certs to perform TLS negotiations.
There's nothing standing in your way. However, SSH has the advantage that it's commonly used interactively, so FIDO is a good fit. Any protocol that is most often used silently with the user maybe not attentive or not even present won't work well - it's easy for your phone's mail client to just present the same password it remembers each time it re-connects, but it would be annoying if you need to touch the fingerprint reader each time. I can imagine (if anybody wanted to) retro-fitting SMTP submission to do FIDO, although I don't know if there's a practical way to hack it into the existing SASL AUTH layer.
Basically every site will offer to "never ask from this device again" and then I have a key that I haven't used in a long time.
Which, yes, I can use nothing but incognito windows, but that is too extreme and a single time I forget breaks it. Why can't I set it so that I need to use the key once a week?
Ideally with the following features:
* Stores keys securely in the Hardware-backed Keystore
* Authentication via fingerprint + periodically via password
* Allows to backup the secret key during setup
* Supports multiple devices
* Open source
* Works over Wifi
* Works with Linux desktops and Android phones
I wrote an authenticating proxy that uses Webauthn: https://github.com/jrockway/jsso2. I don't think you should use it, but you can fire it up locally and try enrolling the various devices. Actually, you can just use Duo's demo: https://webauthn.io/
The server operator can choose to reject NFC, but the implementation/standard doesn't require that behavior.
Edit: Actually, I don't understand your question. Are you trying to authenticate on a PC using your phone as the security key? If so, I don't know how to do that. I just enroll Windows Hello and Face ID.
I wonder how long before Akamai kills it.
Main downside is that it doesn't support multiple devices