Security keys are now supported for SSH Git operations (github.blog)
215 points by Androider on May 19, 2021 | 126 comments



I tried this on my MacBook Pro today, on the latest OS version. First I installed openssh with brew. Then I found I could barely get the new SSH key working with GitHub: I frequently got `sign_and_send_pubkey: signing failed for ECDSA-SK / invalid format` errors while trying to commit, and I was not actually prompted to press the Yubikey, despite it being plugged in and ready. I keep my mini Yubikey USB-C in my laptop persistently, and I wonder whether that could be the issue.

Anybody else having similar issues?


Upon further investigation (copy-pasting things from random internet webpages like a good HN user would do), running the following two commands seemed to get the Yubikey SSH key working a little more reliably with GitHub for me, including with Tower.app after making sure it has the correct PATH set up:

    eval "$(ssh-agent -s)"
    ssh-add

Without the `eval ssh-agent` step, `ssh-add` fails to add the ECDSA-SK key, since the SSH implementation that ships with macOS doesn't seem to support it natively. I don't understand why running `eval ssh-agent` resolves it, because that agent shouldn't persist very long with macOS's own SSH agent trying to run.
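
For reference, the rough sequence that ended up working for me, as a sketch assuming Homebrew's OpenSSH on Apple Silicon (the /opt/homebrew paths and the id_ecdsa_sk filename are my assumptions, adjust for your setup):

    # start an agent from the Homebrew OpenSSH; the stock macOS agent lacks -sk support
    eval "$(/opt/homebrew/bin/ssh-agent -s)"
    # explicitly add the security-key-backed key stub
    /opt/homebrew/bin/ssh-add ~/.ssh/id_ecdsa_sk
    # sanity check against GitHub (should prompt for a touch)
    /opt/homebrew/bin/ssh -T git@github.com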


ssh for macOS has some Keychain integration built into it in lieu of ssh-agent, or something like that IIRC.


I think the keychain is used to store passphrases and the ssh agent is used to store the keys themselves.


Yeah, I believe my Mac machine has the following in `~/.ssh/config`:

    Host *
      AddKeysToAgent yes
      UseKeychain yes


Yubikeys seem to have multiple modes and have trouble switching between them. For instance, in GPG mode it's very difficult to switch back to U2F mode without unplugging. I wonder if this is your issue.


Interesting! Any ideas of how to log which mode it's in to help debug this further? Would disabling unused modes help prevent this?


I don't understand - what's the new feature here? Yubikeys have always supported SSH authentication, and you have always been able to use that for git. What is the new functionality that Github has added?


Your Yubikey (but not Yubico's cheaper Security Key product and the dozens of other similar cheaper products) is able to do public key signatures exactly like the ones SSH supports anyway, so GitHub didn't do anything to support those.

But modern OpenSSH can do FIDO, the same technology that drives U2F and its replacement WebAuthn, and which is in all those cheaper and more popular products. FIDO won't sign arbitrary data with its keys; it will only perform a few very specific operations, so OpenSSH grew a whole separate set of public key authentication types to support that approach, and GitHub is announcing support for those extra types.

There are some neat features beyond "this makes cheaper devices work" but GitHub mostly does not use them, although they do choose to require that your Security Key verify you are present (typically by touching a sensor or clicking a physical button) whereas your existing Yubikey setup might not be doing that and GitHub can't force you to.


Another big benefit is ease of use setting up the Yubikey. My understanding is that these newer key types can be generated very easily using the ssh tools which already exist on your systems. Previously, it was very common to set up GPG keys and integrate them with SSH, which is not only more work but also fairly involved.

These new key types make it a lot easier to get up and running with storing your SSH key on your Yubikey. I would have used this format instead of GPG when I was setting up a Yubikey a month ago, but GitHub (and some other providers I use) didn’t support a new enough OpenSSH version.


I believe this adds support for the new -sk key types, in which the ssh server checks the presence of a U2F token on the client, rather than using something like gpg-agent to effectively store your SSH keys on the Yubikey.


GitHub now supports ecdsa-sk and ed25519-sk type keys. OpenSSH has supported those keys since 8.2, but GitHub did not until now.

With the -sk keys, you no longer need to install any software (gnupg, pinentry, the yubikey CLI, etc.) or run gpg-agent, which has always had reliability issues. It should all Just Work out of the box now. GitHub was the last piece of the puzzle for us; for example, I can now change our team's onboarding docs from "follow this long OS-specific guide to set up SSH with gnupg and ykman" to "run ssh-keygen -t ed25519-sk".

There's some confusion in this thread: you can use ssh-keygen to generate a public and private key pair where the private key file is just a stub and the validation still happens on your physical YubiKey, OR you can skip the private key stub entirely with the "-O resident" option to ssh-keygen, allowing you to add your key to your ssh agent on any machine you plug it into (for good and bad).
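
For anyone following along, a minimal sketch of the two variants (assuming OpenSSH 8.2+ on the client and a FIDO2-capable key for the resident option; use ecdsa-sk instead if your key lacks Ed25519 support):

    # non-resident: a key-handle stub is written to ~/.ssh, the secret itself never leaves the device
    ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
    # resident: the handle lives on the device, so any machine you plug it into can load it
    ssh-keygen -t ed25519-sk -O resident
    ssh-add -K    # pulls resident keys from the plugged-in device into the agent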


Not always. On some systems (Fedora 34 in my case) you still have to install a FIDO2 package (on Fedora it's called fido2-tools), otherwise ssh-agent will not be able to function with the -sk keys ("agent refused operation" error message). I did not have to install any extra tools on the target system though.
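
In case it helps anyone else hitting "agent refused operation", the fix was just installing that package (name as on Fedora; other distros ship the FIDO2 middleware under different names):

    sudo dnf install fido2-tools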


Yeah, I'm confused as well. I've been using my yubikey for ssh auth for years now with github and other services.


Looks like they added the "-sk" options to Git Bash for Windows. That seems to be what they are promoting for the most part with this post, though the title is a bit deceptive.


Other than being a random untrusted USB device, is there any reason to not use the cheapest generic U2F device you can find?

I've been wanting to start using them for a while, but yubikeys are too expensive for me to get a bunch of.


I would try to find a key that supports ed25519, since there are concerns about the NIST curves used in ecdsa:

https://git.libssh.org/projects/libssh.git/tree/doc/curve255...


Remember that GitHub U2F key 6 years ago? It only supports the ecdsa keys... I feel a tiny bit grumpy that it took 6 years for me to actually be able to use the thing for anything important, only to discover it's already outdated.

I hope many services will support them nonetheless.


What problem are you trying to solve? Yes, you can get one for $10 instead of Yubico's $20, but Yubico's have been researched the most and, importantly, they are very reliable. Many people put them through a lot of abuse and they keep working; and while you should always buy a pair so you have a backup, it would be a pain if one stopped working.

I'm asking because I'm curious about the use case where you think you need them (even a bunch), but consider them too expensive compared to the value you are trying to protect.

Reliability would be a good enough reason to me even ignoring all security aspects.


1. Decide that USB C is a must-have feature. No $25 blue yubikey for you - prices now start at $50.

2. Decide you want two keys per user, in case they lose one.

Now you're looking at spending $100+ per person.


Is that an unreasonable expense for something like two factor SSH auth?


Most definitely. Not all people are made of money. $100 can buy a lot of things, so if I can't justify it for professional use, there's not much of an incentive to spend that much.


Yes.

And using a key like this isn't even two factor.



$25 USB-C: https://solokeys.com/products/solo-usb-c?variant=23528357560... For those not familiar, solokeys was an open source project that became a company via kickstarter (and now indiegogo)


Assuming this is a business, you're already spending thousands per machine per user.

And a developer that's down for half a day from a failed key is way more expensive than that $10 of savings.


If I'm getting a physical security key, I might as well pick something that has rudimentary PGP support. The cheapest Yubikeys do not, and the ones further up the range that do support it are definitely outside the impulse/curious-buy range.


Smartcards cost less than a dollar, and are omnipresent.

I think you can already integrate PCSC with openssh.

A good thing about smartcards is that ones compatible with CSP are driverless and plug-and-play in Windows. This means they can enjoy at least some semblance of keylogger protection for the key password/PIN via WinCAPI.


Got any links about how to use a smartcard?


If you're interested in contactless cards with an option in the future to upgrade to something like Omni ring (https://store.nfcring.com/products/omni) or to use them with your phone, then do this:

1) Buy a contactless card reader from a good source e.g. https://www.javacardsdk.com/product/acr1252u or last two from this table https://webshop.d-logic.net/nfc-rfid-device-comparison, don't buy NFC ones, you need smartcard support specifically.

Also steer clear of cheap ACR122U readers from eBay or AliExpress; for some reason there are a lot of fakes https://www.acs.com.hk/en/press-release/2266/advanced-card-s...

2) Buy a few contactless javacards e.g. https://www.javacardsdk.com/product/j3h145/, don't buy EMV ones unless you're Europay, Mastercard or VISA.

3) Once you get them, install opensc, pcsc-lite and ccid, get gp.jar from https://javacard.pro/globalplatform/ and read some pages from https://github.com/philipWendland/IsoApplet/wiki; that will get you started (see the sanity-check sketch below).
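
Once the reader and cards arrive, a quick sanity check using the opensc tools mentioned above (IsoApplet here is just the example applet from the linked wiki):

    opensc-tool --list-readers   # the reader should show up over PC/SC
    opensc-tool --atr            # the card should answer with its ATR
    pkcs11-tool --list-slots     # once an applet such as IsoApplet is installed and personalised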


Step 1 is to buy a reader, any reader which is ISO 7816 compliant is fine.

Next, buy a smart card. The most famous brand I can think of right now is Gemalto, but there are lots of options. You can buy them in quantities of 1 extremely cheaply from AliExpress, but I'm not sure of the quality.

Smartcards are just little computers which run Java applets (JavaCard), and they come either blank or with software already loaded on them.

If they are blank you have to load software onto them. One open source option is CoolKey.

In either case you will need software on your computer to talk to the software on the card to ask it to do things, like sign an arbitrary piece of data. This software is called middleware (the stack looks like Application -> Middleware -> PC/SC subsystem -> smartcard reader driver (usually CCID compliant) -> smartcard software, so why it's called middleware I don't know).

For Windows, I only know for sure that PIV (US Government, NIST SP 800-73) card applets are supported, but there is a whole "minidriver" thing. I suspect you'll have to read the applet (or card, if preloaded) documentation to know for sure. macOS used to have a cryptographic layer called tokend, but it's deprecated and replaced with something else. For other things, PKCS#11 is the standard mechanism for talking to the card's application.
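
For the PKCS#11 route, the OpenSSH side is just pointing the client (or agent) at the card's module; a sketch assuming OpenSC, where the module path varies by OS and distro:

    # one-off connection using the card for authentication
    ssh -I /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so user@host
    # or load the card's keys into your running agent
    ssh-add -s /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so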

Feel free to reach out with further questions.


Excellent write-up on the how-to. A note from me: Windows has also supported GIDS smartcards for a while now, which means the Google Titan key (Feitian ePass FIDO-NFC) will also work (both as a smartcard and as a FIDO key).


https://www.rcdevs.com/docs/howtos/epass/epass/

This is a howto for the USB key; a smartcard will be basically the same, except that you will program the card first as the seller instructs, or just enter the PIN if it is already initialised.


A lot of smartcards won't support FIDO2 though, which is what the web is moving to going forward.


It will support whatever you install on it; there are some U2F applets and some stalled work on FIDO2 ones, last I checked. All the necessary tech is there; the issue is browser support. Google has this https://chrome.google.com/webstore/detail/smart-card-connect..., but it's Chrome OS only.


It's fairer to say it's going sideways, not forward with them.

FIDO is not a replacement for smartcards, nor a complement to them. FIDO is a "just better than passwords" level of authentication.

The gold standard for HTTPS security, for example two-sided mutual auth with public keys at the TLS level, is only there with smartcards.


Doing mutual auth is great security, but it has a horrible privacy story. Advertisers would, I'm sure, love knowing that this visitor to PornHub's custard pie fight section is the exact same person who bought the book "In Praise of the Klan" on Amazon and the one who bought take-out from a Chinese restaurant in Denver last Thursday. The clever thing about, say, WebAuthn is that you get an excellent privacy story to go with your security. Even if PornHub, Amazon and that Chinese restaurant all conspire against you, together with the advertisers, they don't end up learning whether you're the same person, even though you used the same Security Key all over the place.


Please. We prefer it be called the Hoboken squat cobbler.


Yes, but, it's very unlikely that these reasons outweigh the benefits for you from having a Security Key.

* Cheaper devices may not support cool new features. For example FIDO 2 allows resident credentials †. The cheapest behaviour for SSH is that your laptop (or whatever) stores some data, and you need that data plus the Security Key to authenticate to GitHub but with resident credentials that extra data can live on the USB Security Key and so that's a huge benefit if you git push from random PCs. There are several features like this - for example one way to replace that boot-up password on encrypted disks uses another optional feature of Security Keys - and there may be more in the future, the cheapest devices only have the core feature. But hey, if you discover you want those features you can always buy a fancier device later.

* The cryptographic Quality of Implementation can matter. What we see today is some corner cutting maybe, some lack of polish, but nothing that seems like a plausible avenue of attack. But I haven't purchased every supposed different brand of Security Key, maybe some of them are quite awful. It seems likely that unless they're intentionally made to weaken your security they will always be much better than stuff like SMS 2FA.

Here's a rather old post by Adam Langley about the crypto problems he found in various Security Keys:

https://www.imperialviolet.org/2017/10/08/securitykeytest.ht...

* The physical QoI can really vary. If you're buying the cheapest you can find, maybe the touch sensor or button wears out much faster than expected, or the USB connector is a tighter fit than you'd like. Or maybe not. Your mileage may vary a lot. I own a device with a ludicrously bright LED that is on whenever it's powered up, not just when authenticating: always, if it has power. Doesn't bother me, but a lot of people would hate that.

† Essentially without resident credentials the device has no "memory" of who you are. On web sites the natural back-and-forth makes this feel normal. You tell the site your email address or username, it finds one or more IDs in its database and asks your Security Key to authenticate with one of those IDs, the Security Key recognises an ID and does so. But a cheap Security Key can't remember that ID, it just knows (because of Authenticated Encryption if you care about the technical details) when it sees one it can authenticate. With SSH the protocol is designed differently, the remote site doesn't get an opportunity to store an ID and then ask your Security Key to authenticate, so that ID needs to live in a local disk file, unless you have resident credentials.


> With SSH the protocol is designed differently, the remote site doesn't get an opportunity to store an ID and then ask your Security Key to authenticate

Why not?

Being able to work around a gap/flaw in the authentication protocol is nice but I definitely wouldn't call that "cool".

Also a yubikey being able to hold 25 of those is kind of pathetic.


Why not, as in, why was the SSH standard, finished in 2006, not prepared for the way we'd prefer to authenticate in 2021? Because of time's arrow, locally stuff happens in order.

I was actually impressed that the OpenSSH team figured out a way to make this work at all without adding an entirely new mechanism to SSH which would then have taken ages to propagate out into the world and doubtless been the source of weird problems with poorly made proprietary SSH servers for many years after that. If you go back far enough in HN there's a comment where I supposed that couldn't be done.


> I was actually impressed that the OpenSSH team figured out a way to make this work at all without adding an entirely new mechanism to SSH

This is what they did, though: FIDO2 requires client and server support for the new "-sk" key types, since FIDO2 requires a very specific challenge/response format and does not just allow signing arbitrary hashes.

The older way of supporting SSH keys in security keys is through GPG's "smartcard" support, which requires using gpg-agent as an SSH agent and a security key that can speak CCID (i.e. pose as a smartcard reader with a permanently inserted smartcard over USB). That's what Yubikeys do, among others.


No, they just added a handful of new public key types, that's not even as invasive as adding a new authentication method to SSH.

For example, if your client knows some new FIDO backed credentials and is wondering if the proprietary ten year old SFTP server you're connecting to will trust them (it won't) it does exactly the same type of thing it did for other new OpenSSH key types, such as Ed25519. The server doesn't recognise these new types, just as it didn't recognise Ed25519 and no new exciting problems are discovered even though the people who wrote that server only read half of RFC 4252 while squinting and it only actually does RSA and password authentication and gets both of these wrong.

If they'd added a new method to support this, let's call it "securitykey" chances are that crappy server blows up whenever you just mention that you've heard of this "securitykey" method that was not explicitly listed in the document the programmers half-skimmed. Yes I have seen real SSH servers that behave this way, it isn't pretty and good luck getting somebody who has chosen to spend money on a bad proprietary SSH server to replace it with something that's not garbage.

And that still wouldn't enable them to sidestep the resident credential problem. To do so I think they'd need to reach down into the protocol layer and add another message, which, again, would likely be compatible with all the competent SSH implementations on your preferred Free Software platform, but would undoubtedly break the expensive half-arsed solution somebody spent $5000 on.

Also, if you do build that you run into another problem, even in a shiny Free Software environment, where do these IDs the server is now responsible for live? Is the SSH server now writing to files in the home directories of users it is authenticating? That sounds like a recipe for exciting new security bugs, not what we wanted.


> To do so I think they'd need to reach down into the protocol layer and add another message, which again, it'd likely be compatible with all the competent SSH implementations on your preferred Free Software platform, but undoubtedly break the expensive half-arsed solution somebody spent $5000 on.

The message wouldn't show up unless the server offers this new key type, would it?

> Also, if you do build that you run into another problem, even in a shiny Free Software environment, where do these IDs the server is now responsible for live? Is the SSH server now writing to files in the home directories of users it is authenticating? That sounds like a recipe for exciting new security bugs, not what we wanted.

It's not the job of the SSH server to write to authorized_keys, so it's not the job of the SSH server to write these blobs either.


> The message wouldn't show up unless the server offers this new key type, would it?

Good point. I think you can probably guard this so that neither clients nor servers ever see the new SSH authentication message if they aren't onboard with this plan, but it's a bit complicated.

The problem is on your first flight you don't know the ID, so your SSH_MSG_USERAUTH_REQUEST will definitely fail. That's OK, the protocol expects this, it's how lots of things already work. The server sends SSH_MSG_USERAUTH_FAILURE but unlike REQUEST, the FAILURE message isn't extensible, the only way forward it imagines is to tell you other methods that might work, and your method might work, you just don't know the ID.

So you're going to need to have this failure step function as a signal to inject the new message, a hypothetical SSH_MSG_USERAUTH_SECURITYKEY_ID_LIST or something, with one or more IDs for which the server claims to know a corresponding public key, the same way protocols like WebAuthn work.

The more I write about this, the more I'm convinced somebody could have actually built it, but since the OpenSSH people did the work to bring FIDO to SSH and I did not, I don't really have the right to criticise. For all I know they spent six months trying this approach and ended up in a cul-de-sac.

> It's not the job of the SSH server to write to authorized_keys, so it's not the job of the SSH server to write these blobs either.

Good point, you could probably build something where the ID lives in (the new type of) SSH public keys or some equivalent file, so it gets concatenated to authorized_keys and then the server just needs to know to dredge the ID out from there and send it to a client in the new message.


> The more I write about this, the more I'm convinced somebody could have actually built it, but since the OpenSSH people did the work to bring FIDO to SSH and I did not, I don't really have the right to criticise. For all I know they spent six months trying this approach and ended up in a cul-de-sac.

These were my thoughts exactly when I first read about the new feature in 8.2.

Would be really cool to learn about the internal reasoning for the solution that they ended up with.

> Good point, you could probably build something where the ID lives in (the new type of) SSH public keys or some equivalent file, so it gets concatenated to authorized_keys and then the server just needs to know to dredge the ID out from there and send it to a client in the new message.

That would open you up to being fingerprinted by the server, right? WebAuthn combats this by scoping the ID to the RP domain name, but SSH servers aren't guaranteed to have a (canonical) name.


> That would open you up to being fingerprinted by the server, right?

I actually don't think it makes this worse, if the client is coded carefully. At least, it's not clearly worse.

SSH public key authentication has a back-and-forth, the client proposes public keys for which it claims it knows the corresponding private key, and the server can choose to say "Yeah, that'll work, prove you know that one".

Filippo's fingerprinting trick (whoami.filippo.io) takes all the GitHub public keys and considers whether your SSH client claimed it can authenticate as them. But I believe it doesn't actually test that your claim was true, so it would actually be fooled if your client says it can authenticate as somebody else...

So this would be a bit different from that, but again a carefully configured client just won't admit to knowing how to do Security Key authentication to talk to some server it has never heard of, and the server needs to make a reasonable stab at guessing your ID, I think OpenSSH isn't going to allow a remote server to say "Here are 100MB of FIDO IDs, are any of these yours?"

The lack of scope restriction isn't ideal, but it feels like a parallel to the way TOFU was more or less enough for SSH in practice whereas the Web really needed the Web PKI. Normal people can probably list the SSH servers they connect to, but you connect to huge numbers of HTTPS servers in normal web use.


So you're saying that buggy SSH servers are much more likely to blow up when encountering new methods rather than new key types? That's interesting (and I don't doubt it, although I fortunately haven't had to work with non-OpenSSH servers much).


They could have added a new optional mechanism. I'm asking about github's servers specifically, not the long tail rest of the world.

Also, wait, if github isn't doing custom things on their end, how are they enforcing this rule that you need to tap once per connection?


The touch/no-touch requirement is controlled by the server (or in FIDO2 terminology, the "relying party").

OpenSSH exposes this as a new sshd option called "no-touch-required", which Github probably just does not set.
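
For your own servers, a rough sketch of what opting out of the touch requirement looks like on both ends (GitHub won't accept this, per the above):

    # client: generate a key that doesn't insist on a touch
    ssh-keygen -t ed25519-sk -O no-touch-required
    # server: the matching authorized_keys entry must opt in as well
    no-touch-required sk-ssh-ed25519@openssh.com AAAA... user@example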


Specifically, Security Keys sign a blob of data to authenticate. Most of that blob is nonsense to the Security Key. It might mean something to a big complicated web browser or your SSH client, but not the simple, and thus hopefully secure, Security Key.

But, there's a field of bitflags. The Security Key knows what those mean. One of those bitflags is "User Present" or UP, which means, "I promise I have some means to verify a human interacted with me and they did".

For U2F and WebAuthn, UP is just mandatory. So most devices you will find always set UP, even if the Relying Party doesn't ask them to. However, on some devices you could choose not to ask for UP, and a device could in that case just skip the touch step, but then it must not sign a message with the UP bitflag set.

Some of the flags are currently unused; one that's also interesting for SSH in some environments is UV, "User Verified", which means the device claims to have some way to know whether this is its real owner or just a toddler clicking the button. UV is typically set by fingerprint readers or facial recognition, or, the cheapest option, a Yubikey with a PIN can set UV if you entered your PIN.
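
As an aside, OpenSSH (8.4+, if I recall correctly) lets you demand UV at key generation time, which on a plain Yubikey means entering the FIDO2 PIN for every signature:

    ssh-keygen -t ed25519-sk -O verify-required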


Is the device explicitly not allowed to use a single touch to establish UP for a few minutes?


User presence is defined in present time. There are no FIDO police who'll burn down your factory or arrest your executives if your device doesn't do this, but removing this safeguard has (at least potentially) negative consequences.

I believe some theoretical attacks against user privacy would benefit from being able to attempt a huge number of "authentication" steps without annoying the human by making them touch the sensor. As with the Socialist Millionaire's Protocol or Magic Wormhole, we're relying on the fact that humans get annoyed fairly quickly and will just give up if it doesn't work, so attacks that require a large number of iterations cannot mechanically succeed.


Does anyone know if there is a way to have ssh just use the security key without having to have the key files on disk?

I work from several computers and VMs. It is somewhat of a pain to have to generate and register keys for each computer and VM. It would be nice if I could just generate the ssh key once on my security key and use that everywhere.


Yes, you can use gpg-agent to handle this - I've been doing it for years... Every machine does need to be set up to use gpg-agent in place of ssh-agent, which frankly is a little annoying - but once it's set up your key is entirely on the YubiKey or similar.

Here's[1] the first blog I could find that at a glance appears to provide the right instructions.

[1]: https://evilmartians.com/chronicles/stick-with-security-yubi...
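
The core of that setup is small; a sketch of the usual incantations (shell specifics vary, and the gpg-agent.conf line is the important part):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # in your shell profile: hand SSH the gpg-agent socket instead of ssh-agent's
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
    gpgconf --launch gpg-agent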


My experience is that the initial setup with gpg-agent is a massive pain, especially on Windows+WSL2 setups, but once you get it working it's great.

Well, except when gpg-agent dies out of nowhere and you have to restart it..

It's still better security even if it doesn't go quite as smoothly as I'd hope.

A tip to anyone else here who wants to do this: you want gpg-agent, and you absolutely need to use a newer build of OpenSSH than the one that ships with Windows 10 currently. Remove that and go find the official distribution on GitHub or Chocolatey so you have a version that supports this key type. Then you also need to make sure to set the environment variable GIT_SSH to point to that version of OpenSSH, or else Git for Windows will try to use its own older build and you will spend a week trying to figure out how to make it work.
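
Concretely, that last step looks something like this from Git Bash (the install path is just an example, point it at wherever the newer OpenSSH actually lives):

    # make Git for Windows shell out to the newer OpenSSH instead of its bundled one
    export GIT_SSH="/c/Program Files/OpenSSH/ssh.exe"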


Yes, it's easy, you don't need gpg-agent or anything, just do `ssh-add -K`:

https://www.stavros.io/posts/u2f-fido2-with-ssh/


And adding -O resident to the ssh-keygen command. However, it is only supported on FIDO2 keys (which are spec'd to ask for a PIN when using a resident key).


Not just FIDO2; I think you also need CTAP 2.1 for credential management/protection. E.g. my YubiKey Nano with pre-5.2.3 firmware (the version that adds credential management) fails with:

    debug1: ssh_sk_enroll: /dev/hidraw0 does not support credprot, refusing to create unprotected resident/verify-required key


Yes, your USB key needs to have resident key support. I'm looking forward to the Solo key (which is now one or two years delayed from when it was originally supposed to come out), which will apparently support thousands of resident keys.

You don't need so many for SSH, but it's great for usernameless/passwordless auth with websites.


And if you get nonsensical keygen errors even after enabling the PIN, check that you enabled it for the right embedded application:

https://bostik.iki.fi/aivoituksia/projects/yubikey-ssh.html


Does this work with existing keys, including ones generated on the device? (as distinct from 'resident' here which seems to generate on computer and copy to device, anything else special?)

Basically wondering if I can drop gpg-agent (for SSH auth purposes anyway) and 'sshcontrol' without changing key.


No, if you don't use resident keys you need the files (as the keys need to come from somewhere). The keys aren't copied to the device, they're generated on the device, more or less.


> No, if you don't use resident keys you need the files (as the keys need to come from somewhere).

Right, but I already have it on the device, I'm talking about using an already provisioned device with this new-ish OpenSSH feature; or whether it does something 'special' that means the key needs to be ('resident' or not) generated by this feature?

Thinking about it though it's probably not for me, since SSH is not the only and probably the minority of my use, since I also use the encryption key (e.g. with `pass`) and signing (e.g. git commits, for no particular reason).

> The keys aren't copied to the device, they're generated on the device, more or less.

In your blog post you say it produces two files, the private and public keys. If it generated on device it wouldn't (be able to) have the private key?


> I'm talking about using an already provisioned device with this new-ish OpenSSH feature

Talking about these things is a bit confusing because the keys do so much, but the simple answer is that you'll need to generate a new key, yes. You need to get the key from the device, at no point will you have the private key outside the device, which means you won't be able to use one you generated previously.

> In your blog post you say it produces two files, the private and public keys. If it generated on device it wouldn't (be able to) have the private key?

I may be misunderstanding your question, but the "private" key it generates is more of a reference to the one existing on the device, not a standalone private key that you get when you generate one with ssh-keygen.

The way it actually works is that these devices come with some randomness baked in, and they use that to derive all the private keys you need with various cryptographic techniques. You can never extract any of those private keys (which is where the security lies).


No. It can't work because the material you want to replicate doesn't exist in this scenario.


What material do I want/need to replicate?


The "ID" for this credential. For residential credentials, that lives on the Security Key, but without them the Security Key just mints a new random ID, plus a key pair each time it is told to enroll.

This ID is public, for practical reasons an OpenSSH client setup will store it in the same sort of place it would keep a (possibly encrypted) private key for normal public key crypto but it isn't actually private, you could for example put the file on a (HTTPS) web page you copy-paste from to configure every new device you get, if bad guys see it they don't learn how to sign in to GitHub as you.

The Security Key doesn't (without resident credentials) remember what this ID value is. On a web site, that ID value would get squirrelled away somewhere during enrollment by the site's backend together with the associated public key, maybe to a database table, and the site gives you a list of IDs after you tell it your username or whatever - when you try to use a Security Key, your Security Key can look at such a list and (using Authenticated Encryption) it can see it made this one, and from there rediscover the private key and use that to authenticate you.

But unless it is shown the ID it is clueless. No idea how to authenticate. If you've lost the ID, or maybe it's on a device you don't have access to at the moment, your Security Key can't help you.

Edited to add: Behind the scenes, the implementation at enrollment goes something like this, the Security Key mints the random keypair and then it uses its own permanent secret symmetric key (which never leaves the Security Key and is likely very hard to extract even in a lab setup) to encrypt the private key or some seed value and that encrypted value (which only this Security Key can decrypt) is used as the apparently random ID.

Oh yeah, I forgot to mention, the ID is not small. This is at least 16 bytes. And even devices that don't need to hide something inside that ID are forbidden from just using some boring counter or something that would be distinguishable, it has to look at least mostly "random" in that case (minimum 100 bits of entropy).


You do need a fairly recent version of OpenSSH though.


That's true, but it's old enough that you'll probably have it, and I think that's the only way that'll work with this Github feature, since it's U2F support they're adding.


This was exactly what I was looking for



Yep, I followed the guide here: https://github.com/drduh/YubiKey-Guide

Means I can SSH into any box that has that key on it, or push/pull repos from anywhere as long as I have a yubikey with me.

I also used https://www.jabberwocky.com/software/paperkey/ to make a backup, and have that and a spare yubikey stored elsewhere securely.

(You can create the key on the yubikey directly, but then I think it's not possible to have a backup yubikey)


Some security keys implement multiple standards, others don't.

A $25 blue Yubikey only supports U2F, which requires at least OpenSSH 8.2 on the host you're connecting to. U2F is the standard used for logging into websites with a security key, and there are many vendors other than Yubico.

A $45 black Yubikey 5 also supports OpenPGP and Smart Card/PIV which can do SSH without server updates (with a bit of software installation on your end).


Even though I have and use a Yubikey 5, the whole thing still feels a bit like a black box to me. I still have not found a good overview of the different standards, how they differ, and the many options in the Yubikey software.

Also, I kind of get why you can't make a backup key. But in the end, if the key is lost, you mostly just fall back to another password again. For me, I would much prefer to be able to create a second key somehow.


I felt the same at first when I received mine. And since the information is quite scattered across Yubico's documentation, blog and external websites, it was very time-consuming to figure everything out... It also seems that a lot has evolved in this area, so it is not easy to know what is still useful or the best way to do things. When I get the time, I want to put up a webpage that explains the main state-of-the-art usages of the Yubikey, with step-by-step instructions for the major platforms.


Would love to see such a website. Yubico has step-by-step instructions for most platforms, but I really miss real documentation on what happens in the background: what standards exist, why, etc.


* encouragement to post it on HN *


You can create a key and load it into the Yubikey (and thus backup the key) rather than have the yubikey create the key on itself directly.


You can't do this with the key underlying FIDO (and thus U2F/WebAuthn and the SSH feature the linked article is discussing).

Yubico supply a tool that lets you perform a "factory reset", giving you a random key (effectively: instantly invalidating all credentials you previously used), but not one that would let you pick the key. IIRC the relevant standard is clear that manufacturers should not offer this capability, as it's obvious how it would get exploited.


Besides all the suggestions for GnuPG/OpenPGP-based SSH keys, there's also "-O resident" for FIDO2-based keys.

This also stores the private key on the security key itself, removing the need for the key handle file.

For more details, see the release notes for FIDO2 on OpenSSH: https://www.openssh.com/txt/release-8.2
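
A minimal sketch of the resident-key flow (assuming a FIDO2 key and OpenSSH 8.2+ everywhere):

    ssh-keygen -t ed25519-sk -O resident   # key handle is stored on the device
    # later, on any machine the key is plugged into:
    ssh-keygen -K                          # write the handles back out as key files, or
    ssh-add -K                             # load them straight into the running agent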


Yes; See the tutorial on Resident keys here:

https://www.yubico.com/blog/github-now-supports-ssh-security...

With resident keys you can load the ssh key description from the key directly into the ssh-agent.

    $ ssh-keygen -t ecdsa-sk -O resident
    $ ssh-add -K
    $ ssh git@github.com


I have my private key in a Keepass database, and with a keypress Keepass(XC) loads it into my agent. After a time that you specify, it unloads it automatically.

I use a U2F key now, and that key I just have in ~/.ssh. It's useless without my U2F fob anyway. (Right?)


Yes, you are correct that it is useless without your physical Security Key.

Technically what's happening is that the valuable private key is in a sense in that file you're not worried about. But, you're safe not worrying about it, because it was encrypted by the Security Key, and only the Security Key knows how to decrypt it, so even though in one sense it's the private key and very important, because it's encrypted it's not a big deal if anybody learns it, as they couldn't possibly decrypt it. The contents of the file are given back to the Security Key when you use SSH to connect to (say) GitHub and in fact it will decrypt them to discover your private key, then use it, and then forget it again immediately, but it could (hypothetically) instead store a library of all credentials and use the random contents of the file to just look up the right credentials in the huge library. That would cost $$$ and Security Keys are (relatively) cheap.

It's OK to completely forget that technical description, the designers of Security Keys specifically intended that you needn't care how the magic is done, I have explained it only to reassure anyone puzzling how this could possibly work.



Do you mean the TPM hardware Key?


This looks like an SSH client feature that is invisible to the server. Does the server operator (GitHub) even have to do anything?


They need to support that kind of key, i.e. you can't use it to log in to your own servers unless they are running OpenSSH 8.2+.


I use Secretive to authenticate to SSH (including GitHub) with my Mac’s TouchID. Feels like this is (almost) as secure, while being more convenient (no extra hardware required). Of course it is a Mac-only solution.


Other systems allow something similar using TPM. Both Linux and Windows can use a hardware non-exportable key with SSH that way.

It doesn't allow for the touch verification on use, but it's still better than having a key file on the disk. The security scale (weakest first) would be:

    - key in a file
    - TPM
    - touchid
    - hardware gpg/SSH key
    - hardware u2f key


Are keys in the TPM really securely stored and accessed? Can I trust them to only be used by myself, e.g. by providing a password or similar? I also think TouchID should rank even lower, because it's likely somewhat easy to forge.


TPM keys are not accessible once imported or generated. You only get access to operations like encrypt/decrypt on them. Those are also normally accessible only by the root user, so you use some kind of proxy with an extra step (like a password) to make them available to the user.

I'm not sure what you mean by touchid being easy to forge.


So that does mean that once I have access to the computer, the keys are somewhat free to use. Thus I am authenticating the device, not the user, which is kind of an important distinction...

IMHO this is not as secure as storing the key encrypted on a drive where it's only accessible using a passphrase. It is just harder, or maybe even impossible, to exfiltrate the key, but you really don't need to do that in order to use it. I might be wrong though.

My remark about TouchID came from the fact that fingerprint scanning is inherently insecure: it is trivial to fool most devices into thinking you have a matching fingerprint using something that was touched and left a mark, plus some glue.


> thus i am authenticating the device not the user which is kind of an important distinction..

It is very unlikely that the actual threats you face make this "an important distinction". If you don't have 24/7 bodyguards, it's time to be realistic with yourself that in fact "I got drunk and clicked something dumb" or "That wasn't a real email from FedEx, what was I thinking?" are much bigger threats than "A covert team picked the lock on my condo and then while I was asleep they modified my MacBook to help them break into my GitHub account".

> fingerprint scanning is inherently insecure because it is trivial to fool most devices into thinking you got a matching fingerprint

Because biometric security in these scenarios is local the bad guy needs to steal the device first. Again, it is very unlikely your threats look like that. Real crooks who steal devices like MacBooks or iPhones will sell them to some dodgy bloke not try to impersonate you and break into your GitHub account before you invalidate the keys.


No, ninjas won't be breaking into your home to install malware on your devices, but the maids at the hotels you stay at might! That is known as the Evil Maid threat. Yes, hotels tend to have safes in the room that will fit smallish laptops but those can be reset by... the hotel anyways.

This is also a problem at work where you might leave your laptop on your desk at your cubicle.


These are valid points, and of course everything you do to improve your security posture should be related to your real threat model. However, OP was implying that a TPM might be more secure than an encrypted key on a disk, which I disagree with for the reasons mentioned.


TPMs are... extremely flexible. It's hard to summarize what they can do, but I'll try:

- a TPM is a crypto co-processor with HSM-like functionality (e.g., key wrapping).

- TPMs can be and are used to secure the boot process by having the CPU/ME do a core root of trust measurement of the BIOS, and then the BIOS can do a static root of trust measurement of all the option ROMs and such and the next boot stage for the OS, and the OS can continue the measurement process, and if you're willing to you can make sure no code is ever executed that isn't "trusted".

- TPMs have very flexible authorization policies, so you can make sure that use of some key requires multi-factor authentication (biometrics, smartcard, password) or multi-user authentication (two or more users have to authenticate), time of day restrictions, approved root of trust measurements, etc.

- TPMs can be used to attest root of trust measurements, authentication status, etc.

I recently contributed some tutorials[0] to TPM.dev[1].

  [0] https://github.com/tpm2dev/tpm.dev.tutorials
  [1] https://developers.tpm.dev/posts/14297688


> so that does mean once i have access to the computer the keys are somewhat free to use. thus i am authenticating the device not the user which is kind of an important distinction...

Isn't that the case with a U2F key? Since you observe how easy it is to fool a fingerprint scanner, is there any technology that enables you to authenticate a user at all? Password managers make passwords effectively an authentication of a device, not a person, in any case when the password is improbable enough to be secure. Maybe iris scanners work for you?

Really, all we can do is make it harder for people to steal an authentication device (gmail password, ~/.ssh/id_rsa, tpm module in laptop, yubikey, whatever) and easier for people to understand in their gut that they need to protect this from theft. Am I wrong?


I think technically you are correct, but in the case of a U2F key I am not authenticating the machine directly; I'm checking whether the person in front of the screen is in possession of the token. That is different from assuring that the device they are on is legit.


> once i have access to the computer the keys are somewhat free to use

Once you have access and either root or pass whatever extra authentication the access requires.

> storing the key encrypted on a drive where its only accessible using a passphrase

The extra failure mode here is that someone can copy the key and crack your password offline on their own time.

> using something that was touched and left a mark and some glue.

At that point a camera catching your password should also be a potential threat. But yeah, if that's something you're realistically worried about then it's not a great solution for you.


> Once you have access and either root or pass whatever extra authentication the access requires.

How hard would it be to boot a signed kernel with a system ready to give you all the access you want or need?

> The extra failure mode here is that someone can copy the key and crack your password offline on their own time.

but what I was arguing was that while copying your encrypted key still leaves me cracking the passphrase, with a TPM I could likely use the key right away.

> At that point a camera catching your password should also be a potential threat. But yeah, if that's something you're realistically worried about then it's not a great solution for you.

I feel like everybody should see this as a threat. Don't you look out for someone snooping on your typing when you enter passphrases? I certainly do, and I avoid it while being watched, especially in public places where cameras are more likely.


> How hard would it be to boot a signed kernel with a system ready to give you all the access you want or need?

Depends on the config. From trivial (old BIOS, no protection) to ~impossible (locked uefi doing secure boot).

> once i could copy your encrypted key i could likely use it right away with TPM

You cannot copy the key once it's moved into the TPM chip.

> Don't you look out for someone snooping on your typing when you enter passphrases?

No. I'm not secure from targeted snooping anyway, but I'm not typing any non-local passwords usable by people reviewing old camera footage - in most cases access is behind 2FA.


> Depends on the config. From trivial (old BIOS, no protection) to ~impossible (locked uefi doing secure boot).

So locked means I cannot boot any other system until it's unlocked? How do I do this? Is there another passphrase involved? Do I have to reset the chip before anything else is booted? I am probably not that well informed on this topic.

> You cannot copy the key once it's moved into the TPM chip.

Obviously not! But I can just use it right away without figuring out your password. That is assuming I can fool the system into trusting me. It requires the same level of access, but in the case of the passphrase I am blocked until I've solved that; if I can use your key immediately to make my own keys trusted, I am already done.


> so locked means i can not boot any other system until its unlocked?

https://linuxhint.com/secure-boot-linux/ / https://wiki.ubuntu.com/UEFI/SecureBoot

UEFI with your own keys in TPM (same tech involved) allows you to run only your signed bootloader and kernel. So no external media, replacing the disk, or tricks like init=/bin/sh will work. You can only boot into the signed system. From there you have the usual user account controls.


Does the Mac store the fingerprint, or does only the Mac's hardware have access to it? In other words, if your Mac breaks down, does a backup save you, or do you also need to have some alternative way to authenticate?


You can't back it up. For SSH keys for e.g. GitHub this is fine; you just configure a new one, as you don't need the SSH key to log in (e.g. I have a password and multiple U2F tokens). For SSH to, say, servers, you might want another key set up: you could have an offline software key, or multiple computers.


From Apple's support page [1] (Secure Enclave section near the bottom):

> The chip in your device includes an advanced security architecture called the Secure Enclave, which was developed to protect your passcode and fingerprint data. Touch ID doesn't store any images of your fingerprint, and instead relies only on a mathematical representation. It isn't possible for someone to reverse engineer your actual fingerprint image from this stored data.

It's pretty easy to make use of Touch ID for `sudo` authentication, for instance [2].

[1]: https://support.apple.com/en-us/HT204587

[2]: https://news.ycombinator.com/item?id=26302139
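
The sudo trick, for reference, is a one-line PAM change (macOS updates tend to revert it, so you may have to re-add it):

    # /etc/pam.d/sudo -- add near the top
    auth       sufficient     pam_tid.so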


I just made a suggestion to dasKeyboard that they put a place to dock your Yubikey on top of their keyboards, since people will be touching them much more often now.


By "place to dock", you mean like a USB port or something else?

Lots of keyboards obviously have that already - including at least some of the Das Keyboards (mine does...)


Yes, but not on the back like they have (on some models), but on the top. If your key is in one of the rear USB sockets, when you press down to activate it, it will stress the port (it acts like a lever).

I was thinking on the top, parallel with the surface. Lay the key on the keyboard, slide it to the left to insert into the socket. When you press on the key, the force gets transferred through the key onto the keyboard, and so no flex or stress on the socket.


I'm going to ask a naive but honest question: what is the security benefit of a piece of "key-hardware" over a passphrase protected key stored on a "normal" storage device?

I understand that for someone that moves between multiple physical machines it may be a benefit of convenience, but what threat vector does a physical key eliminate that exists for passphrased "normal" SSH keys?


Malware shouldn't be able to silently copy the key. Usually you have to touch the device to confirm usage.

Even if the malware got your one time password when you thought you confirmed a legitimate operation, the attacker wouldn't get persistent access.

Edit: Also, most people don't use them, so some attackers won't bother with more complex target.


The first difference can be negated by using an unplugged storage device to store the key.

The second one is trickier, since you need to do processing to prove that you have a private key, so you'd have to send the key to your own computer, which defeats the whole point. You could use a separate "key" computer that owns the keys, and that computer is the one that proves it has the private key, but at that point you'd be better off using a normal security key, since it's basically the same thing.


Unless I am missing something, security keys never send the private key anywhere so the computer can not access it. Unlike an unplugged storage device which can be read by malware once plugged in, and need to be read into memory by the computer for use.


You can't get the private bits out from the hardware, it always stays on the hardware, never leaves it.

As for the simple storage device, you can just copy it to another one.

Also as others mentioned, the private bits are not exposed to the operating system, so no process can access it.


Your software based key is exposed unencrypted to the host OS and any software running on it. It likely sits in memory unencrypted for some period of time. Malware that steals your key can "clone" it.

A dedicated hardware device never exposes the key to the OS itself - if your host gets compromised, power it off, re-image or replace it, and your keys are still protected - malware on the host didn't steal the underlying key. It might have pre-signed some attestations or similar (hopefully your token requires hardware user input like pushing a button to use it), but it won't have had any access to your key, so can't "clone" it.


SSH secret key file exfiltration by running "npm install" or the like is a concern, and by using YubiKeys this type of attack is eliminated.

You can also enforce usage of YubiKeys, but you can't really enforce every developer sets a passphrase on their locally generated SSH key file.

And it's a convenient and consistent way of authenticating: your work Google Workspace account uses and enforces a YubiKey, your AWS account login uses and enforces a YubiKey, and now your GitHub account also uses (but cannot yet be made to enforce, AFAIK) a YubiKey. It's less hassle than using one-time codes with fifty-seven different apps and cloud environments, so there's not much user push back.


One obvious thing would be keyloggers?


Maybe I'm missing something, but I have been using Yubikeys (or any PGP key, for that matter) for SSH and git for many years already through `gpg-agent --enable-ssh-support`.


PATs are still superior to SSH keys because 1) PATs allow more fine-grained access control 2) HTTPS is more widely available through firewalls and proxies 3) SSH without certificates is more vulnerable to MITM than HTTPS (with certificates) 4) it's easier to specify different PATs for different remote repos than it is different SSH config aliases for different remote repos. Yet people still prefer SSH because........ ?

And since the SSH private key used in this way is merely a pointer to a security token, it doesn't matter if it gets compromised. This is an interesting way to continue the status quo of not explaining to users how to create SSH keys using the new SSH key format (which prevents password cracking vulnerabilities in the old key format).


> 3) SSH without certificates is more vulnerable to MITM than HTTPS (with certificates)

Why would you claim that? As long as any CA with its root certificate in your trust store can vouch for any identity, I would argue that TOFU (where you verify the remote resource's key is in control of the identity you intend to connect with) is less prone to having your connection MITM'd, since the attacker would have to either break the underlying cryptography, or exfiltrate the remote's private key.

With the CA system in place, you "just" need to get another CA to sign for the identity you would like to spoof. Not saying it's easy, but for the proverbial "nation state"-level attacker (maybe with their own CA root already in your trust store :)), it's certainly easier to do that than to break RSA or EC.


> Why would you claim that?

Because it's a fact?

> TOFU is less prone to having your connection MITM'd

Not if you wait for the first SSH connection. Or a key is rotated. Or a new server with new host keys is added. Or the user disables their own strict host key checking. MITM is trivial in these cases. There's tools for it that any script kiddie can use.

They've already connected once? That's fine. Drop their connections and force them to try a connection to a new host, new IP, or from a new machine or client. In any case it'll be a new "TOFU" use case, the user will accept the new host keys, and never verify the signature. Or just try MITM at any time and see if the user has already disabled strict host key checking, as many online guides instruct, or if the user thinks their remote host just has new host keys that need to be replaced.

HTTPS does not have these vulnerabilities, by design, because of PKI. Your connection is always authenticated against a trusted key store, you don't ever need to validate a signature by hand, and it's not common for people to turn off validation. In addition (if Git actually implemented this, and I'm not sure why it doesn't), it could use HPKP and HSTS to ensure that a specific key for a specific site is always validated, which would eliminate the different vulnerability you mention for trusted CA certs.

So for default connections where SSH does not use Certificates (basically all SSH use cases) HTTPS is more secure by default, and supports more ways to prevent MITM.


> Why would you claim that? As long as any CA with its root certificate in your trust store can vouch for any identity, I would argue that TOFU (where you verify the remote resource's key is in control of the identity you intend to connect with) is less prone to having your connection MITM'd, since the attacker would have to either break the underlying cryptography, or exfiltrate the remote's private key.

There are about 170 root CAs, and not all of them are trusted for every domain. There's also things like certificate transparency that make the kind of attack you're talking about much harder. TOFU protects you against a one-off MITM but not against an ongoing attack (and the fact that host keys routinely change for non-attack reasons means many users don't treat a host key mismatch as especially serious).


I’m surprised that there’s no standard way to get a server SSH public key via HTTPS. I have to google for the gitlab key fingerprint and the expected result is not easily found, just to ensure that my first SSH connection is secure. Something like /.well-known/ssh/ed25519.pub would suffice, I guess.


There is an SSHFP DNS record for this purpose, which should be just as secure with DNSSEC enabled.

https://en.wikipedia.org/wiki/SSHFP_record
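
A sketch of how that looks in practice (git.example.com is a placeholder; the client side needs a DNSSEC-validating resolver for this to be meaningful):

    # on the server: emit SSHFP records for its host keys, ready to paste into the zone
    ssh-keygen -r git.example.com
    # on the client: have ssh check host keys against DNS
    ssh -o VerifyHostKeyDNS=yes git@git.example.com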


Well if you could do that, you would just use TLS. The public-key crypto used by SSH is a relic of when it was ridiculously expensive to get an SSL certificate for each of your routers and backend servers, but people still needed something more secure than Telnet.



