Secretive – macOS native app to store SSH keys in the Secure Enclave (github.com)
416 points by guessmyname 9 days ago | 106 comments





For anyone not aware, you can use macOS's keychain to store SSH key passphrases and have them unlock at login. This way you get the benefits and convenience of a password manager on the command line for your SSH keys.

https://apple.stackexchange.com/questions/48502/how-can-i-pe...


Does it have touch to authorise (doesn't seem to support that), or is it just going to send on all of one's currently-loaded SSH keys whenever one connects with -A (seems to)?

> You can configure your key so that they require Touch ID (or Watch) authentication before they're accessed.

That, to me, would be a key thing to want to have: something that tells me "hey, Terminal just wanted to access your Github key. Is that okay?"

If I'm git pushing, that's fine. If I just connected to a random server... that's not okay. What is that trying to do? Deny.


1. With public key authentication, you don't send your private key. Your public key, which is stored on the server, is used to verify a proof that you hold the corresponding private key (the client signs a challenge; the key itself is never transmitted). Attempting to log in to a server, whether it holds zero or more of your public keys, poses no risk of disclosing the key.

2. You can control which private keys are used for which remote server using .ssh/config (see the sketch after this list). The ssh_config man page has the details.

3. There is a risk with ssh-agent key forwarding: while you are connected to a server with forwarding enabled, a superuser on that server can use your forwarded agent to log in to other hosts as you. This risk can be minimized by only enabling agent forwarding for hosts you trust and by limiting which keys are available to each host.
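As an illustration of point 2, a ~/.ssh/config along these lines (hostnames and key file names are made-up placeholders) pins one key per destination and stops the client from offering everything else it has loaded:

    # ~/.ssh/config -- illustrative sketch; hostnames and filenames are invented
    Host github.com
      IdentityFile ~/.ssh/id_ed25519_github
      IdentitiesOnly yes

    Host *.internal.example.com
      IdentityFile ~/.ssh/id_ed25519_work
      IdentitiesOnly yes
      ForwardAgent no    # only enable -A per host, and only where you trust root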


After researching this for a while, it seems there is no documented, native option to do this. The only option is to unlock all SSH keys all the time, which makes them less secure than the passwords for websites managed by the exact same keychain. Which, in my opinion, is weird.

Do the employees at Apple use a different system altogether? Because the built-in one doesn't seem very secure. Or maybe I am using it wrong, who knows.

In a similar vein, is there an exhaustive manual for macOS? It bugs me that Apple machines cost a small fortune, the OS is full of nifty features, but there is no non-superficial manual shipped with it.


> which makes them less secure than the passwords for websites managed by the exact same keychain

That's NOT true. While giving out less information to untrusted parties is obviously better than more, the private key itself is not transmitted directly to the server. This means that connecting to an attacker's SSH server doesn't give them a copy of your private key, so they can't then connect to your SSH servers.


The scenario here is with -A (agent forwarding). As long as you're connected to a server with that enabled, that server can authenticate as you to a bunch of other things by silently asking your local agent to do it.

In regards to the ssh-agent stuff (and a lot of the other CLI tricks), the man pages usually document them. They may lag a bit for new features, but most of them will have a man page. Barring that they’ll usually print a help message.

One command off the top of my head that doesn't really follow this is the undocumented/internal `airport` command. It has two different help messages depending on how it's called, and it's tucked away inside a private framework.
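For reference, this is where it has lived for years (to the best of my knowledge; verify the path on your macOS version before relying on it):

    # Undocumented Wi-Fi utility hidden inside a private framework
    /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I
    # -I prints info about the current connection; -s scans for nearby networks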


You used to be able to have multiple keychain files, so you could create another one that doesn't unlock at login and keep it for the more sensitive stuff?

Ya I’ve done this before so I know it’s possible. The Mac keychain will even auto-lock the chains it can lock

There is a native way in .ssh/config. I just added a reply to a different post.

That only restricts the blast radius to one key.

One is unaware of when, how, and for what purposes that key is used, as forwarding the agent means it's available for use by any process running as your user (the forwarding socket is user-owned) or as root (root can see everything).

Touch-to-authorize helps mitigate that.

If one sees the prompt come up when they've just performed a git pull, it's expected and likely non-malicious. Allow.

If it pops up after running "ls", or "randomly" in the course of a session - what's going on? Deny.


You are correct. Touch to authorize is lower risk. I don’t know a way to do that natively. Sorry if I misunderstood.

To be clear, that would be a privacy issue (a malicious server could tell what keys you have), but wouldn't allow a malicious server to log in to anything else with your keys. You don't send the private key when you log in.

Note that GP said -A -- this means the agent gets forwarded, and processes on the malicious server can ask the agent to perform authentication operations.

Touch to auth means the agent (or hardware token) asks the user to confirm they are expecting an authentication request to come in.

This allows you to forward your agent to a host and have slightly more protection against malicious processes on the host using your key.


Github shouldn't ever make any use of -A should it?

Nope. But if you -A to a malicious server it could use your key to push to github.

> random server

use this:

    Host *
      IdentitiesOnly yes

My understanding is that -A is discouraged in the first place, especially when connecting to an untrusted server. But it just seems like a bad idea to ever let a private key leave your computer. Instead, you should generate a key on the new box and authorize that one somehow (the lowest-friction way would be to use an SSH CA to sign the key and then have the servers you want to log into trust only public keys that have been signed by the CA's key).
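Roughly, the CA workflow looks like this (a sketch; the key names, principal and validity window are invented for illustration):

    # One-time: create the CA key pair on a well-protected machine
    ssh-keygen -t ed25519 -f ssh_user_ca -C "example user CA"

    # Sign the public key that was generated on the new box
    ssh-keygen -s ssh_user_ca -I alice@newbox -n alice -V +12w alice_key.pub

    # On each server, in sshd_config, trust anything signed by the CA:
    #   TrustedUserCAKeys /etc/ssh/ssh_user_ca.pub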

-A does not forward your keys to the remote server itself; it "just" lets the remote server make requests to your local SSH agent to sign things. But that's still sufficient to allow the remote server to sign in to things as you without your authorization, so it's probably not the best idea to sign in to servers you don't trust with -A.

If you're using -A to log into other machines behind the SSH server (really, the only reason one would use -A), there are now better mechanisms to do that. ProxyJump if the server supports it; port forwarding or ProxyCommand if it doesn't.
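In config form that looks something like the below (host names are placeholders); the jump host never sees your agent or key material:

    # ~/.ssh/config -- reach an internal host through a bastion without -A
    Host internal-box
      HostName internal.example.com
      ProxyJump bastion.example.com
      # Older clients without ProxyJump can use:
      # ProxyCommand ssh -W %h:%p bastion.example.com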


For anyone interested in a standardised/Linux-compatible version of this, check out PKCS11 tokens. Smartcards can and do implement this spec, and if you use PKCS11, the secret on the token is used to sign an SSH login (for example) without ever being revealed.

This means the secret itself stays on the card. You can combine this with certificates if needed; the smartcard handles the authentication. Using PKCS11 tokens is supported in recent openssh versions. You can also use them for client certificate authentication in web browsers. For anyone familiar with DoD CACs, this is similar - I believe they use a particular card with a proprietary PKCS11 driver, but you can use opensc with any card running a PKCS11 applet.

Some refs:

https://zerowidthjoiner.net/2019/01/12/using-ssh-public-key-...

https://developers.yubico.com/PIV/Guides/SSH_with_PIV_and_PK...
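The OpenSSH side is basically just a provider path (a sketch; the module path below is a common opensc location on Linux and will differ on macOS/Homebrew):

    # List the public keys held on the token
    ssh-keygen -D /usr/lib/opensc-pkcs11.so

    # Use the token for a login; signing happens on the card, the key never leaves it
    ssh -I /usr/lib/opensc-pkcs11.so user@host.example.com

    # Or persistently, in ~/.ssh/config:
    #   PKCS11Provider /usr/lib/opensc-pkcs11.so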


You can also do it with PGP smartcards like the open-source NitroKey [0].

For a more analogous equivalent though, there's the tpm2-pkcs11 project [1] that uses anything conforming to the TCG TPM2 spec [2], which includes a surprisingly wide variety of hardware, including many motherboards. My XPS 13 for example has one.

What I'd really like to see though is something that uses the TPM for host verification rather than client verification. I'd love to be able to ensure that the only way for a machine to give me the host key I expect is for that machine to be hardware I own running verified software (e.g. verified by Secure Boot with a custom key).

[0]: https://www.nitrokey.com/

[1]: https://github.com/tpm2-software/tpm2-pkcs11/blob/master/doc...

[2]: https://trustedcomputinggroup.org/resource/tpm-library-speci...


Indeed, PGP smartcards should work too, as should cards using the open-source ISOApplet.

I also played around with tpm2-pkcs11 last year and it worked nicely, and has support on a lot of devices like the XPS (which works really well on Linux!)

Your idea of using the TPM for host verification makes a lot of sense to me. I'm playing around with this for a side project right now, and a TPM-backed key bound to the right PCRs should give you assurance that Secure Boot is enabled with your own custom keys, and that it loaded your signed GRUB, booting your signed kernel.

Got all the Secure Boot stuff working; now I need to figure out what to use the TPM for. Options are remote attestation, where the server verifies the attestation signature (but this wouldn't be particularly standardised), network-level authentication (actually easier - IIRC NetworkManager supports 802.1x authentication using PKCS11, so you can use tpm2-pkcs11 for that), or disk encryption.

Assuming you mean for the remote-end host key, and assuming your server has a TPM available, I reckon that could be quite interesting, but it doesn't look like openssh supports using PKCS11 for HostKey access. Would need to see if the key is used often, or if it's just used to establish connections (since PKCS11 crypto is usually pretty slow, but fine for one-off authentications).

As a closing note for time-travellers from the future etc, it's worth remembering the TPM is far from perfect, and there are quite a few nice attacks if you can "sniff" the serial lines between the motherboard and the TPM itself. And if you're using the fTPM (firmware TPM), the regular Intel SGX/TXE holes will likely compromise the TPM's security properties.


This works OK and is the best option at the moment, but as more systems upgrade to newer versions I think the FIDO/U2F support is probably going to take over. It's nice to not have anything key-specific, any initialization steps, and so on.

Edit: Maybe OpenSSH does offer resident keys. On the one hand their release notes say they do but on the other hand I was 100% sure somebody who should know insisted they didn't. A trawl of my records cannot find such a communication so perhaps I dreamt it. If resident keys are an option then you need to make sure to buy FIDO2 authenticators and to explicitly tell OpenSSH you want resident keys.

The current SSH FIDO behaviour doesn't do resident keys AFAIK. The OpenSSH team has discussed it, but not as I understand it written an implementation.

In a PIV setup, Jim, Sarah and Gary can each have a device (say a YubiKey) with their own private key inside it. Both their workstations and shared desktop/interactive servers can use these keys when they're plugged in. So when Jim is sitting at his workstation, Jim's device uses Jim's private key to authenticate Jim over SSH. If Sarah wants to use a machine she's never previously used, but it was set up for Jim and Gary, her device, Sarah's device, just works to authenticate as Sarah over SSH. Simple and easy to think about.

With FIDO non-resident keys, a similar team (Beth, Larry and Zoe) have personal FIDO authenticators, let's say Titan keys. They each set up their personal laptop to require their personal FIDO authenticator; both elements are needed to authenticate. Beth's laptop plus Beth's Titan key is enough to authenticate via SSH. But if Beth is at a workstation she's never used before, she can't use her Titan key to authenticate; it has no idea how to help on its own. If Beth sets that workstation up (a multi-step process), this isn't re-usable: Zoe can't use her Titan key without also going through that enrolment process.

The reason why is a mixture of differences between typical Web authentication journeys and SSH, and the deliberate (privacy preserving) feature paucity of FIDO Security Keys when used without resident keys.

The FIDO authenticator genuinely has no idea what your private key is (unlike with resident keys, such as Apple's recent announcement). It first needs to be presented with an ID, a large random-looking byte string. It has in fact encrypted your private key with an AEAD cipher, using a secret symmetric key baked inside it, and used that ciphertext as the ID. So when the ID comes back, it decrypts it, the AEAD check passes, and it briefly has the private key and can use it to authenticate you before forgetting it again.

On the Web, in this case (again, not the resident case like Apple's cool Safari feature), the ID is delivered by a remote server to your web browser when you enter a username, and the browser passes it to the FIDO authenticator. But SSH authentication doesn't work that way.

So, OpenSSH stores the ID locally on a computer. It isn't secret, so it's not a huge deal if someone somehow steals it. But without that ID the authenticator can't do its job.

Maybe you could arrange to synchronise the IDs across machines in an organisation, but AFAIK nothing exists to sort that out today. So without either resident credentials (which need a more expensive FIDO2 authenticator) or some further synchronisation framework, PIV is easier today if you want employees to authenticate from machines that aren't "their" personal machine.
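For anyone experimenting, this is roughly how the two modes look with recent OpenSSH (8.2+); treat it as a sketch, check your ssh-keygen man page, and note that resident credentials need a FIDO2-capable authenticator:

    # Non-resident: the "key handle" ID lives in the local file; the token alone can't help
    ssh-keygen -t ecdsa-sk -f ~/.ssh/id_ecdsa_sk

    # Resident: the credential is retrievable from the authenticator itself
    ssh-keygen -t ecdsa-sk -O resident -f ~/.ssh/id_ecdsa_sk

    # On a machine that has never seen this token, pull the resident credentials down
    ssh-keygen -K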


The 8.3 release notes cover resident key support with compatible hardware. I have older hardware, but I will still likely switch to my U2F key being primary as soon as most cloud services support them. Making sure the right library can load with ssh-agent on NixOS (and having a GPG applet/key for devices that don't work with opensc) is less convenient for me than just having separate key identities at work and at home.

Cool, but the enclave only supports the secp256r1 curve, whose coefficients are NSA-chosen.

While I don't think that this is relevant for my personal threat scenarios, I think it is unfortunate, at the least, and Apple should change it with new hardware.

Ya problem is it’s all NIST “approved” and US companies love that because it means the government is happy to use their products.

I've been using a YubiKey for SSH via yubikey-agent [0] and it's been great, since keys can't be extracted from the hardware and the device is carried with me. The agent is also integrated with pinentry, so it requires a PIN code for the session.

[0] https://github.com/FiloSottile/yubikey-agent


You don't even need the agent with the latest SSH versions, SSH can now use U2F natively, which is wonderful.

Oh cool, didn't know that. That's awesome.

Does this mean that the Secure Enclave is accessible by the user? If so, it prompts so many questions. How much disk space is available on the Enclave, for example.

You don't actually need to store Secure Enclave protected data in the enclave. You can "wrap" it with the enclave's key and store it on your own disk.

Unwrapping it puts the private key into memory, from which it can be extracted (it's hard, of course, but looking at the state of Intel security, everything is possible). The Secure Enclave is supposed to sign with a non-extractable private key without ever putting it into RAM.

This is not what it does though. It says the private key can't be exported, and this wouldn't be the case if it was just a regular SSH keyfile encrypted using the Enclave's key.

You can ask the Secure Enclave or a smart card to perform operations using a key pair which is generated on and never leaves the device. In the case of SSH, you can use that directly as long as your SSH client and server support that algorithm: the client advertises the key fingerprint and just passes the server’s challenge through without modification. Apple has had native support built-in for using PIV/CAC tokens for years.

For storing arbitrarily sized things, you'd typically use that key pair to wrap (encrypt) an AES key used to encrypt the actual file, so your device only needs to store the comparatively small keys.


There is an API to store custom keys in the Secure Enclave. It only supports 256-bit ECC keys and the private keys cannot be read or written, only operated on by the Enclave itself.

There is no official storage limit, it seems, but one Stack Overflow answer put the number at around 400. Interestingly, there appears to be no quota system for preventing one app from using them all.


Totally a tangent, but it's interesting that we still refer to things as "disk space" even in the era of flash storage

I know! But I wanted to write something very clear, and I chose the clearest word in my mind to convey my idea. That word is steeped in old tech, it so happens.

Oh and by the way, I was shopping for a cheap laptop for a non-profit this week, and I still had to be careful to pick a laptop with an SSD. Those spinning horrors are still jammed into laptop cases to this day.


Many words are like this. Music albums haven’t been actual albums for decades. Now we barely remember how a collection of songs was distributed as multiple shellac 78 RPM records stored in an album.

That's how language works. We call people "engineers" who have never and will never work on any engines.

"engineer" comes from ingenium (mind, talent) in Latin.

In most Latin languages, "engineer" is a completely different word from "engine" (which is usually "motor").


engineer

/ɛndʒɪˈnɪə/

Origin:

"Middle English (denoting a designer and constructor of fortifications and weapons; formerly also as ingineer ): in early use from Old French engigneor, from medieval Latin ingeniator, from ingeniare ‘contrive, devise’, from Latin ingenium (see engine); in later use from French ingénieur or Italian ingegnere, also based on Latin ingenium, with the ending influenced by -eer."


This is also where the term 'civil engineer' comes from [1]. It was civilian-focused "engineering" of fortifications like bridges or roads. I've wondered for a while if there was any controversy around the first use of the term "electrical engineer" or "chemical engineer" like how there was with software engineering not being considered "real engineering" around 10 years ago, but I can't really find any information on that.

[1] https://www.britannica.com/technology/civil-engineering


I wonder what a better term for disk storage would be. Block storage, in the way that cloud services refer to it? Just plain "storage"?

In the old Multics papers it was just long-term storage, as there was no difference between segments in memory and on disk. Which we are finally starting to catch up with, 50 years later.

Non Volatile Memory

Volatile Memory gets “erased” when powered off


Wait until people confuse that with NVRAM.

Accurate, and in usage in technical circles already. But I've always found it awkward, and names based on what something is not are always unsettling to me.

Then just Memory. That’s what elderly and nontechies usually call it. And it’s correct. Even more correct with always-on-computing and automatic state saving

"Stable storage", perhaps; I seem to remember the SMTP RFC uses that name.

"Drive" is probably the best option

Not sure about how much is actually available but you can generate about 1000 keypairs before it starts to throw errors at you

thanks for not saying ‘begs’.

no, the secure enclave is not available to the user as such. a few crypto primitives are exported. that’s what this uses. there have been any number of such “agents” over the last few years.


A similar product, Krypton, works on Macs without a Secure Enclave by storing keys in the Secure Enclave of your iPhone.

https://krypt.co/developers/


Been using this since the beginning and it's great. However, there have been no updates since they were acquired by Akamai. A bit worried it'll suddenly stop working one day...

Ouch! I am a happy user too. But Akamai is all about the enterprise, it seems, and most likely doesn't care about us common folks. We can only hope that they will allow the former krypt.co developers to spend time maintaining it, if it becomes necessary.

Thankfully (I believe) everything that goes into it is open source: https://github.com/kryptco

The code may be open source, but not everyone can compile and install the iOS app easily, or have the required infrastructure that allows the push notification to happen.

The github repo seems to be updated only to version 2.5.6 (according to commit message) or 2.5.7 (according to a tag), whereas the latest version on the app store is 2.6.0.

But also, the LICENSE file simply says “All Rights Reserved”.


From the FAQ:

> Q: How do I import my current SSH keys, or export my Secretive Keys?

> A: The secure enclave doesn't allow import or export of private keys. For any new computer, you should just create a new set of keys. If you're using a smart card, you might be able to export your private key from the vendor's software.

I don't get it. If so, how am I supposed to back up my keys in case of a hardware failure or hardware theft?


Like the others have said: you just use multiple keys. You can add multiple public keys to the authorized_keys file.

This is actually the perfect scenario because an attacker can never get hold of the private key. That means that the key is unique: If it's in your hands, it means an attacker doesn't have it. Most smart cards work this way, they generate the private key inside and it can never leave the hardware, you can only prove you have it by using it to sign or encrypt something.

This is very different from SSH keyfiles on disk which can be stolen in many ways, and as such can be compromised without you ever knowing about it.

It's really cool that we can now use this functionality for SSH too. I currently use Yubikeys in OpenPGP mode for this but I might switch to this once I get a T2-enabled Mac.


"now" is at least 16 years now (with PKCS11 and opensc)

By "you just use multiple", you mean I would have to generate extra private keys, add them to authorized_keys, and store them somewhere other than my mac, right?

Because just by using multiple keys I would still be locked out if all of them were stored in this 'Secretive' app.


If you only have a single SSH key as the only method of authenticating somewhere, you already have a dangerous single point of failure.

Not if you have multiple copies of the private key. But yes, there are advantages to having any given key only exist in one place.

If you are making multiple copies of a private key in different places, why wouldn't you just keep a different private key in each of those places along with its corresponding public key though?

That way, you can remove trust from just one of them if you (e.g.) have your computer stolen.


Well, not exactly, I use multiple physical keys. Yubikeys and OpenPGP smartcards in my case. I use multiple different types too (the OpenPGP cards are quite cheap too).

Is there a convenient way to manage and maintain those keys? Keeping the public key of each device and easily selecting which ones to place on servers.

In the case of SSH keys, when you export the public key you can edit the comment. I like the convention of email address & hostname to disambiguate the entries in a centrally-managed authorized_keys list – when you get a new device, update the list and push the new version out.
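So a centrally managed authorized_keys ends up looking something like this (keys truncated and names invented; the comment is the only thing identifying each device):

    # one line per person+device; revoke a lost device by deleting its line and pushing the file
    ssh-ed25519 AAAAC3Nza...XYZ alice@example.com-macbook
    ssh-ed25519 AAAAC3Nza...QRS alice@example.com-yubikey
    ssh-ed25519 AAAAC3Nza...TUV bob@example.com-workstation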

Very good question. I've been thinking about a central way to manage keys. There doesn't really seem to be one, and it would be a big point of attack because an attacker might abuse it to add their own key.

Right now what I do is I just log into my servers and copy the list :P I don't have that many anyway.

The openssh people are also advocating certificates now, which means you'll have to set up a PKI, which will take care of revocation and such.


You aren't. It is the same with YubiKeys, if you use them. You are supposed to generate a key pair per user and per device they use for authentication, which is not a big deal, at least for remote login on servers, as you can set an arbitrary number of valid SSH keys.

You are supposed to, yes, but it is not necessary with YubiKeys. You can still import private keys into your YubiKey. At work we are using this for group access to some appliances that annoyingly limit the number of SSH authorized keys you can teach them.

Also, there are arguments against generating RSA keys with closed-source software (don't tell me that's all pure hardware in a smartcard). For one, generating keys that are vulnerable to something exotic is very much a viable threat, and yubikeys in particular had an issue where some accidentally created very weak keys.

Sure, you need to make sure the key doesn't leak on its way, but that's not really the issue.


> and yubikeys in particular had an issue where some accidentally created very weak keys.

So did Debian between 2006 and 2008 https://certlogik.com/debian-weak-key-check/


In OpenSSL. Not in GnuPG. I'm not claiming the latter necessarily has better code, but the former's bad code quality is known.

Yes, but software is easier to fix and the problem is easier and more likely to be found.

More importantly: you can check the key generator to not include a Dual_EC_DRBG.

Keep a separate spare set of keys

I have been using sekey for this for a while. It's pretty nice; I mean, you do need to auth quite a bit if you use git clone over ssh a lot, but it does mean you know when keys are used.

Yep, it’s so much better than just key files: keeps key material in a secure system (Secure Enclave), you know when a key is used (standard macOS security prompt), and requires a physical interaction (Touch ID) to use a key.

So, a non-portable YubiKey?

More like a built-in YubiKey. It's a convenience. When using your MacBook, you don't need to pull a YubiKey out of your bag/drawer/etc to authenticate, and if you're traveling you can just leave the YubiKey at home or the office, giving you one less thing to lose.

Does anyone know of a way to guard rclone passwords that would be stored in rclone.conf? Is it possible to leverage the keychain or this tool for this? Typically passwords for sftp logins are stored in a reversibly obscured form (see `rclone reveal`), which isn't ideal.

Is a similar service available for yubikeys connected to linux machines?


Thanks.

Yes - as the other commenter mentioned, there are some docs from Yubico. Also worth looking at PKCS11, as this lets you do HTTPS client certificate authentication as well as SSH. And it's all standardised under PKCS11, which is nice for portability or diversity of token providers.

> Auditable Build Process

> Builds are produced by GitHub Actions with an auditable build and release generation process. Each build has a "Document SHAs" step, which will output SHA checksums for the build produced by the GitHub Action, so you can verify that the source code for a given build corresponds to any given release

What would stop an attacker from hard-coding the hashes? I'm not saying they will, just that it's a weak method of making binaries auditable.


Nothing. The point is that you can replicate the same steps on your local machine and the hashes should be the same. If they're not, it means GitHub Actions and/or the repo owner has been compromised.
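The local check is basically the following (a sketch; the artifact name and the exact Actions step output vary per release):

    # Hash the artifact you downloaded...
    shasum -a 256 Secretive.zip
    # ...and compare with the checksum printed in the "Document SHAs" step of the
    # GitHub Actions run for that tag. A mismatch means the release was not built
    # from the published source by that workflow.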

For the love of God, please stop using SSH keys. Almost every "company X is hacked" title on HN can be traced to leaked SSH credentials.

Use auto-expiring certificates that are issued after a proper SSO+2FA flow:

https://gravitational.com/blog/how-to-ssh-properly/


> Almost every "company X is hacked" title on HN can be traced to leaked SSH credentials.

I don’t think that’s anywhere close to true but I’d be interested if you have reason to believe it is or some examples. Or maybe it’s intentional hyperbole?


Intentional hyperbole to sling blog spam. That link is to a site the OP claims in their profile.

Most of the time, you're lucky to get people to move off of passwords

This is the sort of thing I would have expected Apple to do back when they didn’t care about macOS at all. Or that they should do now that they are refactoring it.

So, the macOS keychain simply doesn't have the ability to store keys in the Secure Enclave?

Could this be possible on iOS devices, which also have a secure enclave?

You could check https://krypt.co

(not affiliated)


Note that they were bought by Akamai last year. I wouldn't count on the Krypton apps staying around, at least not in their current incarnation. A fork could survive, though.

Thanks, that reminded me of the upcoming iOS 14 support for WebAuthn/FIDO. Perhaps that can be extended to SSH auth.

Do you mean with Face ID? WebAuthn/FIDO already works with iOS 13. I have been using U2F with my YubiKey (whichever one has NFC).

Not quite. The point is to use just the iPhone as the authenticator, without an external token.

See announcement here: https://developer.apple.com/documentation/safari-release-not...

I think it was posted on HN a few days ago, but I can't find it right now.


Termius[1] already has this feature available in iOS 13.

[1]: https://www.termius.com/


Is it possible to recreate this using Windows Hello and the TPM?

hmm, interesting question. should be. Windows Hello generates RSA keypairs, I think. Can you just use those as SSH keys? The OS will take care of securing them in the TPM if available.

If you wanted to use conventional file-based keypairs and secure the "passphrase" instead, maybe use the Hello private key to encrypt the passphrase into the credential store? I am not up-to-date on whether there's a newer, more secure way to store credentials, but it seems like forcing a Windows Hello action to decrypt the data in the store should be sufficient.

I could be missing something though. Otherwise you'd think there'd be a solution from Microsoft already.


Nice to see this going mainstream after only

checks almanac

... a quarter-century of standards and practices.



