> You can configure your keys so that they require Touch ID (or Watch) authentication before they're accessed.
That, to me, would be a key thing to want to have: something that tells me "hey, Terminal just wanted to access your Github key. Is that okay?"
If I'm git pushing, that's fine. If I just connected to a random server... that's not okay. What is that trying to do? Deny.
2. You can control which private keys are used for which remote server using .ssh/config. You can look up the man page for more.
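As a minimal sketch (host names and key paths here are made up), a per-host key mapping in `~/.ssh/config` might look like:

```
# ~/.ssh/config
Host github.com
    IdentityFile ~/.ssh/id_ed25519_github
    IdentitiesOnly yes   # offer only this key, not every key the agent holds

Host *.internal.example.com
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes
```

`IdentitiesOnly yes` is the part that actually stops ssh from offering every key the agent holds to every server.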
3. There is a risk with ssh-agent key forwarding: while you are connected to a server with forwarding turned on, a superuser on that server can sudo to your user and use your forwarded agent to log in to a second host. This risk can be minimized by only enabling agent forwarding to hosts you trust and limiting the keys available to each host.
Do the employees at Apple use a different system altogether? Because the built-in one doesn't seem very secure. Or maybe I am using it wrong, who knows.
In a similar vein, is there an exhaustive manual for macOS? It bugs me that Apple machines cost a small fortune, the OS is full of nifty features, but there is no non-superficial manual shipped with it.
That's NOT true. While giving out less information to untrusted parties is obviously better than more, the private key itself is not transmitted directly to the server. This means that connecting to an attacker's SSH server doesn't give them a copy of your private key, so they can't then connect to your SSH servers.
One command off the top of my head that doesn’t really follow this is the undocumented/internal `airport` command. In that case it has two different help messages depending on how it’s called, and is also tucked away in a framework.
One is unaware of when, how, and for what purposes that key is used - as forwarding the key means it's available for use by any user process (as the mechanism behind the forwarding is user-owned) or root (as root can see everything).
Touch-to-authorize helps mitigate that.
If one sees the prompt come up when they've just performed a git pull, it's expected and likely non-malicious. Allow.
If it pops up after running "ls", or "randomly" in the course of a session - what's going on? Deny.
Touch to auth means the agent (or hardware token) asks the user to confirm they are expecting an authentication request to come in.
This allows you to forward your agent to a host and have slightly more protection against malicious processes on the host using your key.
If you're using -A to log into other machines behind the SSH server (really, the only reason one would use -A), there are now better mechanisms to do that. ProxyJump if the server supports it; port forwarding or ProxyCommand if it doesn't.
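For instance (host names assumed), a `~/.ssh/config` entry replacing `-A` with `ProxyJump` might look like:

```
# ~/.ssh/config
Host inner.example.com
    ProxyJump bastion.example.com

# On older servers without ProxyJump support, ProxyCommand does the same job:
# Host inner.example.com
#     ProxyCommand ssh -W %h:%p bastion.example.com
```

With either mechanism the authentication happens end-to-end from your machine, so no agent socket is ever exposed on the bastion.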
This means the secret itself stays on the card. You can combine this with certificates if needed; the smartcard handles the authentication. Using PKCS11 tokens is supported in recent openssh versions. You can also use them for client certificate authentication in web browsers. For anyone familiar with DoD CACs, this is similar - I believe they use a particular card with a proprietary PKCS11 driver, but you can use opensc with any card running a PKCS11 applet.
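As a sketch, OpenSSH takes the PKCS11 module via the `PKCS11Provider` option (the module path below is a guess; it varies by OS and distribution):

```
# ~/.ssh/config
Host secure.example.com
    PKCS11Provider /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so
```

You can also load the token into a running agent for the session with `ssh-add -s <module-path>`.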
For a more analogous equivalent though, there's the tpm2-pkcs11 project that uses anything conforming to the TCG TPM2 spec, which includes a surprisingly wide variety of hardware, including many motherboards. My XPS 13, for example, has one.
What I'd really like to see though is something that uses the TPM for host verification rather than client verification. I'd love to be able to ensure that the only way for a machine to give me the host key I expect is for that machine to be hardware I own running verified software (e.g. verified by Secure Boot with a custom key).
I also played around with tpm2-pkcs11 last year and it worked nicely, and has support on a lot of devices like the XPS (which works really well on Linux!)
Your idea for using the TPM for host verification makes a lot of sense to me. As it stands, I'm playing around with this for a side project right now, and a TPM-backed key which is bound to the right PCRs should give you assurance that secure boot is enabled with your own custom keys, and that it loaded your signed GRUB, booting your signed kernel.
Got all the secure boot stuff working, now need to figure out what to use the TPM for - options are remote attestation, where server verifies the attestation signature (but this wouldn't be particularly standardised), network level authentication (actually easier - IIRC NetworkManager supports 802.1x authentication using PKCS11, so you can use tpm2-pkcs11 for that), or disk encryption.
Assuming you mean for the remote-end host key, and assuming your server has a TPM available, I reckon that could be quite interesting, but it doesn't look like openssh supports using PKCS11 for HostKey access. Would need to see if the key is used often, or if it's just used to establish connections (since PKCS11 crypto is usually pretty slow, but fine for one-off authentications).
As a closing note for time-travellers from the future etc, worth remembering TPM is far from perfect, and there's quite a few nice attacks if you can "sniff" the serial lines from the motherboard to the TPM itself. And if using the fTPM (firmware TPM), the regular Intel SGX/TXE holes will likely compromise the TPM security properties.
The current SSH FIDO behaviour doesn't do resident keys AFAIK. The OpenSSH team has discussed it but, as I understand it, hasn't written an implementation.
In a PIV setup, Jim, Sarah and Gary can each have a device (say a YubiKey) with their own private key inside it. Both their workstations and shared desktop/interactive servers can use these keys when they're plugged in. So when Jim is sitting at his workstation, it uses Jim's private key to authenticate Jim over SSH. If Sarah wants to use a machine she's never previously used, but it was set up for Jim and Gary, her key just works to authenticate as Sarah over SSH. Simple and easy to think about.
With FIDO non-resident keys, a similar team (Beth, Larry and Zoe) have personal FIDO authenticators, let's say Titan keys. They each set up their personal laptop to require their personal FIDO authenticator; both elements are needed to authenticate. Beth's laptop plus Beth's Titan key is enough to authenticate via SSH. But if Beth is at a workstation she's never used before, she can't use her Titan key to authenticate; on its own, it has no idea how to help. If Beth sets that workstation up (a multi-step process), that setup isn't re-usable: Zoe can't use her Titan key without also going through the enrolment process.
The reason why is a mixture of differences between typical Web authentication journeys and SSH, and the deliberate (privacy preserving) feature paucity of FIDO Security Keys when used without resident keys.
The FIDO authenticator genuinely has no idea what your private key is (this is unlike resident keys, as in Apple's recent announcement). It first needs to be presented with an ID, a large random-looking byte string. It has in fact encrypted your private key using an AEAD mode with a secret symmetric key that's baked inside it, and used the resulting ciphertext as the ID. So when the ID comes back it decrypts it; if the AEAD tag checks out, it now has a private key and can use it to authenticate you, before forgetting it again.
On the Web in this case (again, not the resident case like Apple's cool Safari feature), the ID is delivered by the remote server to your web browser when you enter a username, and the browser passes it to the FIDO authenticator. But SSH authentication doesn't work that way.
So, OpenSSH stores the ID locally on a computer. It isn't secret, so it's not a huge deal if someone somehow steals it. But without that ID the authenticator can't do its job.
Maybe you could arrange to synchronise the IDs across machines in an organisation. But AFAIK nothing exists to sort that out today. So without either resident credentials (which need a more expensive FIDO2 authenticator) or some further synchronisation framework PIV is easier today if you want employees to authenticate from machines that aren't "their" personal machine.
For storing arbitrary-sized things, you’d typically use that key pair to wrap (encrypt) an AES key that encrypts the actual file, so your device only needs to store the comparatively small keys.
There is no official storage limit, it seems, but one Stack Overflow answer put the number at around 400. Interestingly, there appears to be no quota system for preventing one app from using them all.
Oh and by the way, I was shopping for a cheap laptop for a non-profit this week, and I still had to be careful to pick a laptop with an SSD. Those spinning horrors are still jammed into laptop cases to this day.
In most Latin languages, "engineer" is a completely different word from "engine" (which is usually a cognate of "motor").
"Middle English (denoting a designer and constructor of fortifications and weapons; formerly also as ingineer ): in early use from Old French engigneor, from medieval Latin ingeniator, from ingeniare ‘contrive, devise’, from Latin ingenium (see engine); in later use from French ingénieur or Italian ingegnere, also based on Latin ingenium, with the ending influenced by -eer."
Volatile Memory gets “erased” when powered off
No, the Secure Enclave is not available to the user as such; a few crypto primitives are exported, and that’s what this uses. There have been any number of such “agents” over the last few years.
But also, the LICENSE file simply says “All Rights Reserved”.
> Q: How do I import my current SSH keys, or export my Secretive Keys?
> A: The secure enclave doesn't allow import or export of private keys. For any new computer, you should just create a new set of keys. If you're using a smart card, you might be able to export your private key from the vendor's software.
I don't get it. If so, how am I supposed to back up my keys in case of a hardware failure or hardware theft?
This is actually the perfect scenario because an attacker can never get hold of the private key. That means that the key is unique: If it's in your hands, it means an attacker doesn't have it. Most smart cards work this way, they generate the private key inside and it can never leave the hardware, you can only prove you have it by using it to sign or encrypt something.
This is very different from SSH keyfiles on disk which can be stolen in many ways, and as such can be compromised without you ever knowing about it.
It's really cool that we can now use this functionality for SSH too. I currently use Yubikeys in OpenPGP mode for this but I might switch to this once I get a T2-enabled Mac.
Because just by using multiple keys I would still be locked out if all of them were stored in this 'Secretive' app.
That way, you can remove trust from just one of them if you (e.g.) have your computer stolen.
Right now what I do is I just log into my servers and copy the list :P I don't have that many anyway.
The openssh people are also advocating certificates now, which means you'll have to set up a PKI, which will take care of revocation and such.
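As a minimal sketch of that flow, a short-lived user certificate can be issued with plain `ssh-keygen` (the CA/file names and the principal here are made up):

```shell
# Create a CA keypair (in practice this lives on a locked-down CA host)
ssh-keygen -t ed25519 -f ssh_ca -N '' -C 'example-ca'

# The user generates an ordinary keypair
ssh-keygen -t ed25519 -f id_user -N ''

# The CA signs the user's public key: principal "alice", valid for 8 hours
ssh-keygen -s ssh_ca -I alice@example -n alice -V +8h id_user.pub

# Inspect the resulting certificate (written to id_user-cert.pub)
ssh-keygen -L -f id_user-cert.pub
```

On the server you'd then point `TrustedUserCAKeys` in `sshd_config` at `ssh_ca.pub`; with short validity windows, most "revocation" is just letting certificates expire.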
Sure, you need to make sure the key doesn't leak on its way, but that's not really the issue.
So did Debian between 2006 and 2008 https://certlogik.com/debian-weak-key-check/
The yubikey (at least some models) can work as a smartcard.
> Builds are produced by GitHub Actions with an auditable build and release generation process. Each build has a "Document SHAs" step, which will output SHA checksums for the build produced by the GitHub Action, so you can verify that the source code for a given build corresponds to any given release
What would stop an attacker from hard-coding the source file hashes? I'm not saying they will, just that it's a weak method of making binaries auditable.
Use auto-expiring certificates that are issued after a proper SSO+2FA flow:
I don’t think that’s anywhere close to true but I’d be interested if you have reason to believe it is or some examples. Or maybe it’s intentional hyperbole?
See announcement here: https://developer.apple.com/documentation/safari-release-not...
I think it was posted on HN a few days ago, but I can't find it right now.
If you wanted to use conventional file-based keypairs and secure the "passphrase" instead, maybe use the Hello private key to encrypt the passphrase into the credential store ? I am not up-to-date on whether there's a newer more secure way to store credentials, but seems like forcing a Windows Hello action to decrypt the data in the store should be sufficient.
I could be missing something though. Otherwise you'd think there'd be a solution from Microsoft already.
... a quarter-century of standards and practices.