FYI, this happens because SSH automatically presents a public key to the server when trying to authenticate. If the server doesn't recognize that key, SSH tries the next one. You can enumerate all of someone's keys this way (which is exactly what this SSH server does).
If you want to disable this sort of behaviour, you can stop SSH from sending keys automatically and then tell SSH which identity files should be sent to each host.
In your .ssh/config, something like:
# Ignore SSH keys unless specified in Host subsection
IdentitiesOnly yes
# Send your public key to github only
Host github.com
    IdentityFile ~/.ssh/id_rsa
This is also handy if you're security conscious and like to use a different private/public key pair for each host you have an account with!
Exactly! Once I get the keys I just check them against a scraped database of GitHub keys and ask the API for your name.
(And if you have agent forwarding active I show you a big WARNING [0].)
There's an explanation in the README [1] but the actually interesting stuff is in server.go [2]. Finally I mentioned a few reasons it might not work for you below [3].
> (And if you have agent forwarding active I show you a big WARNING [0].)
It amazes me that people enable that for random servers. Seems like SSH should make that harder. Enabling it for a specific server you trust makes sense; enabling it for all servers doesn't. SSH could reject "ForwardAgent" outside a Host block, for instance, and force you to at least write a "Host *" block.
For those who don't get why this is terrible: imagine GitHub were compromised and their SSH servers tampered with. When you clone a repo, they could use your forwarded agent to log into your production hosts. That's pretty bad.
I don't fully understand how this would work - the key being "forwarded agent". My (poor) understanding is that for a compromised GitHub to reach a host I'm connected to, it would somehow need to invoke ssh on my machine. The only way around that would be if ssh maintained some in-memory persistent thing that a) holds connections to foreign hosts, and b) can be signalled from active connections. If that's true then a) I didn't realize it could do that, and b) it would be quite handy sometimes, although the security implications are rather serious.
The SSH agent maintains your private keys and provides the necessary responses to ssh when it wants to authenticate to a server. If you ssh to a server and forward your SSH agent, that server can then run ssh themselves and impersonate you to a different remote server, and your SSH agent will supply the necessary authentication information.
Or, in short: never use ForwardAgent (or ssh -A) to a server you don't trust.
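If you want that as a default, a minimal ~/.ssh/config along these lines works (the trusted hostname is just a placeholder):

```
# Opt in only for a box you actually trust (example name)
Host trusted.example.com
    ForwardAgent yes

# Everything else: never forward the agent
Host *
    ForwardAgent no
```

Order matters here: ssh takes the first value it finds for an option, so the specific Host block has to come before the catch-all.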
The remote ssh process asks your host to decrypt its traffic? Your process takes an encrypted stream, sends back the plaintext, and your keys never leave your machine. That's... incredibly clever, although I can only think of one scenario where it would be necessary (navigating securely through a sequence of ssh sessions where some of the secondary hosts are inaccessible from your originating host - a kind of "secure trojan horse").
Sometimes I feel just so awed at the ingenuity of people, especially with software and computers.
Not exactly. The remote host, which you have forwarded your agent to, tries to log onto your production host using your public key, and when the production host sends it a signing challenge to prove it holds your private key, it can simply ask your ssh agent to do the signing. Once that is complete, you are logged in, and the session has a session key that has nothing to do with your public/private keys. The ssh agent does not do decryption; it just gets the session started.
Another example is if I want to transfer something from server A to server B, without it going through my shitty little pipe, and without storing the private key on either server.
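A sketch of that (server and path names are placeholders): forward the agent to the first server just for the one command and let it push directly to the second, so the data never touches your connection and no private key is stored on either box.

```
# Copy straight from serverA to serverB; the authentication to
# serverB is answered by your local agent via the forwarded socket
ssh -A user@serverA 'scp /data/bigfile.tar user@serverB:/data/'
```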
I know it as a "jump host" or "jump box". Regardless, it's still not a good idea to have agent forwarding - what about the boxes you forward to? Do you trust them not to abuse the forwarding trust? (Is there a way to limit forwarding trust to just one machine?)
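For the jump-host case specifically, modern OpenSSH (7.3+) has a better answer than forwarding: ProxyJump (or ssh -J) tunnels the connection through the jump box while your client still authenticates end-to-end to the final target, so the jump box never sees your agent at all (hostnames are placeholders):

```
Host prod.internal
    ProxyJump jump.example.com
```

The one-off equivalent is "ssh -J jump.example.com prod.internal".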
Safer than dumping all your private keys onto the jump box and using that to validate the final target? Why yes. This way, your local ssh client validates the final target public key, not the jump box.
The whole point of agent forwarding is that you don't have to place your keys on the jump box. With -c for per use confirmation it seems much more secure.
There's something I found on Hacker News a while back that lets you forward your ssh agent while limiting it to only the identity you used to connect to that server. It starts up a separate agent for each identity when you invoke it.
I also use the -c (confirm) flag with ssh-add. I don't forward the agent willy-nilly, but if someone still manages to compromise the agent, I will hopefully know when I'm getting confirmation popups I didn't initiate.
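For anyone wanting to replicate that: confirmation mode is set when the key is loaded into the agent, not per-connection (the key path is just an example):

```
# Require an askpass confirmation every time this key is used
ssh-add -c ~/.ssh/id_ed25519
```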
It's been a while since I set it up, but I believe I've disabled gnome-keyring's agent[1] and use the OpenSSH one instead. I use ssh-askpass-keyring[2] as the SSH_ASKPASS environment variable so ssh-add can read key passphrases from the keyring (manually invoked via a shell script helper, but it only needs to be done once per login session), and GNOME's gnome-ssh-askpass is installed as the system default for key confirmations.
(This is on the Cinnamon desktop, so other GNOME setups could be different.)
Holy mother of god, can we somehow return to the time when almost nobody used *nix and Microsoft was the one struggling to keep systems of these people secure?
I don't think this time ever existed, I remember some fifteen years ago using google to search for unshadowed /etc/passwd and vulnerable cgi scripts, and there were tons of results.
Thanks for the tip. I use ForwardAgent for all my servers at work, but I had it defined in a "Host *" block. I've now set up separate blocks for my work domains and moved ForwardAgent into those alone. Never knew it was a risk until this thread.
Re-reading the man page, I noticed that IdentitiesOnly makes ssh send only IdentityFile keys; however, IdentityFile defaults to "~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa".
The result is that with this configuration you would still send id_rsa to unknown hosts.
You also need to add "PubkeyAuthentication no" to your global stanza, and re-enable it for good hosts.
# Ignore ssh-agent keys
IdentitiesOnly yes
# Disable public key authentication
PubkeyAuthentication no
# Send your public key to github only
Host github.com
    PubkeyAuthentication yes
    IdentityFile ~/.ssh/id_rsa
Ah, good catch! I don't normally name my keys after the default "id_rsa", so I didn't notice that. You will need the configuration you showed if you have keys that use the default names.
Is there a way to set that in bulk so all hosts in your config send identities, but those not specified don't, without putting IdentityFile for every one?
You can create a host block that matches multiple hosts:
IdentitiesOnly yes
Host first second third fourth
    IdentitiesOnly no
If you then wanted to have per-host configs you'd have to create more host blocks, but at least this way the added overhead of "IdentitiesOnly yes" doesn't grow too fast with the number of hosts involved.
I added IdentitiesOnly to my config recently - partially for security, but also because a non-GitHub git service would bail whenever too many incorrect identity files were attempted. I've also had SSH servers bail when trying too many keys/identities.
Yeah, IIRC the error with too many identities is hard to diagnose (something generic like "too many authentication failures"), until you add -v and see all your keys being presented.
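Something like this makes the failure visible - the exact debug wording varies by OpenSSH version, so treat the output lines as illustrative:

```
ssh -v user@host
# debug1: Offering public key: /home/you/.ssh/key1
# debug1: Offering public key: /home/you/.ssh/key2
# ...
# Received disconnect: Too many authentication failures
```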
Yes - provided that the server accepted the second key, the rest would not be sent. However, the server can simply respond that each key is incorrect (regardless of whether it is or not), and thereby collect all five keys.
Once SSH has run out of keys to try, it moves on to other authentication methods. The Go app he has written then automatically accepts the connection at that point, once it knows it has seen all of your keys.
It allows you to do things like have varying passphrase strength on your encrypted keys, depending on how much you care about keeping each one safe; or putting only the private keys you usually need on each device, rather than one/few keys that will get all your systems pwned if it's compromised.
Exactly. Additionally, if I associate a new key pair with each host then I know I can discard + regenerate that key and it only affects that host. Each machine can have their own pairs as well.
If a key is compromised, it only provides access to a single host, not _all_ of them. This allows much more fine-tuned key management and reduces the scope of a key compromise.
Plus, it's not really that much more work. Just name your key after the host it's for, and then add an IdentityFile directive in your SSH config. I never have to worry about it, and get all the benefits.
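Concretely, the pattern is something like this (the hostname and key name are just examples):

```
Host git.example.com
    IdentitiesOnly yes
    IdentityFile ~/.ssh/git.example.com
```

Generate the pair with "ssh-keygen -f ~/.ssh/git.example.com" and you're done.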