
FYI, this happens because SSH automatically presents your public keys to the server when trying to authenticate. If the server doesn't recognize a key, SSH tries the next one. You can enumerate all of someone's keys this way (like this SSH server does).

If you want to disable this sort of behaviour, you can stop SSH from sending keys automatically, and then tell SSH which identity file to send to each host.

In your .ssh/config, something like:

    # Ignore SSH keys unless specified in Host subsection
    IdentitiesOnly yes

    # Send your public key to github only
    Host github.com
        IdentityFile ~/.ssh/id_rsa
This is also handy if you're security conscious and like to use a different private/public key pair for each host you have an account with!
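For example, a per-host layout might look like this (the key file names are illustrative; any paths work):

    # Never offer keys unless a Host block names one
    IdentitiesOnly yes

    Host github.com
        IdentityFile ~/.ssh/github_ed25519

    Host gitlab.com
        IdentityFile ~/.ssh/gitlab_ed25519

Each host then only ever sees the one public key you registered with it.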

Exactly! Once I get the keys I just check them against a scraped database of GitHub keys and ask the API for your name.

(And if you have agent forwarding active I show you a big WARNING [0].)

There's an explanation in the README [1] but the actually interesting stuff is in server.go [2]. Finally I mentioned a few reasons it might not work for you below [3].

[0] http://git.io/vOVYm

[1] https://github.com/FiloSottile/whosthere

[2] https://github.com/FiloSottile/whosthere/blob/master/src/ssh...

[3] https://news.ycombinator.com/item?id=10005169

> (And if you have agent forwarding active I show you a big WARNING [0].)

It amazes me that people enable that for random servers. Seems like SSH should make that harder. Enabling it for a specific server you trust makes sense; enabling it for all servers doesn't. SSH could reject "ForwardAgent" outside a Host block, for instance, and force you to at least write a "Host *" block.

EDIT: Check out this search: https://github.com/search?utf8=%E2%9C%93&q=ForwardAgent&type...

For those who don't get why this is terrible, imagine if GitHub were compromised and their SSH server tampered with. When you clone a repo, they could use your forwarded agent to log into your production hosts. That's pretty bad.

I don't fully understand how this would work - the key being "forwarded agent". My (poor) understanding is that in order for a compromised github to get to a host I'm connected to, they would somehow need to invoke ssh on my host. The only way that would not be the case is if ssh maintains a persistent in-memory thing that a) maintains connections to foreign hosts, and b) can somehow be signaled from active connections. If that's true, then a) I didn't realize it could do that, and b) it would be quite handy sometimes, although the security implications are rather serious.

The SSH agent maintains your private keys and provides the necessary responses to ssh when it wants to authenticate to a server. If you ssh to a server and forward your SSH agent, that server can then run ssh themselves and impersonate you to a different remote server, and your SSH agent will supply the necessary authentication information.
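You can see the mechanics locally, assuming the OpenSSH client tools are installed. An agent is just a socket; whoever can reach it (which, with ForwardAgent, includes the remote server) can list your keys and have the agent sign with them:

```shell
# Start a disposable agent and load a throwaway key into it
eval "$(ssh-agent -s)" > /dev/null
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_key -N '' -q
ssh-add /tmp/demo_key 2> /dev/null

# Any process that can reach $SSH_AUTH_SOCK can now enumerate the keys
# and request signatures from the agent - no key file needed:
ssh-add -l

# Clean up
ssh-agent -k > /dev/null
rm -f /tmp/demo_key /tmp/demo_key.pub
```

The private key itself never crosses the socket; only signatures do, which is exactly enough to impersonate you for as long as the forwarding lasts.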

Or, in short: never use ForwardAgent (or ssh -A) to a server you don't trust.

The remote ssh process asks your host to decrypt its traffic? Your process takes an encrypted stream, sends back the plaintext, and your keys never leave your machine. That's... incredibly clever, although I can only think of one scenario where it would be necessary (navigating securely through a sequence of ssh sessions where some of the secondary hosts are inaccessible from your originating host, e.g. a kind of "secure trojan horse").

Sometimes I feel just so awed at the ingenuity of people, especially with software and computers.

Not exactly. The remote host, which you have forwarded your agent to, tries to log onto your production host using your public key, and when the production host sends it a signing challenge to prove it has the authority of your private key, it can simply ask your ssh agent to do that signing. Once that is complete, you are logged in and the session has a session key that has nothing to do with your pub/private keys. The ssh agent does not do decryption, it just gets the session started.

Another example is if I want to transfer something from server A to server B, without it going through my shitty little pipe, and without storing the private key on either server.

It's very useful for so-called "bastion hosts": an SSH server that allows further access into the network and is otherwise totally locked down.

I know it as a "jump host" or "jump box". Regardless, it's still not a good idea to use agent forwarding - what about the boxes you forward to? Do you trust them not to abuse the forwarding trust? (Is there a way to limit forwarding trust to just one machine?)

Also, I used to add every jump combination into my .ssh/config file, but came across a wonderful trick that makes it unnecessary: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_J...

You don't need agent forwarding for that. Just use ssh's dumb tcp forwarding and keep your agent on your local host.


    Host bastion.company
        ProxyCommand none

    Host *.company
        ProxyCommand ssh -W %h:%p bastion.company
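On OpenSSH 7.3 or newer, the same setup can be written more concisely with ProxyJump (a sketch reusing the hypothetical host names above):

    Host *.company
        ProxyJump bastion.company

Or ad hoc on the command line with the equivalent -J flag: ssh -J bastion.company web1.company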

Because enabling tcp forwarding is so much more safe on a jump box? That's just asking for another pile of unauditable trouble.

Safer than dumping all your private keys onto the jump box and using that to validate the final target? Why yes. This way, your local ssh client validates the final target public key, not the jump box.

The whole point of agent forwarding is that you don't have to place your keys on the jump box. With ssh-add -c for per-use confirmation it seems much more secure.

> The whole point of agent forwarding is that you don't have to place your keys on the jump box.

A socket that allows dumping the keys isn't really an improvement. If the box is compromised, agent forwarding can still be abused.

> seems much more secure.

Emphasis on "seems".

This is what I do for my home network. I hadn't realised it had a name (though I'm not at all surprised).

Well, what you describe (pretty much) is the "agent", and many people do, in fact, find it "quite handy" ;P. (I do not use this feature, myself.)

There's something I found on Hacker News a while back that lets you use ForwardAgent while limiting your identities: it forwards only the identity you used to connect to that server, starting up a separate agent for each identity when you invoke it.

You can find it at: https://github.com/ccontavalli/ssh-ident

I also use the -c (confirm) flag with ssh-add. I don't forward the agent willy-nilly, but if someone still manages to compromise the agent, I will hopefully know when I'm getting confirmation popups I didn't initiate.
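If keys are loaded into the agent on first use, OpenSSH 7.2+ can apply the same per-use confirmation by default (a sketch for ~/.ssh/config; this complements, rather than replaces, ssh-add -c):

    # Keys added to the agent on first use will require a
    # confirmation prompt for every signature, like ssh-add -c
    AddKeysToAgent confirm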

I wish gnome-keyring's integrated ssh agent supported that.

It's been awhile since I set it up, but I believe I've disabled gnome-keyring's agent[1], use the OpenSSH one instead, use ssh-askpass-keyring[2] as the SSH_ASKPASS environment variable to ssh-add to read key passphrases from the keyring (manually invoked via a shell script helper; but only needs to be done once per login session), and GNOME's gnome-ssh-askpass installed as the system default, for key confirmations.

(This is on the Cinnamon desktop, so other GNOME setups could be different.)

[1] See https://askubuntu.com/questions/63407/where-are-startup-comm... for how to override in your user dir, or find it in the GUI somewhere.

[2] https://launchpad.net/ssh-askpass-keyring is the Ubuntu page

I know how to override it, but I prefer all the other behavior of gnome-keyring's agent. -c is literally the only feature I want that it doesn't have.

> EDIT: Check out this search: https://github.com/search?utf8=%E2%9C%93&q=ForwardAgent&type....

Holy mother of god, can we somehow return to the time when almost nobody used *nix and Microsoft was the one struggling to keep systems of these people secure?

I don't think that time ever existed. I remember, some fifteen years ago, using Google to search for unshadowed /etc/passwd files and vulnerable CGI scripts, and there were tons of results.

Thanks for the tip. I use ForwardAgent for all my servers at work, but I had it defined in a "Host *" block. I've now set up separate blocks for my work domains and moved ForwardAgent in there alone. Never knew it was a risk until this thread.

It's a cool awareness experiment. Ultimately, public keys are public and people shouldn't be afraid of sharing them.

Agent forwarding is a big one though. Getting people to stop doing that automatically takes a lot of education. https://wiki.mozilla.org/Security/Guidelines/OpenSSH#SSH_age...

The public keys should be ok. But the comment on them may be a problem.

The comment isn't sent to the server during authentication.

Note that -X is similarly bad on untrusted servers [1]. While it's an old article, it's still valid.

[1] http://www.hackinglinuxexposed.com/articles/20040705.html

Thanks for the link, adding a warning for that, too.

This might be more obvious than I thought, but could you explain how you scraped all of GitHub's keys across the entire user base?

There is a URL for the keys of each user. And Filippo used the dataset benjojo published a few months ago when he scraped all the keys: https://blog.benjojo.co.uk/post/auditing-github-users-keys

And now I've rotated my keys and fixed my ssh config file. Thanks :-)

Why rotate them? They're just public keys anyway - useless on their own.

Thanks for the warning by the way! Moved ForwardAgent to specific servers.

I almost always log into trusted servers, but it's good to be preemptive ;-)

Re-reading the man page, I noticed that IdentitiesOnly makes ssh send only IdentityFile keys; however, IdentityFile has a default of "~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa".

The result is that with this configuration you would still send id_rsa to unknown hosts.

You also need to add "PubkeyAuthentication no" to your global stanza, and re-enable it for good hosts.

    # Ignore ssh-agent keys
    IdentitiesOnly yes
    # Disable public key authentication
    PubkeyAuthentication no

    # Send your public key to github only
    Host github.com
        PubkeyAuthentication yes
        IdentityFile ~/.ssh/id_rsa
More instructions: https://github.com/FiloSottile/whosthere#how-do-i-stop-it
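To verify what the client will actually do for a given host, OpenSSH 6.8+ can print the effective per-host configuration without connecting (the host names here are just examples):

```shell
# Dump the resolved client config for a host and check the
# options that control which keys get offered:
ssh -G github.com | grep -Ei '^(identitiesonly|pubkeyauthentication|identityfile)'
ssh -G example.com | grep -Ei '^(identitiesonly|pubkeyauthentication|identityfile)'
```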

> You also need to add "PubkeyAuthentication no" to your global stanza, and re-enable it for good hosts.

Or rename the id_* keys to something else. If you're using multiple key pairs you'll want more descriptive names anyway.

Ah, good catch! I don't normally name my keys after the default "id_rsa" so I didn't notice that. You will need that configuration you showed if you have keys that use the default value.

Doesn't seem to work for me, the global option is not overridden. Bog-standard OS X OpenSSH_6.2p2.

Ah well, I don't have any default key anyway.

I am one of these people with a newfound attention to security (I would not call it detail). I was unaware of the ~/.ssh/config param you mention.

Very cool, people like you keep me hooked to HN. Keep it up!

Is there a way to set that in bulk so all hosts in your config send identities, but those not specified don't, without putting IdentityFile for every one?

You can create a host block that matches multiple hosts:

    IdentitiesOnly yes

    Host first second third fourth
        IdentitiesOnly no

If you then wanted to have per-host configs you'd have to create more host blocks, but at least this way the added overhead of "IdentitiesOnly yes" doesn't grow too fast with the number of hosts involved.

Untested, but you should also be able to use "Match" directives, assuming there's something in common you can match on, e.g.:

    Match host *.example.com,192.0.2.*
        IdentitiesOnly no

> This is also handy if you're security conscious and like to use a different private/public key pair for each host you have an account with!

This is a tiny tutorial I wrote ages ago:


I added IdentitiesOnly to my config recently. Partially for security, but also because a non-GitHub git service would bail whenever too many incorrect identity files were attempted. I've also had SSH servers bail when trying too many keys/identities.

Yeah, IIRC the error with too many identities is hard to diagnose (something generic like "too many authentication failures"), until you add -v and see all your keys being presented.

> If the server doesn't know that key, then SSH tries the next one. You can enumerate all of someone's keys this way (like this SSH server does)

If you have five keys, and the second is the one that's needed, doesn't that mean that only the first two keys are sent?

Anyway, this is very good to know, and I'm going to take action to make this more secure.

Yes, provided that the server accepted the second key, the rest would not be sent. However, the server can simply respond that each key is incorrect (regardless of whether it is or not), and then get all five keys.

Once SSH has run out of keys to try, it moves on to other authentication methods. The Go app he has written then automatically accepts the connection at that point, once it knows it has seen all of your keys.

> This is also handy if you're security conscious and like to use a different private/public key pair for each host you have an account with!

That's an odd definition of "security conscious". This looks more like a key management nightmare.

You're still sending the same default username to every host anyway, so what's the point?

It allows you to do things like have varying passphrase strength on your encrypted keys, depending on how much you care about keeping each one safe; or putting only the private keys you usually need on each device, rather than one or a few keys that will get all your systems pwned if compromised.

Exactly. Additionally, if I associate a new key pair with each host then I know I can discard and regenerate that key and it only affects that host. Each machine can have its own pairs as well.

If a key is compromised, it only provides access to a single host, not _all_ of them. This allows much more fine-tuned key management and reduces the scope of a key compromise.

Plus, it's not really that much more work. Just name your key after the host it's for, and then add an IdentityFile directive in your SSH config. I never have to worry about it, and get all the benefits.

I never realized that SSH does this. Thank you.

This really needs to be better known.

I do the same thing, plus I maintain a separate public/private key pair for every host where I need the pubkey auth.

This only worked for me after I renamed id_rsa (and id_rsa.pub).


