"Enter" then ~ then . will kill a hung SSH connection, instead of having to close the terminal tab. I use it all the time but most people haven't heard of it.
Also related: if you use ControlMaster, then the master session takes over any port forwardings defined in new shared sessions. So you can “ssh host” once to establish the master, then “ssh host -L… -R…” to make port forwardings in new shared connections; the shared sessions can be closed and the forwardings will stay active so long as the master doesn’t close.
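A minimal sketch of that setup in ~/.ssh/config (the option names are real; the host alias and socket path are just examples):

    Host myhost
        ControlMaster auto
        # one socket per destination; %r/%h/%p = remote user/host/port
        ControlPath ~/.ssh/cm-%r@%h:%p
        # keep the master open even after its own session exits
        ControlPersist yes

With that, "ssh myhost" establishes the master, and a later "ssh myhost -L 8080:localhost:80" rides the same connection.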
To be more precise, ~ is the default ssh escape character, and it is only treated as the escape character at the beginning of a new line. From the ssh(1) man page:
> The escape character followed by a dot (`.') closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once.
Ahhh, that would explain why it didn't feel consistent to me, since I use ProxyCommand to hop through another box most of the time. I usually need to do that key sequence twice. But I guess now I know I just need the tilde twice.
ProxyJump (or the -J option as a shortcut) plus TCP port forwarding (-L/-R) is ridiculously powerful. You can chain jump hosts with commas, threading a port forward deep into (or out of) multiple server hops with a single command.
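For example (hosts and ports here are hypothetical):

    # reach a database on "target" two bastions deep, exposing it locally
    ssh -J user@jump1,user@jump2 -L 5432:localhost:5432 user@target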
It can also be used for nested connections. E.g. if you SSH to server A, and then from server A to server B, ~. will kill the outer session to A (taking the whole chain down), while ~~. passes the escape through to the inner ssh client, killing just the session to B and dropping you back to a shell on A.
And then some random garbage and NO CARRIER to confirm the disconnection :)
It was [wait]+++[wait] and then ATH or ATH0, by the way, on a modem that implemented the Hayes command set correctly; the guard times were there to avoid at least some inadvertent triggering of this in-band signalling.
>> "Enter" then ~ then . will kill a hung SSH connection
> is this the same as hitting CTRL-D?
No. ^d is a character that is sent to the remote shell (bash, zsh, etc.) to tell it to exit. ~. is an instruction to the SSH client itself, telling it to cut the connection if (e.g.) the shell has hung.
Think of it like working at different OSI layers: ^d is HTTP, while ~. is like TCP or IP.
I have a similar setup, but the tmux invocation is handled on the remote shell. Needless to say, it does not break scp. (I will admit that a single config file for all hosts is more convenient, though.)
I am actually switching to having wezterm handle the mux functionality. It's nice to have this stuff running on the local client, which allows all the familiar keyboard shortcuts to work without conflict. Automatic pane/tab support is really nice, and Mosh-like functionality comes for free as well.
On a somewhat related note, Wireguard acts somewhat similarly to Mosh in that it can handle poor connections and moving across networks without reconnecting anything. I use SSH over Wireguard on the home DSL, which sometimes has significant packet loss, and it works well. I also use ZNC over Wireguard on my phone so I can move between networks without having to reconnect (home wifi => cell network => public wifi => etc).
Mosh has more features than just handling unreliable connections, though, so SSH over Wireguard is not a proper replacement. Mosh reduces input latency by echoing text right away and fixing it later if needed, and it only syncs the visible part of the screen instead of blocking output while catching up. It does other interesting things too that I don't understand very well!
You can always run Mosh over Wireguard too! This is what I'm going to start doing I think.
I did some digging, and apparently there's a new release candidate available which is accumulating those 5 years of code changes that for some reason haven't been turned into a release:
If anyone here is more knowledgeable, it'd be good to know why it's been this long since an official release. It looks like most of the authors of mosh are academics, so my best guess is that mosh is a volunteer effort and this is just something that slipped through the cracks.
Hi -- original Mosh author here. We're excited to be doing a new release, with a new team of maintainers (not academics!), fuzzing from oss-fuzz, 24-bit color support, etc., and would appreciate all the testing of the release candidate that Hacker News is able to provide. Hopefully we'll have a 1.4 soon. Details here: https://mailman.mit.edu/pipermail/mosh-devel/2022-August/001...
Re: why has it been five years, I feel like I've written this message many times on HN (https://news.ycombinator.com/item?id=28151637 , https://news.ycombinator.com/item?id=31010005), but here's another go. :-)

We take Mosh's security seriously. In the ten years that Mosh has been out (https://news.ycombinator.com/item?id=3819382), we've never had a real security hole -- that we know about. That's a fantastic (apparent) track record. I don't want us to boast about it because it's just tempting fate, and of course you never really know if you have a security hole (just the ones you find or people tell you about), but, in terms of "security holes discovered," Mosh's track record compares really well with OpenSSH, OpenSSL, etc. Of course those codebases (a) have a lot more features than Mosh, and do more than Mosh, and (b) release more often than Mosh, but I'm happy (and I think the rest of the team is too) that Mosh does the thing it does well and without having made our users vulnerable.

Back in 2012 when Mosh first came out, with a novel C++ codebase, and a novel secure datagram protocol, a lot of people were skeptical that it was worth trusting, and I'm pretty happy with how things turned out. This was all before HeartBleed and before QUIC, when self-assured people told me to "use something vetted, like OpenSSL" instead of our own new protocol and codebase. It took a long time to earn the community's trust, and now there's a few million people using Mosh, and I don't want us to fuck that up.
I had handed off the project to another maintainer, and my understanding is that he had some health challenges or maybe just got burnt out. Given the choice between "release the code with lots of new features, but without the normal procedure and without an active maintainer to take responsibility for it" vs. "don't release," we chose the conservative option. I think that was the right choice. Of course many people equate "how recently was there a release" to "how secure is this software," and... I guess we are a counterexample? Not sure what else to say.
Thank you to HN and Patrick Collison for publicizing Mosh back when it first came out ten years ago (https://news.ycombinator.com/item?id=3819382), and I hope the next 10 years go similarly... uneventfully and full of secure, reliable, mobile terminal sessions. :-) And thank you to all of you who get a chance to test the release candidate!
Where does the expectation come from that software needs to be permanently updated? Would you consider your car unsafe if it didn't need repairs for years?
I recently set up autossh + tmux for this. I don't use mosh, because I use X forwarding to share the clipboard with my remote session, and mosh does not support X forwarding.
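The core of that setup is a one-liner along these lines (a sketch; the session name is arbitrary):

    # -M 0 disables autossh's monitor port (ssh's ServerAlive options
    # detect dead links instead); tmux new-session -A attaches or creates
    autossh -M 0 -t user@host 'tmux new-session -A -s main'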
Ehhh... tomato tomato. I can scroll perfectly fine in my Ubuntu terminal. I can scroll perfectly fine when I connect with ssh. Mosh doesn't allow me to scroll. Don't really care whose fault it is.
I did try tmux today. Also doesn't support mouse scrolling out of the box, but I did find the setting for it. Still screws with copy-paste and right clicking.
the article already mentions tmux, and tmux already handles ssh reconnection and more.
You might still use mosh if your connection is high latency and you are ok with occasional display artifacts, but its utility is much less if you already have tmux
Here's how to define a shell function for quickly attaching to an existing screen session after connecting via ssh (or creating a new one if none exists):
    sshcreen () {
        ssh -t "$@" screen -xRR
    }
Works with bash and zsh. Usage is pretty simple:
    $ sshcreen user@example.com
You can use normal ssh arguments, such as the port:
    $ sshcreen root@localhost -p 2222
Detach the session with CTRL-A + D, reattach by re-running the sshcreen command you previously used.
I’m using the .zshrc trick too; what are the pros and cons of each? I would like to try out the RemoteCommand one, though, because it looks cleaner to me.
It does: that's what the byobu-enable part does. It modifies the .profile or .bashrc or whatever, and all subsequent remote logins to that user account will automatically start/reattach byobu (which uses the tmux backend by default, IIRC).
    # If ~/.ssh/config does not exist, copy the file
    if [ ! -f ~/.ssh/config ]; then
        cp ./.ssh/config ~/.ssh/config
    else
        # For each blank-line-separated block in ./.ssh/config, append it
        # to ~/.ssh/config unless it is already there
        block=""
        while IFS= read -r line; do
            if [[ "$line" == "" ]]; then
                # Blank line ends the current block; test for duplicates by
                # matching the block's first line (typically "Host ...")
                if [[ -n "$block" ]] && ! grep -qxF "${block%%$'\n'*}" ~/.ssh/config; then
                    # Block not found, so add it to ~/.ssh/config
                    printf '%s\n\n' "$block" >> ~/.ssh/config
                fi
                block=""
            else
                if [[ -z "$block" ]]; then
                    block="$line"
                else
                    block="$block"$'\n'"$line"
                fi
            fi
        done < ./.ssh/config
    fi
    client$ sftp -S ./ncssh bogus
    Connected to bogus.
    sftp> put ncssh
    Uploading ncssh to /ncssh
    remote open("/ncssh"): Permission denied
    sftp> cd tmp
    sftp> put ncssh
    Uploading ncssh to /tmp/ncssh
    ncssh    100%   56   128.6KB/s   00:00
    sftp> quit
I've also jacked this into stunnel. I haven't really benchmarked it, though.
You could probably chroot() this, if there was a desire to use it for something important.
For those who truly miss anonymous FTP, it was hiding inside of SSH the whole time. Shoehorning it back into the browsers is left as an exercise for the reader.
> By the way, do not enable this to all hosts, as it might be considered insecure, especially if you are using a shared user. I only do this on a local server in my home network.
Kind of buried the lede there. Two pages of text followed by "Don't do this, it's unsafe!" is poor form. At least the author shows it in a host-specific configuration.
After covid hit and I had already gone back to my home country, ssh proxy was the only way I could bypass my university's buggy 2FA to put my working hours in time management software every month. To get there, I had to bypass the university's bastion server and log into a cluster which was able to gain access to the internet. Wild times :D
How does that work? Does it try to autodetect whether you're speaking SOCKS to it? What if you want to reverse tunnel a socks proxy from another process like Tor?
Something like "ssh -R 1080 user@B" run on A will start a SOCKS5 proxy on A, to which you can connect on B through port 1080 that's being forwarded there. So your web client running on B will have its traffic proxied through A while connecting to 1080 on localhost.
> Does it try to autodetect whether you're speaking SOCKS to it
I think it just uses SOCKS5 out of the box if you don't specify the host for -R. According to the man page for -R:
    if no explicit destination was specified, ssh will act
    as a SOCKS 4/5 proxy and forward connections to the
    destinations requested by the remote SOCKS client.
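Concretely (hostnames hypothetical), the two directions look like:

    # SOCKS listener on your machine; traffic exits via A
    ssh -D 1080 user@A

    # SOCKS listener on A; traffic exits via your machine (OpenSSH 7.6+)
    ssh -R 1080 user@A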
That makes using SSH ControlMaster automatic and scoped to the lifetime of the terminal session. It also allows easy automatic syncing of local shell and editor rc files to the remote host, easy opening of remote files in the local editor, and it can even clone shell sessions into new terminal windows.
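Assuming that's kitty's ssh kitten being described, the entry point is just:

    # drop-in for plain ssh; per-host setup lives in ~/.config/kitty/ssh.conf
    kitty +kitten ssh user@host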
I use kitty, but I can't use this functionality because I tend to ssh into root-only (or where a single user is shared) boxes. Copying config files there wouldn't be good because it would affect everyone, not just me. That said, it is pretty sweet that this functionality exists.
That's not really a solution TBH; it's basically just copying the files _somewhere other_ than the default. I think it's generally not okay to copy files over to prod servers, and if it's a shared box, you probably shouldn't be leaving behind a trace.
Did I mention that the community support is incredibly good as well?
One of these rare non-toxic environments where you can get an answer to simple problems (usually because you missed something in the excellent docs), or to something more advanced, for which there is a solution as well :)
The SSH agent symlink tip will not fully work. If you connect with a second SSH session and then disconnect, it will point to a stale SSH_AUTH_SOCK that doesn't work. The only way I found to solve this is having a prompt command that refreshes it every time your prompt is rendered, so the worst case is a broken ssh-agent for a single command.
Add this function to your zshrc/bashrc:
    # This function refreshes some env vars that go stale in old tmux sessions.
    # It must be run as a preexec function in zsh or a PROMPT_COMMAND in bash.
    function refresh_env {
        local ssh_auth_sock=""
        if [[ -v TMUX ]]; then
            ssh_auth_sock=$(tmux show-environment | grep "^SSH_AUTH_SOCK")
        fi
        if [[ -n "$ssh_auth_sock" ]]; then
            # shellcheck disable=SC2163
            export "$ssh_auth_sock"
        fi
    }
Then for bash, add this:
if ! [[ "$PROMPT_COMMAND" =~ refresh_env ]]; then
PROMPT_COMMAND="refresh_env; $PROMPT_COMMAND";
fi
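And for zsh, the matching registration would be something like:

    # run refresh_env before each command, per the comment in the function
    autoload -Uz add-zsh-hook
    add-zsh-hook preexec refresh_env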
I often SSH into ephemeral servers like autoscaling EC2 instances. The host key checking is unnecessary and annoying, so I use a different SSH command for that:
    function ec2_ssh
        ssh -F ~/.ssh/ec2_config $argv
    end
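I'd guess the separate config looks roughly like this (a sketch; the point is just to stop recording throwaway host keys):

    # ~/.ssh/ec2_config: ephemeral instances only, never reused
    Host *
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
        LogLevel ERROR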
I feel compelled to be "that guy" for a second and just mention that while this is convenient, it makes you blatantly vulnerable to man-in-the-middle attacks. I mean, host keys exist for a reason. Sure, for many people security may not "matter" but I still think it's good to be aware when you're doing something that's way out of bounds.
I don't know how EC2 instances work in detail, but I imagine there must be a way to get a hold of the host key via an API or something when it's deployed (or maybe at any point) so that integrity could be kept without creating annoyance. I'd be much more comfortable with a solution that, for example, queries AWS over HTTPS before every connection and updates the host key (if necessary).
Yes! I’m not aware of any aws cli command that gives you the host key of a new instance. I’d definitely use that.
The script is designed for ephemeral instances, where I don’t intend to ever connect to the instance again, so saving the host key doesn’t help anything. So, really, anyone not using this hypothetical API is just as vulnerable to such a MITM attack in my threat model.
Back in the day `aws ec2 get-console-output --instance-id $ID` used to contain the public key that can be extracted using regex. Not sure if this still works.
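If it still does, it's the block cloud-init prints between the host key markers, something like (untested sketch):

    aws ec2 get-console-output --instance-id "$ID" --output text \
        | sed -n '/-----BEGIN SSH HOST KEY KEYS-----/,/-----END SSH HOST KEY KEYS-----/p'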
In addition to the security aspect, it helped me understand a few times why I could not ssh into a server by IP.
I was in a secure environment; the prompt for the key appeared and I accepted the new key (I sometimes erase known_hosts, change keys, etc., so it was not suspicious). And I could not log in.
It took me some time to realize that the IP I was trying to connect to was not mine (as in "at home") but on the company network. I realized that when I recalled that I had seen the key prompt.
> I feel compelled to be "that guy" for a second and just mention that while this is convenient, it makes you blatantly vulnerable to man-in-the-middle attacks.
I don't believe that it makes you vulnerable to MitM attacks if you are authenticating with a key.
The server requests that the client prove it holds the private key in a way that is bound to the session ID, which is derived from the shared secret established via ephemeral key exchange.
For those who use (or want to use) SSH to connect to GitHub for Git: 11 years ago I wrote a tool to maintain my SSH settings for GitHub, and I still actively maintain it.
* automated maintenance of the GitHub specific configuration (just run the latest release of github-keygen and your setup is safely upgraded)
* strong security settings
* eases the management of your SSH keys allowed to access your GitHub account
* performance settings (ControlMaster)
Even the yubikey suggestion is suspect. A hardware token can protect access to your private keys by firewalling them on the "first system", which the agent is doing already. And enabling agent support on a remote device is, if anything, only opening new attack vectors; it improves convenience, not security (the remote system has no way to validate that the hardware is in use; it just sees ordinary ssh key pairs). It's not a bad suggestion, but it's not meaningfully different from just using ssh-agent as intended.
Then it goes on to explain how to use ControlMaster to evade the physical-access validation granted by the key! I mean... why even bother with the key in the first place?
ControlMaster itself is a bad idea generically unless you really know what you're doing, because the original connection process needs to stay alive until the last child connection exits. Hands up, all of us who ever inexplicably hung cron jobs and other automated systems by doing interactive stuff at the wrong moment. I know I have. (To be clear: it's still useful as a performance enhancement for remote work, but you absolutely have to know what you're doing and script it carefully. It shouldn't appear like this in a default config line.)
Similarly, CanonicalizeHostname is a recipe for collision with your DNS. If a name doesn't work the way you want it to, you need to fix the naming, not just decide to speak a new language.
> but it's not meaningfully different than just using ssh-agent as intended.
Except for where the key is stored. How much difference that makes depends on the use-case. For a developer laptop with a passphrase-protected key? Not much of a difference.
ControlMaster is great for development too. That should have been prefixed with that caveat.
In fact, it seems that all recommendations are from the point of view of a developer, not automation. That would be a bad idea for automated systems, as you point out.
Yes, it's from the standpoint of me, as a developer, using these things to improve my own workflow... not to automate stuff.
Even the ControlMaster thing: I mentioned it goes against the purpose of the yubikey, but then I explain how I use it only for GitHub and only for a few seconds, to prevent having to touch the key nonstop when updating nvim plugins and such... IMHO it's a good tradeoff...
I would dearly like to have a way to unlock keys on the remote host with my local yubikey. I generate one key for every (user, host, remoteHost) tuple, so when I'm SSH-ed into a remote host, I can't use keys from that remote host to make further connections unless I use a passphrase.
Kind of related, but what I'd love is a way to share part of my local file system with the remote host without using FUSE or requiring root access on the remote system. Something like:
    local $ echo world > ~/src/hello
    local $ sharewith-ssh --dir ~/src:/src code.example.com
    Last login: Wed Aug 16 09:33:55 2022 from 1.2.3.4
    code ~ $ cd /src
    code /src $ cat hello
    world
    code /src $
I think it could be built with a local agent plus a binary that you download to the remote host to intercept file system calls for a path (run it and it execs your shell with LD_PRELOAD pointing at the library), forwarding file system and I/O requests back to the agent over SSH to get at those files. You could even have a permission prompt UI on the local agent to permit/deny individual reads/writes.
vscode connected with ssh is close. It's not quite sharing, because it only allows you to edit the server's file system as if you had opened the project locally, but it does give you the feeling that the project is there, and every time you save, it goes directly to the server via ssh.
Emacs TRAMP mode can do that too; it supports SSH, sudo (to edit as a different user), and I think a bunch of other protocols. It also supports multi-hop, e.g. edit as sudo, on a remote server, connected via an intermediate bastion/proxy machine.
TRAMP works fine with shell-mode, version control (e.g. magit), dired (file browser), etc. too :)
- How is vim + tmux compatibility these days? Particularly with regard to powerline and similar plugins? Is there a good resource that helps one set this up from scratch?
- I find that SSH times out / disconnects if unused, but only on my mac, not on my linux machine. I don't have any of the configs described here on either machine, so I'm not sure why one disconnects and one doesn't. Is there some out-of-the-box macOS default that needs to be changed, or is it some arcane macOS battery-optimization feature that's killing the SSH sessions?
I find I like having "set-option -g mouse on" in ~/.tmux.conf so that mousewheel scrolling feels more natural (like it does in a local terminal).
> I find that SSH times out / disconnects if unused...
This one is a little tricky - it partly depends on the default settings of the remote sshd_config for sending keepalive pings (they changed from Debian 10 to 11, e.g.) and what your local vendor-compiled ssh_config defaults look like. In general, to just solve the problem, add this to the bottom/end of your macOS ~/.ssh/config:
    Host *
        TCPKeepAlive yes
        ServerAliveInterval 300
TCPKeepAlive is what it sounds like: the TCP-level (L4) tweak. ServerAliveInterval is a higher-level ping-pong on the SSH session itself; it's kind of overkill to have both configured, but it Just Works(tm) for most people to have them set on their client. You can look these up in the man pages (ssh_config, sshd_config) and discover even more tweakable options than just these two - some you can set server side, some client side, some both.
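For completeness, the rough server-side equivalents in sshd_config (real option names; the values are just examples):

    # server pings the client through the encrypted channel every 300s,
    # disconnecting after 3 unanswered probes
    ClientAliveInterval 300
    ClientAliveCountMax 3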
Side note: bash has an envvar `TMOUT` -- if it's set, bash will log you out after that many seconds of idling at the prompt. It's usually not set on most Linux server installs; just be aware it exists and is a thing to look for if you're debugging some day.
If you want to copy your SSH key to a different machine (so that you need not type your password every time you log in), you just need to run `ssh-copy-id user@remotemachine` and it works like a charm.
I've worked in an environment where $HOME is mounted on a drive that's shared across multiple servers, so you might want to consider including the hostname in the socket name (logging in on another server while already logged in can result in confusing disconnections).
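A sketch of what that can look like, using the %L token (short local hostname) so sockets from machines sharing one NFS $HOME don't collide:

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm-%L-%r@%h:%p
        ControlPersist 10m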
Slightly off topic, I want to shout out the author, Carlos Becker, for maintaining GoReleaser which has been incredibly valuable for us at Caddy to automate most of our release process. It really makes it a breeze. I sponsor him on GitHub, and I think you should too, if you use the tool as well.
Not the author of the article, but it allows you to use your yubikey wherever you would normally use agent forwarding. If I, for example, SSH to server foo, do some work, and then need to connect to server bar from server foo, I can use ssh -A bar. As long as both foo and bar are set up to accept my yubikey, it will work.
This can be useful if bar is configured to only be reachable by foo.
In general it's not a good idea to forward the agent just for the purpose of jumping to the next machine. If the first machine is compromised, someone may get to use your private key. For mere jumping, you should use "ssh -J jumpuser@jumpbox targetuser@targetbox".
Yup, this is true in general, but this question was specifically about yubikey-agent. When configured properly, yubikey-agent completely replaces ssh-agent, so you're no longer forwarding your private keys, you're forwarding the ability to request authentication through your yubikey. When this happens, again, assuming you have it configured right, your yubikey will blink, and will not grant the request unless you touch the little button on it. This makes agent forwarding safer, because an attacker can't impersonate you unless you agree to tap a physical button on your yubikey.
In some workflows, it's easier and faster to open a temporary connection to server bar from server foo using ssh -A, rather than opening a new terminal and using ssh -J.
This seems like the right place to ask: does anyone know of a hack or modification that will, e.g., dump the hostname on a ~? -- so many times I'm a few layers deep and likely to miscount.
Not quite -- I want something that will show me the hostname or instantiating command from the SSH escape-sequence subsystem (e.g. if I'm N SSH sessions deep and hit ~~~., where will the kill land?).
Is there a way I can rig it so every machine I ssh into has already had 'set -o vi' run for my session, without having to change anything on the machine I'm logging into?
I did try that but it just runs 'set -o vi' and exits. He shows using it to launch a multiplexer... I suppose it's an alternative to running 'ssh host command'. I tried to figure out a way to invoke 'set' followed by the remote user's $SHELL but couldn't figure it out. I'll keep fiddling; thanks for your reply.
    All of the single-character shell options documented in the
    description of the set builtin command, including -o, can be
    used as options when the shell is invoked.
So set RemoteCommand to something like /bin/bash -o vi
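In ~/.ssh/config terms, something like this (RequestTTY is needed so the shell comes up interactive):

    Host myserver
        RequestTTY yes
        RemoteCommand /bin/bash -o vi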