
SSH hacks – a little sanity for remote workers - mxfurman
https://smallstep.com/blog/ssh-tricks-and-tips/
======
m463
I always create and heavily use ~/.ssh/config

    
    
       Host x
           Hostname full.host.name.com  (or 1.2.3.4)
           User <myuser>
           IdentitiesOnly yes
           IdentityFile ~/.ssh/id_x_ed25519
    

I give hosts short names so you can `ssh x`

to do automatic login, I generate identities for some machines

    
    
      ssh-keygen -t ed25519 -f ~/.ssh/id_x_ed25519
    

use ssh-copy-id to copy the identity to the target machine so it lets you in:

    
    
      ssh-copy-id -i ~/.ssh/id_x_ed25519.pub x
    

or if your machine doesn't have ssh-copy-id (older macs):

    
    
      cat ~/.ssh/id_x_ed25519.pub | ssh x "cat >> .ssh/authorized_keys"
    

IdentitiesOnly means it will only send that one identity for that one machine
(otherwise it will try all of them, like a janitor trying to open a locker
with a big keychain of identical keys)

If you always want to use a password to log into a machine, but want to be
able to log in in other windows to the same machine without a password:

    
    
      Host x
          ...
          ControlMaster auto
          ControlPath ~/.ssh/master-%r@%h:%p
    

this will multiplex all activity to that host through one tcp connection

you can also use Host * at the beginning of your config to do this for all
hosts
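Combined with the multiplexing options above, a `Host *` stanza might look like this (the `ControlPersist` line is my own addition, not from the comment above; it keeps the master connection alive for a while after the last session exits):

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    ControlPersist 600
```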

to tunnel vnc over ssh to a remote mac (I do this with a mac)

    
    
      Host foo
            ...
            LocalForward 5900 localhost:5900
    
    
      ssh foo
    

then locally vnc to 127.0.0.1:5900

~~~
mrweasel
I love the multiplexing feature. We have a client who requires a password, SSH
key, and MFA. All services are behind a bastion host, which only accepts
traffic from select IPs. SSH multiplexing and proxy configuration allow me to
enter the password and TOTP just once instead of every time I need to access a
service behind the bastion host.

~~~
zengargoyle
Of course, if the multiplex stalls or goes down, so do all of the connections
you have running through it. It's like screwing up Gnome Terminal or similar
programs that use a main server and multiple clients: screw one pooch and your
whole session of things is similarly screwed.

~~~
mrweasel
True, but I never see SSH go down, unless there's a network issue, in which
case I lose all my connections anyway.

~~~
beagle3
Indeed; however, if you have a lingering control file (because an old ssh
process was killed, or there was a power failure and it was somehow not
removed), it will either refuse to use it, or (occasionally, and I haven't
been able to pinpoint when), would just wait there forever.

edit: a typo

~~~
dkdk8283
Sounds more like a bug than a disadvantage

------
segfaultbuserr
> _What are your favorite SSH tips & tricks?_
    
    
        $ ssh -J user1@host1 user_final@host_final
    

or

    
    
        $ ssh -J user1@host1,user2@host2 user_final@host_final
    

Not many people know this: you don't need to launch an SSH session within an
SSH session. SSH has built-in support for using one SSH server as a proxy to
another SSH server. Useful for <del>hacking servers</del> accessing servers
behind a firewall, or using your own server as a proxy to bypass a bottleneck
in the network.
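The same chain can be made permanent in ~/.ssh/config with the ProxyJump directive (host and user names below are placeholders matching the example command):

```
Host host_final
    User user_final
    ProxyJump user1@host1,user2@host2
```

After that, a plain `ssh host_final` hops through both proxies automatically.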

~~~
InvaderFizz
Another lesser known tunneling trick is that SSH will happily act as a SOCKS5
Proxy. I've been using this trick for going on 20 years now.

Just do: ssh -D9090 user@remote

Then, in Firefox, set it to use a SOCKS5 proxy of localhost:9090 and enable
"Proxy DNS when using SOCKS v5".

Now, when you use Firefox, it is as if you are browsing from the machine you
are SSH'd into (including DNS resolution!). This is really handy for things
like accessing otherwise unreachable resources or other internal resources
externally. I use it for accessing iDRAC/IPMI/ESXi (you can also tell Java to
use the proxy so VMRC works as well). It is also handy for making all your
web traffic originate from a remote VPS with no advanced setup required.

~~~
orbitingpluto
I used to do this all the time over port 53.

My closest coffee shop would allow people to access Wi-Fi only if you gave
them full access to your Facebook account. DNS was the only port open to the
outside world.

~~~
MaxBarraclough
Interesting that they permit TCP port 53, rather than just UDP port 53.

~~~
Piskvorrr
My guess is that it stems from lax firewall defaults. "Allow port 53 -
[tcp/udp/BOTH]?" (Yeah, I know that DNS can also work over 53/tcp, but it's
rare compared to the 53/udp volume)

~~~
acdha
It's not as rare as it was a couple of decades ago. If you block tcp/53 you
will find a surprising number of things breaking, as record sizes have
increased over the years.

------
montroser
One that has come in handy a few times: When a machine is so starved for
resources that it can't even allocate a pts for you, but you want to run some
forensics, use `-T`:

    
    
        $ ssh -T user@host <command>
    

Even if you're plumb out of file descriptors, for example, you can run...

    
    
        $ ssh -T user@host lsof
    

...or whatever, and get your command output dumped to the screen, even if you
don't get the niceties of a terminal.

~~~
shawnz
Specifying a command to run automatically implies '-T'.

~~~
jlokier
Unless you have "RequestTTY yes" in your ~/.ssh/config.

------
acdha
I highly recommend using the canonicalization feature if you use multiple
networks:

    
    
        CanonicalizeHostname yes
        CanonicalDomains example.org
    

That ensures that you never have the same host listed twice under the bare
hostname and the fully-qualified version, avoiding the need to change keys
twice when you rotate them.

This setting allows you to automatically accept keys for new hosts but still
report conflicts for existing hosts:

    
    
        StrictHostKeyChecking accept-new
    

I highly recommend using the control-master feature to keep a persistent
connection open to servers you access a lot. This makes new connections and
tools like Git, scp, sftp, rsync, etc. much faster:

    
    
        Host *.amazonaws.com github.com *.github.com gitlab.com *.gitlab.com *.googleusercontent.com
         ControlMaster auto
         ControlPath ~/.ssh/control/%C.socket
         ControlPersist 600
    
    

On macOS, you can use an X.509 certificate on a device like a YubiKey as the
SSH key, so you can authenticate everywhere with the private key never leaving
the token and, should you set it up that way, requiring a tap to use.

This will enable the provider:

    
    
        PKCS11Provider=/usr/lib/ssh-keychain.dylib
    

This will get the public key:

    
    
        ssh-keygen -D /usr/lib/ssh-keychain.dylib
    

(see
[https://piv.idmanagement.gov/engineering/ssh/](https://piv.idmanagement.gov/engineering/ssh/)
for other platforms)

------
asdff
My advice is to make heavy use of your .ssh/config

Here is an example of some of mine (double spaced to read clearly on HN):

Host *

    
    
            Ciphers aes128-ctr
    
            Compression yes
    
            ServerAliveInterval 120
    
            ForwardX11 yes
    

Host example

    
    
            HostName example.url.com
    
            User my_name
    
            ForwardX11Trusted yes
    

I keep global settings behind the glob, and more specific settings for all the
hosts I use.

No need to use aliases in your .bashrc or wherever. With this setup, typing
'ssh example' is equivalent to 'ssh -XCY -c aes128-ctr
my_name@example.url.com' which definitely saves some keystrokes.

------
madars
> When debugging a complex issue on your servers, you might like to share an
> SSH session with someone who is not in the same room. tmux is perfect for
> this for terminal sharing! Here are the steps: > [..] > Both of you will
> need to SSH to the machine using the same account.

If you want, it is very easy to do view-only tmux sessions with no third-party
tools required. That is, you start your tmux specifying a socket (tmux -S
...), and then have a dedicated ssh user which references it (tmux -S ...
attach -r, where -r is for read-only) as sshd's ForceCommand (a 10-liner
[https://gist.github.com/madars/e6b957ea508be1dcd9044fd2c7096...](https://gist.github.com/madars/e6b957ea508be1dcd9044fd2c70969ac))
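A minimal sketch of that setup, assuming a made-up socket path and a dedicated account named "viewer" (the linked gist has the real version):

```
# The host starts tmux on an explicit socket and lets the viewer's
# group open it:
#     tmux -S /tmp/debug.sock new -s debug
#     chmod 660 /tmp/debug.sock

# In sshd_config, force the read-only attach for that account:
Match User viewer
    ForceCommand tmux -S /tmp/debug.sock attach -r
```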

~~~
mnw21cam
Or you can execute a shell inside ytalk.

------
tomc1985
I miss having a job that required SSH'ing into a server to get work done.
Nowadays everything is abstracted, push your code to the magic cloud and pull
in 1000x other stupid little APIs. It's so boring...

~~~
mvanbaak
So, go make one of those APIs

------
jldugger
IMO among the most useful ssh config settings is the Include directive, which
supports wildcards. Hence, the following is the entirety of my ~/.ssh/config:

    
    
      Include config.d/*
      Include hosts.d/*
    

config.d is basically one 'Host *' file, but hosts.d lets me keep the random
host/device settings for work and personal use separated.
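As a sketch, the resulting layout might look like this (the file and host names below are made up):

```
# ~/.ssh/config
Include config.d/*
Include hosts.d/*

# ~/.ssh/config.d/defaults
Host *
    ServerAliveInterval 60

# ~/.ssh/hosts.d/work
Host build
    HostName build.internal.example.com
    User me
```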

------
Tehnix
I'd recommend wemux[0] for anyone wanting to share tmux sessions with other
people (or even just themselves). It supports several modes:

\- mirror: to attach to server in read-only mode.

\- pair: attach to server in pair mode, allowing the client to control the
terminal as well.

\- rogue: attach to server in rogue mode, which allows both editing with the
host and switching to windows independently from the host.

I personally use rogue to share tmux sessions between my own devices without
affecting whatever I'm doing on the other devices.

[0]: [https://github.com/zolrath/wemux](https://github.com/zolrath/wemux)

------
kazinator
My favorite SSH trick is to have a machine at work SSH back to my home domain,
and provide a tunnel back for Remote Desktop or what have you.

Wee, no VPN to deal with. No _lack_ of a VPN for remote access to deal with.

[https://cygwin.com/pipermail/cygwin/2020-April/244384.html](https://cygwin.com/pipermail/cygwin/2020-April/244384.html)

~~~
MaxBarraclough
That's likely to be a firing offence, no? If I were running things I wouldn't
want employees deliberately subverting my network's security measures in the
name of their own convenience.

If you have to spend time wrestling the VPN while you're on the clock, that's
their own time being wasted.

~~~
kazinator
All you would have to do is disallow outbound SSH.

~~~
acdha
That's tricky, though, since there are many uses for SSH which are not
circumventing security policy — blocking outbound SSH would also mean you
couldn't use Git, manage servers in the cloud or other locations, transfer
files, etc.

Using this to circumvent policy is exactly the kind of move which would lead
to those other uses being banned and making life worse for all of your
coworkers.

~~~
kazinator
> _blocking outbound SSH would also mean you couldn 't use Git, manage servers
> in the cloud or other locations, transfer files, etc._

That's right, and that's how it is with my current employer.

If you need outbound SSH to work with Git, that probably means you're working
on a side project, not work. Fetching public code needed for work from a
hosting site can be done over HTTPS.

Managing servers in the cloud, ditto. If managing servers in the cloud isn't
part of your job description, why would your workplace enable that?

------
fredley
Is your SSH session hung, and you can't even Ctrl-C out? Typing ~ . Enter (hit
tilde, then period, then Enter) will immediately kill the session and drop you
back to your local shell.
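A few other escape sequences are worth knowing (typed right after a fresh Enter; see the ESCAPE CHARACTERS section of ssh(1) for the full list):

```
~.    terminate the connection
~^Z   suspend ssh
~C    open a command line (e.g. to add or cancel port forwards)
~~    send a literal ~
~?    display a list of all escape sequences
```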

------
carapace
The coolest thing I ever saw in the wild, a guy I once worked with wanted to
transfer a directory of files from my machine to his (and maybe show off a
little.)

This was long enough ago that I didn't know quite how to proceed (time before
rsync, scp...? Nah...) so he asked if he could do it and I let him have the
keyboard.

He tar'd (with z) the dir, piped the output of tar to ssh, with a remote
command to cat it out there through tar again, all in one CLI line. Blew my
mind at the time. UNIX philo FTW.

~~~
rsync
From a _very old_ notes file I have ...

    
    
        tar cf - /mnt/data2 | ssh user@10.0.0.10 "cat > /mnt/data1/file1/file_data2.tar"
    

... and if the directory is large, you can 'split' it into multiple files:

    
    
        tar cfp - /mnt/data1 | ssh user@10.0.0.10 "split - -b 1024m /mnt/data1/file1/file_data2.tar"
    

This is really no longer relevant because, of course, we all just use rsync
... but in the _modern_ world, my favorite example of the unix philosophy is:

    
    
        mysqldump -u mysql db | ssh user@rsync.net "dd of=db_dump"

~~~
jandrese
None of those are quite right IMHO:

    
    
        tar -czvf - <sourcedir> | ssh <user>@<remotehost> tar -xzf - -C <remotedir>
    

This is _much_ faster if you're sending over a directory with a lot of small
files, especially if the link has even a modest amount of latency. The 'z'
parameter can be omitted if the source files are not compressible (media files
or already compressed).

If the files are highly compressible but very large you might consider this
instead:

    
    
        tar -cvf - <sourcedir> | pbzip2 -c | ssh <user>@<remotehost> tar -xf - -C <remotedir>
    

------
axegon_
A few I can't live without off the top of my head:

* "-D" for ssh tunneling.

* "-L" for forwarding all traffic on a specific port.

* sshfs - mount small directories over ssh on your local machine (works with big ones but way too slow, for big ones I go rsync).

~~~
Piskvorrr
If you work with a large tree where a few files at a time change, you may want
to look at lsyncd - backed by inotify and rsync, syncs the local changes to
remote. Not really suited for interactive edits, but if you find yourself
running rsync in a loop, this is a better replacement.

------
kqr
Are the numbers for the ServerAliveCountMax and ServerAliveInterval
accidentally swapped? Wouldn't it make much more sense to check every second,
and fail if five consecutive checks failed, rather than check only every five
seconds, and then fail immediately if one check is dropped due to very
transient network issues?
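For what it's worth, the suggested tuning would look like this; the effective timeout is ServerAliveInterval × ServerAliveCountMax, so both orderings give up after roughly five seconds, but this one tolerates transient drops:

```
Host *
    ServerAliveInterval 1
    ServerAliveCountMax 5
```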

~~~
zepearl
Yeah, I would set as well a higher value for "ServerAliveCountMax".

------
bogomipz
I would like to mention sshuttle if your access is only via a jumphost and you
don't want to create a port forward for every single host/port you want to
connect to on the internal network. It basically acts like a cheap VPN:

[https://github.com/sshuttle/sshuttle](https://github.com/sshuttle/sshuttle)

[https://sshuttle.readthedocs.io/en/latest/overview.html](https://sshuttle.readthedocs.io/en/latest/overview.html)

~~~
beagle3
More votes for sshuttle!

It's a poor man's one way VPN: It inherits encryption/integrity/authentication
(and some authorization) from ssh; It works incredibly well; For most
practical network purposes it puts you on the computer you are sshuttlling to;
And all it needs on that computer is the ability to ssh into it and some
version of python - no special privileges or prior installations.

The bad: It only does TCP (and some UDP magic to make DNS work, but not UDP in
general). It's only one way (no one on the destination network can "call you
back", as you don't have an IP on that network). The only config is which
network addresses get routed across the sshuttle (no policy / rules /
firewall / anything else). And you appear to come from the computer you
sshuttled to (so, unlike a real VPN, for better or worse, no policy along the
way can tell you are coming from outside).

~~~
donaldihunter
Yep, sshuttle is awesome. It's also used under the covers by telepresence for
connecting into a k8s cluster.

[https://www.telepresence.io/](https://www.telepresence.io/)

------
gbrayut
One of my most used commands is Ctrl+d for closing the input stream and
causing a remote or local shell session to exit

[https://unix.stackexchange.com/q/110240/8692](https://unix.stackexchange.com/q/110240/8692)

------
14k12j41j211
The biggest gain I've had in recent years was vscode remote. Instead of
fiddling with terminal editors forever, just have a decent ssh config with all
your hosts, and connect instantly.

Still looking for a way to make sftp/scp work fast.

~~~
lucb1e
> Still looking for a way to make sftp/scp work fast.

Making it fast while still using it, I'm not sure. But I can share an
alternative, since a friend had the same issue and this was literally ten
times faster for a lot of small files (on the order of 30 minutes instead of
5 hours):

    
    
        cd path/to/target/location
        ssh user@target 'tar c /tmp/example' | tar x
    

Quick guide to tar, since it's super simple:

    
    
        c for create
        x for extract
        f for file (since we send it to stdout / read from stdin, I don't use the f option)
        v for verbosity (not sure if that works on the remote side)
        z for gzip compression (ssh can do compression already, so also unused here)
        t for testing (reading) an archive without extracting it ("tar tv <your.tar" will show you the contents, it's almost like real TV!)
    

That's all I've ever needed.

So what this will do is run "tar create <directory>" on the remote system and,
tar being a classic tool, it'll just output that binary archive data to stdout
since you didn't specify a file ("tar cf your.tar <directory>"). On the
receiving side, you pipe it to another tar command that reads from stdin
(since, again, no file was specified) and extracts the files from the incoming
data stream.
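The stdout/stdin plumbing described above is easy to try locally without any ssh in the middle; the paths below are made up for the demo, and `-f -` is spelled out explicitly since not every tar build defaults to stdout/stdin:

```shell
# Build a small scratch tree to copy.
mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo "hello" > /tmp/tardemo/src/a.txt

# Same pattern as the ssh version: one tar writes the archive to stdout,
# the other reads it from stdin and unpacks it elsewhere.
tar -cf - -C /tmp/tardemo src | tar -xf - -C /tmp/tardemo/dst

# The file arrives under dst/src/ because the archive stored the
# path relative to /tmp/tardemo.
cat /tmp/tardemo/dst/src/a.txt
```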

------
jefftk
This skips my favorite reason to use mosh: it has predictive local echo, which
makes high-latency connections much more usable.

~~~
rcpt
Mosh would be so great if it worked with the mouse.

~~~
lucb1e
Isn't that called remote desktop?

~~~
rcxdude
Terminals can have mouse support. It works with some TUI apps like vim.

~~~
lucb1e
All the terminal software with mouse support I've found so far just messes
with things; I wasn't aware that people actually use this. (E.g. in vim,
enabling mouse support messes with the yank buffer and doesn't let me select
things for clipboard copying anymore; I immediately turn it off in setups
where some overzealous maintainer default-enabled it.) The amount of software
with support is very low anyway, so if one really wants to use a mouse, remote
desktop might serve one better.

~~~
sesuximo
?? Sounds like mosh is the only program in the stack that doesn't support this

------
pinopinopino
I am missing certificates from this list: you never have to distribute
individual public keys, and you can sign certificates with an API. They are
also nice for giving someone temporary access, because they are only valid
within a time interval.

~~~
mmalone
Yea we like certificates at smallstep. We’ve got a couple[1] other[2] posts[3]
that cover them pretty well. Should have probably made a more prominent
mention though :).

[1] [https://smallstep.com/blog/use-ssh-
certificates/](https://smallstep.com/blog/use-ssh-certificates/)

[2] [https://smallstep.com/blog/diy-single-sign-on-for-
ssh/](https://smallstep.com/blog/diy-single-sign-on-for-ssh/)

[3] [https://smallstep.com/blog/smallstep-
ssh/](https://smallstep.com/blog/smallstep-ssh/)

------
rubatuga
You need to press Enter before using the SSH ~ escape key.

~~~
downerending
Yup. I have no idea why they call it an _escape character_. It's an _escape
sequence_, always of length two.

~~~
tashian
And on an international keyboard it’s ~~, because ~ defaults to being a
character modifier. If you nest SSH sessions, then you add more ~s. So in your
fifth nested SSH session on an international keyboard the escape sequence
would be \n~~~~~~~~~~.

~~~
downerending
Hmm. Is that right? I thought you could type ~~ to send a ~ through to the
destination. So, ignoring the international aspect, I was thinking you'd type
~ to escape your first target, ~~ for the second, ~~~~ for the third, and
~~~~~~~~ for the fourth. (Too lazy to test it.)

Perhaps better is to set a different escape char for layers you care about.

~~~
rubatuga
I think the parent is still right, just a problem with international
keyboards.

------
outluch
mosh + tmux for sessions of uptime length

ssh-copy-id to never enter passwords again

~/.ssh/config add servers to never write hostnames etc again

and repo on gitlab with bash script that installs all of this and sets up own
config of zsh and tmux for me on any new ubuntu server machine.

what else to dream about? curious

------
kbenson
I was tired enough of losing connections to work systems I was working on when
network topology changes, or my laptop was moved, or it went to sleep, or I
moved to a new computer (e.g. I'm at home) that I wrote a simple script to
jump all my ssh connections through a VM at work, but with the extra step that
the connection from the jump VM happens in a tmux that's named based on the
desired host, and with options to reconnect to an existing session if it
exists.

With the script named "go", here's what that allows in practice:

go foo.bar - Connects to host foo.bar

go foo.bar - Second connection to host foo.bar that uses same session, so
keystrokes show in both, even if they originate from separate locations, like
home and work.

go foo.bar 2 - Additional param is added to session name, so you get a new
connection to foo.bar.

go -list - Lists all connection sessions, and only connection sessions,
because there's a special prefix to distinguish them from normal tmux sessions
that might exist on the VM.

go -restore - Spawn a new terminal for all open connections. Useful for
getting all terminals back after the network drops, or you reboot, or you're
on your home computer instead of work, etc.

Currently this is implemented in a batch script on Windows with some ugly
hacks to make it work well with what PuTTY's command line options support
(commands for the remote host need to be in a file you specify...), and it's
pretty ugly, but I'll share it if anyone is interested. It would be much
easier in bash with OpenSSH (it's even possible OpenSSH supports enough
features to do this in the ssh config).

~~~
geocrasher
Why not just use Mosh? Its stateless connections are persistent even when
internet connectivity isn't.

~~~
sh-run
Or better yet, eternal terminal. That way you can keep your scrollback

[https://eternalterminal.dev/](https://eternalterminal.dev/)

~~~
kbenson
I do have my scrollback buffer. The tmux session is on the VM, and within that
session is another ssh connection to the target system. If the target system
and the VM are disconnected (very unlikely without either the target or VM
restarting) then sure, I might lose my scrollback (since tmux is execed with
the SSH command, when it exits the session will end), but in the much more
common scenario that my side loses connectivity to the VM (or I change
locations), the VM still has an active connection going on in a tmux session
that I'm joining.

ET looks great for a lot of things, but not necessarily this environment,
which is a few hundred systems administered by multiple people, with extremely
high stability and security requirements. Honestly, all the extra stuff ET and
mosh do gives you that extra 1-2% of features to make it seamless, at the
expense of separate protocols and new software, so that you don't have to
expend new hardware (or in this case, virtualized hardware).

Connectivity problems almost always come from the last mile, whether that's
you moving to make the last mile somewhere else or your wifi or home
connection having a problem. A VM at Digital Ocean, or in my case the highly
redundant and available VMware cluster at work, is much less likely to have
any sort of problems, as are the servers that are generally being connected to
(and if those ARE having problems, you can't rely on sessions to them being
kept anyway).

For 99% of the cases, you can get by easily by just SSHing to a highly
available VM, starting a tmux session for the desired connection, and within
that session SSHing to the desired system. Jumping through other systems with
SSH is so common that OpenSSH has features built in to support it, even
transparently (where your config can just make it automatic for a class of
systems). In fact, I bet there's a way to get the OpenSSH Proxying SSH server
to keep the session open to reconnect to from the client if it's only the
client side that had a problem, so it doesn't even require the little script I
have. It's actually on my todo list to figure out the Windows-included OpenSSH
agent stuff and see how well the new Windows Terminal works as an SSH
terminal, but I haven't gotten around to it (or to just using the WSL stuff,
but I haven't seen much need for it yet; I'm happy to do most of my dev work
in vim on a dev server).

------
neilv
I've also long used SSH in various simple tunnels for my personal laptop's Web
browser and/or mail client, such as through EC2 instances (and at one point
also through a filtering HTTP proxy). Here's one version of it.

    
    
        while true ; do
            # TODO: make this do desktop notifications instead of osd_cat
            echo "TUNNEL CONNECTING" | osd_cat --pos=middle --align=center --lines=1 \
                --font="-unregistered-latin modern sans-bold-r-*-*-140-*-*-*-*-*-*-*" \
                --color=green1 --outline=2 --outlinecolour=white --delay=1
            ssh -2 -4 -a -k -n -x -y -N -T -D 127.0.0.1:1234 user@example.com
            sleep 3
        done
    

Separate from these little personal tunnels, there are some additional SSH
timeout options (sorry, I don't have them handy at the moment) that I've found
frequently helpful in my uses of SSH at work, plus an external timeout wrapper
that can kill the ssh process for long-running scripts dealing with a non-
OpenSSH server, but they've almost never been necessary in practice for these
personal tunnels.

------
danielecook
If you are working on an HPC cluster:

[1] The ssh package for vscode is great

[https://code.visualstudio.com/docs/remote/ssh](https://code.visualstudio.com/docs/remote/ssh)

[2] Also check out rsub, which allows you to open a remote file in your local
editor from the terminal

[https://github.com/henrikpersson/rsub](https://github.com/henrikpersson/rsub)

------
xioxox
Everyone probably already knows this, but enabling compression (-C) makes
running remote X programs more usable over slower connections.

~~~
MaxBarraclough
Not a bad tip, but using gzip compression over the wire seems pretty stone-
age. The proper solution is surely to use a modern lossy video-compression
algorithm. Is that possible with X?

It's not something I know a lot about. Is this where VNC steps in?

~~~
lucb1e
I don't know much about it, but I'm fairly sure that compressing the Xorg data
stream with lossy compression is going to mess it up completely and would
require a complete overhaul of the protocol to make that work. VNC is indeed
the more standard unix thing (insofar as remote GUIs can be considered
standard on unix-likes) that applies lossy compression to the pictures being
sent over.

------
dijit
This reminds me of the ugliest hack I have ever written.

For context: passwords rotate every 90 days, there are different passwords for
client facing and "internal" servers, (and different passwords for linux
machines vs Windows machines).

All connections to client facing servers (which is my job) must go via:

1) a VPN

2) a "local" jumphost (both ssh/rdp), only accessible via VPN

3) a "remote" jumphost (also, both ssh/rdp), only accessible via the "local"
jumphost".

Additionally; The majority of my servers are Windows based.

So, what do you do when everything goes wrong? well, you VPN with your
"normal" password, and your 2FA RSA token. Then you rdp (or SSH) to the local
jumphost with the same password as the vpn, then you rdp (or, ssh) to the
remote jumphost with a different password, then you finally RDP into the
machine that is interesting.

So, being the lazy git that I am, I wrote a program that scrapes my passwords
from 1password, and ssh's into those jumphosts creating a tunnel all the way
through. Then I call freerdp on localhost.

For this to work I had to do a bunch of ugly things like:

1) figure out the dimensions of my display and scale everything, because
freerdp doesn't do this automatically.

2) call python from bash because getting a unique random socket requires
binding to "port 0" which is not something I think is possible inside of bash.

3) do the same on each hop.

4) determine which password is needed based on the "domain" of the machine

5) detect if the machine is actually accessible or not (IE; are you on the
VPN? is the machine actually "local"?)

Anyway, I should share the code so we can all revel in its ugliness.
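The "bind to port 0" trick mentioned in step 2 is a one-liner in Python; this is my own minimal sketch of it, not dijit's actual code:

```python
import socket

def free_port() -> int:
    """Ask the kernel for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        # getsockname() reveals which port the OS actually picked.
        return s.getsockname()[1]

print(free_port())
```

Note the small race: the port is free when chosen, but another process could grab it before ssh binds to it; in practice that's rarely a problem for local tunnels.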

~~~
1cvmask
When will you share it?

------
hawski
At home, install nss-mdns on Linux. It uses Avahi for mDNS name resolution on
your LAN. You can then forgo /etc/hosts and DHCP reservations between your
machines at home. It is compatible with Macs. I don't know what you can use on
Windows for the purpose.

Then to extend this a notch or two install Zerotier on all your hosts. Now you
have virtual LAN between all your machines even outside of your home. It is
P2P and does hole punching and whatever you need to work. You can connect to
your computer via tethering from a puny little laptop while sitting on a bench
outside your house. If you are a home admin for your family you can add those
computers to your virtual LAN. Or your friends can also join. Then you can
easily share photos or whatever straight with Samba or even an intranet of
sorts.

~~~
AnthonBerg
Strongly seconded! (Tossing some further dots to connect into the mix: Mosh
and tmux and iTerm2's tmux integration fit in very well too. I think Visual
Studio's Live Share should also work faster over ZeroTier, since it would
serve as a direct connection. I'm also really keen to try Emacs multiuser
editing on a remote terminal over mosh and tmux.)

And: I don't know if it's the placebo effect, but it seems to me that
connections over ZeroTier are noticeably more responsive. Like slightly but
noticeably.

~~~
hawski
Connections on ZeroTier itself often survive roaming. I was amazed when I went
from tethering outside to home Wi-Fi and my ssh session was still responsive.
Although it can freeze and definitely can time out. I'm yet to try mosh.

I did not hear about Live Share, but was planning on using Visual Studio Code
Remote Development. For my current work ssh -CX is often enough for me. I also
intend to use Xpra, as I found X2Go a bit more rough around the edges.

It could be interesting to measure the effect. Probably they did some testing
already, but probably as you said - placebo ;)

------
gumby
Just run a remote command, via ssh foo the_command

I commonly do ssh remote ‘(cd foo; git pull; make && run)’

Of course with tramp I don’t have to do this as much as I used to.

------
flyinprogrammer
[https://gravitational.com/teleport/docs/](https://gravitational.com/teleport/docs/)

Is another great tool for getting off SSH.

Similarly, if you're in AWS and not using Session Manager, you're missing out
on a huge value add: [https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html)

~~~
mmalone
Disclosure: not OP, but work at smallstep

Smallstep has a product[1] that's a lot like gravitational teleport. That's
how we got deep enough into SSH to write this post. Teleport isn't bad. The
two biggest differentiators are probably:

\- Smallstep SSH is hosted (with HSM-backed private keys)

\- Smallstep SSH does user & group syncing from your identity provider (i.e.,
you don't need to adduser / deluser folks anymore) so you don't need to do any
user or key management on servers at all

We're also doing everything using standard OpenSSH, whereas teleport replaces
your SSH client & server (or at least it used to, skimming their docs it looks
like that might be changing). Authentication is via OAuth OIDC (single sign-
on), user & group sync is via SCIM, plus PAM & NSS on the host side. So it's
all pretty standard stuff.

Finally, Smallstep SSH is built on our open source toolchain, step[2] and
step-ca[3]. Actually, if you want something completely free that does all of
this you can just use those tools and do something like gravitational
yourself. We have a blog post[4] explaining how.

This product is only a couple weeks old, so feedback is very welcome!

[1] [https://smallstep.com/sso-ssh/](https://smallstep.com/sso-ssh/)

[2] [https://github.com/smallstep/cli](https://github.com/smallstep/cli)

[3]
[https://github.com/smallstep/certificates](https://github.com/smallstep/certificates)

[4] [https://smallstep.com/blog/diy-single-sign-on-for-
ssh/](https://smallstep.com/blog/diy-single-sign-on-for-ssh/)

------
pawurb
2FA for SSH sounds great, but I can only imagine how cumbersome it must be to
use on a daily basis. I've recently started keeping my SSH ports permanently
locked and only briefly whitelisting them for my IP with a bash script whenever
access is needed [https://pawelurbanek.com/ec2-ssh-dynamic-
access](https://pawelurbanek.com/ec2-ssh-dynamic-access)

~~~
mmalone
Check out our single sign-on for SSH stuff at smallstep (where I work). Either
in open source[1] or our product[2]. The hassle of 2FAing all the time is one
of the big reasons I love single sign-on for SSH. Basically, you do 2FA when
you're pushed through single sign-on. But then you're issued a short-lived
certificate that gets put in your ssh-agent. You only need to 2FA to _get_ a
certificate. So you can tune 2FA challenge frequency based on certificate
lifetime.

It's sort of like a browser session where you "login" and then you can browse
until your cookie expires. Here you "login" and you can SSH until your
certificate expires. So you have strong authentication, but you're only asked
to do it periodically.
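The moving parts here are standard OpenSSH. As a rough sketch (all names illustrative; a real SSO flow would perform the signing step on a server after authentication), an SSH CA signs a user key with a short validity window, which you can inspect with ssh-keygen:

```shell
# Minimal sketch of a short-lived SSH user certificate (illustrative names).
ssh-keygen -t ed25519 -f ca_key -N '' -q        # CA key pair
ssh-keygen -t ed25519 -f user_key -N '' -q      # user key pair

# Sign the user key: key ID "alice", principal "alice", valid for 4 hours.
ssh-keygen -s ca_key -I alice -n alice -V +4h user_key.pub

# Inspect the resulting certificate; note the "Valid:" window.
ssh-keygen -L -f user_key-cert.pub
```

Servers then trust the CA via `TrustedUserCAKeys` in sshd_config, so no per-user authorized_keys management is needed; an expired certificate simply stops working.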

[1] [https://smallstep.com/blog/diy-single-sign-on-for-
ssh/](https://smallstep.com/blog/diy-single-sign-on-for-ssh/)

[2] [https://smallstep.com/sso-ssh/](https://smallstep.com/sso-ssh/)

------
d33
Surprise not to see tmate.io get more attention:

[https://tmate.io](https://tmate.io)

It's packaged for most Linux distributions and offers a seamless "share your
terminal" experience over the web. It has an optional read-only mode, and users
can access the terminal via https and ssh. Great for low-bandwidth
videoconferencing.

------
ransom1538
screen.

If you are out there and you DO NOT want to lose your session due to a network
error, screen is for you! Even better, you can have your coworkers join your
screen or reattach to it and watch you use ssh.

~~~
lozf
I loved screen in the 90's, but many of us prefer tmux these days.

~~~
aarbor989
Can I ask why? I've tried it out (albeit, very briefly), and it just seems
like the same thing but with different command shortcuts. Is it just because
it's newer?

------
chaps
If your ssh connection dies, typing these keys, in order, will kill the ssh
process before the timeout kicks in: Enter ~ .
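For context: `~` is ssh's escape character, and it is only recognized immediately after a newline, which is why the sequence starts with Enter. From ssh(1), the same pattern gives a few other useful sequences:

```
<Enter> ~ .     terminate the connection
<Enter> ~ ^Z    suspend ssh (bring it back with fg)
<Enter> ~ ?     list all escape sequences
<Enter> ~ ~     send a literal ~ to the remote side
```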

------
heavyset_go
Port tunneling, X forwarding, keys + agents, sshuttle and byobu all make SSH a
good solution for getting stuff done.

~~~
lucb1e
It would be nice if you could include at least one sentence about these
product names to give an idea of what they do. For other readers:

sshuttle:

> Transparent proxy server that works as a poor man's VPN. Forwards over ssh.
> Doesn't require admin. Works with Linux and MacOS. Supports DNS tunneling.

> Forward all traffic:

> sshuttle -r username@sshserver 0.0.0.0/0

byobu:

> Byobu is an enhancement for the GNU Screen terminal multiplexer or tmux
> [...] that can be used to provide on-screen notification or status, and
> tabbed multi-window management. It is intended to improve terminal sessions
> when users connect to remote servers.

------
joshlk
It’s a bit annoying that the word "hack" is used to mean tips and tricks here,
as hacking SSH is a big area in its own right.

~~~
m463
"hacker" as in "Hacker News" is closest to the "tricks" of tips and tricks

[https://stallman.org/articles/on-
hacking.html](https://stallman.org/articles/on-hacking.html)

However, if someone said "hacking ssh" without the context of Hacker News, I
can see why the average computer user would probably think of the newer
definition, which implies cracking or seeking security vulnerabilities.

------
dustinkirkland
Try using ssh-import-id to fetch your (or a friend or colleague's) public key
from Github!

~~~
mmalone
If you like ssh-import-id to pull keys from GitHub, you’ll _love_
AuthorizedKeysCommand to pull keys from GitHub.

Depending on use case, though, this can be a bit sketch. At smallstep we like
SSH certificates, which make life similarly easy on everyone with a bunch of
other benefits. You can find a couple relevant posts on our blog if you’re
interested.

Incidentally, GitHub now supports SSH certificates (for enterprise edition, at
least).

------
skocznymroczny
ssh not having support for a --password argument is a big drawback. It usually
requires some weird workarounds, especially when copying ssh keys to the
machine is not an option.

~~~
Cthulhu_
It would put your password into your `~/.bash_history` and show up on the
screen though; using a commandline argument to pass passwords is inherently
unsafe.

Sure you could have a secure system that only you use, but the people behind
ssh cannot make that assumption. Removing the risk entirely is better than
trusting the users.

I'm sure there's a shell trick you could use to pass a password on the prompt
anyway. For all other use cases, copy SSH keys securely.

~~~
lucb1e
> It would put your password into your `~/.bash_history`

I'm fine typing (or pasting) my password in an interactive prompt when I'm
interactively using it; that's not the problem. What I would like a --password
option for is when I'm not interactively using it, like from a script. It'll
still show up in the process list (ssh could overwrite it but there are some
µs where it's there) but my laptop is single-user so that's no big deal.

> I'm sure there's a shell trick you could use to pass a password on the
> prompt anyway.

There is software that does it, but it's a real pain to find a short command
for it. Simply `echo password | ssh user@host` does not work; the openssh
authors disabled that on purpose.

Ssh keys are, of course, the solution whenever possible, but that's not always
possible. I'll be the first to admit that the legitimate uses for --password
are rare, but they're definitely there and having to install extra software to
make that crap work is just a real pain. I'd rather be able to shoot myself in
the foot with unix tools.

~~~
dredmorbius
It's quite likely that what you want in this case is to use SSH forced
commands with sshkey auth and a remote account exclusively dedicated to
serving this one request.

[https://binblog.info/2008/10/20/openssh-going-flexible-
with-...](https://binblog.info/2008/10/20/openssh-going-flexible-with-forced-
commands/)

~~~
lucb1e
But that would be server-side, if I'm reading it right? I'm trying to remember
the last time I needed password auth and wanted to do it in an automated
fashion (like I said, it is rare), I think it was a router where the
filesystem was read-only (only /var and /tmp writable, or something like that,
so can't set authorized keys).

~~~
dredmorbius
That's annoying. Many such cases have an overlay filesystem or other mechanism
for preserving specific settings (ssh configs almost always included) or for
reflashing the image with desired config changes.

Forced commands are implemented on _both_ sides of the session, as the
previously linked reference ... doesn't entirely make clear. Locally, you
create, and generally configure, a specific key for the remote user@host.
Remotely, you associate that key with a specific command, in an
authorized_keys file.
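Concretely, the server-side half lives in authorized_keys. A hypothetical backup-only key might look like this (key material shortened, script name illustrative):

```
command="/usr/local/bin/backup.sh",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3... backup-key
```

Whatever command the client asks for, sshd runs only backup.sh; the script can inspect $SSH_ORIGINAL_COMMAND if it needs to know what was actually requested.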

------
eneveu
At my previous customer, we had to SSH through a bounce gateway (SSH key
auth), then a bastion host (LDAP password auth), then the target host (LDAP
password auth). Since it was quite annoying, I used several ssh_config tricks
to make it work without a 1000-line SSH config, and I wrote a doc to share
best practices. I anonymized it and posted it below.

\----------------------

 _ssh_config_best_practices.md_

    
    
      CanonicalizeHostname yes
    
      ##############
      ### GitHub ###
      ##############
    
      Host github.com
        User jdoe
        IdentityFile ~/.ssh/id_rsa_github
    
      ##################
      ### My Company ###
      ##################
    
      Host myproject-dev-*
        ProxyJump bastion-dev
    
      Host myproject-prod-*
        ProxyJump bastion-prod
    
      Host bastion-dev
        HostName bastion.myproject-dev.mycompany.com
        ProxyJump bounce.myproject-dev.mycompany.com
    
      Host bastion-prod
        HostName bastion.myproject-prod.mycompany.com
        ProxyJump bounce.myproject-prod.mycompany.com
    
      Host *.mycompany.com myproject-dev-* myproject-prod-*
        User john_doe
        IdentityFile ~/.ssh/id_rsa_mycompany
    
    
      ##############
      ### Common ###
      ##############
    
      Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h
        ControlPersist 2h
    
        # On OS X, UseKeyChain specifies that we should store passphrases in the Keychain.
        IgnoreUnknown UseKeychain
        UseKeychain yes
        AddKeysToAgent yes
    

\- "CanonicalizeHostname" ensures the config is re-parsed after hostname
canonicalization. This means that when you SSH into "bastion-dev", SSH re-
parses the config using the full hostname "bastion.myproject-
dev.mycompany.com", which then correctly matches the entry
"Host *.mycompany.com".

\- "ProxyJump" was added in OpenSSH 7.2 (2016) and is simpler and more
powerful than "ProxyCommand".

\- "bastion-xxx" hosts are the only ones whose hostname can be resolved from
the bounce gateways. To connect to other hosts, the trick we use in this
config is to do two ProxyJumps: your machine --> bounce --> bastion --> target
host.

\- "ControlMaster" lets you do SSH multiplexing, which in our case is
particularly useful when channeling multiple connections through a bastion
host. It also persists SSH connections for a while after we disconnect, which
speeds up future connections, and avoids typing the password all the time.

\- When you ssh into a host, you must enter your LDAP password twice: first
for the bastion, then for the target host. If you then ssh into a second host,
you must enter your LDAP password only once, since ControlMaster reuses the
SSH connection previously established to the bastion. Also, if you close those
SSH shells, the connections will persist for two hours (see ControlPersist),
so you won't need to type your password for those two hosts if you try to SSH
into them again in the next two hours.

\- Using this ssh_config, there is no need to add a Host entry for each host.
You don't even need to specify IP addresses, since they will be resolved
using the DNS on the bastion host.

\- With this configuration, you can easily copy a file using scp between your
local machine and the target host, without needing to first copy it to the
bastion, then ssh to the bastion, then copy it to the target host, then remove
it from the bastion...

PS: an ssh_config is parsed from top to bottom, so specific comes first,
generic comes last. That's why "Host *" must be at the bottom.
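One handy way to sanity-check a config like this without touching any server: `ssh -G` prints the options ssh would actually use for a host after all Host matching is done. A stripped-down sketch (hypothetical host names):

```shell
# Write a minimal config using ProxyJump, then ask ssh how a host
# behind the bastion resolves. No connection is attempted.
cat > /tmp/jump_config <<'EOF'
Host myproject-dev-*
  ProxyJump bastion-dev
Host bastion-dev
  HostName bastion.myproject-dev.mycompany.com
EOF

ssh -G -F /tmp/jump_config myproject-dev-web1 | grep -i '^proxyjump'
```

The output confirms which ProxyJump (if any) won the pattern matching, which is much faster than debugging with live connections.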

------
terrywang
Some tips:

\- Customise ~/.ssh/config to suit your needs (be careful with storm -
"manage ssh like a boss" - it helps when scripting or searching hosts, but has
an outstanding bug converting keywords to lowercase [1])

\- Use ed25519 key over RSA

\- OpenSSH 8.1 added support for FIDO/U2F (use your YubiKey or equivalent)

\- Put `IPQoS lowdelay throughput` in your ~/.ssh/config if you run a rolling
release (e.g. Arch, Gentoo) or your openssh rolls via homebrew on macOS. The
latest openssh client talking to an older sshd may produce weird
disconnection issues (server resets the connection; the client can connect
but the terminal hangs within 5~10s). I spent quite some time digging only to
find that it was caused by the default change for IPQoS (to `IPQoS af21 cs1`)
introduced in OpenSSH 7.8p1 [3]

\- leverage ssh-copy-id

\- ssh -vvv | ssh -G (troubleshooting from client side)

\- /usr/sbin/sshd -p 2222 -f /path/to/sshd_config -D -ddd (troubleshooting
sshd server side)

\- Be careful with `UsePAM no`, make sure…

\- Use AllowUsers / DenyUsers and AllowGroups / DenyGroups, and mind the order
in which sshd processes them (DenyUsers, AllowUsers, DenyGroups, AllowGroups)

\- know how to use ssh-add / ssh-keygen / ssh-agent / ssh-keyscan

\- audit SSH config (ssh-audit / lynis), version control ssh_config /
sshd_config properly if possible

\- openssh + tmux ;-)

Personal favourite tips/tricks:

\- ssh -D (dynamic port forwarding: open a local SOCKS5 proxy to punch a hole
in the firewall and encrypt traffic; it worked for a while against the
infamous GFW, only a little while though)

\- ssh -L | -R TCP forwarding

\- ssh -X | -Y X11 forwarding (run X11 apps remotely and display it on X
Server locally)

\- More personal SSH tricks put together over the years - I was surprised to
find that my personal OpenSSH notes are 150+ pages in Google Docs; sorry,
can't put it all in a comment... [2]

\- RTFM works, OpenSSH is worth the time ;-)

[1]:
[https://github.com/emre/storm/issues/157](https://github.com/emre/storm/issues/157)

[2]:
[https://sites.google.com/site/imterry/computer/tools/ssh](https://sites.google.com/site/imterry/computer/tools/ssh)

[3]: ssh(1)/sshd(8): the default IPQoS used by ssh/sshd has changed.
[https://www.openssh.com/releasenotes.html](https://www.openssh.com/releasenotes.html)
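On the ssh-add / ssh-agent point above: a quick way to get a feel for the agent is to spin up a throwaway one. A sketch (file names illustrative):

```shell
# Start a disposable agent, load a fresh ed25519 key, list it, clean up.
eval "$(ssh-agent -s)" > /dev/null
ssh-keygen -t ed25519 -f demo_key -N '' -C demo-key -q
ssh-add demo_key 2> /dev/null

ssh-add -l | tee /tmp/agent_keys.txt   # fingerprints of loaded keys

ssh-agent -k > /dev/null               # kill the agent when done
```

With `AddKeysToAgent yes` in ~/.ssh/config, keys are loaded into the agent automatically on first use, so you only type a passphrase once per session.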

------
Camillo
If you have a Mac, the correct way to use SSH is:

1\. Install iTerm.

2\. ssh -t remote.workstation 'tmux -CC new-session -A -D -s main'

~~~
rootusrootus
I recommend iTerm2, not iTerm.

------
tgb
I often ssh in and then open vim on the server. But it would make more sense
to me if vim had support for opening remote files via its own ssh connection
instead. Does this exist (for vim or some other editor)? Then I could always
use my local config, and it would be easier to type on a bad connection.

~~~
enriquto
> Does this exist (for vim or some other editors)?

It does:

    
    
            vim scp://server//path
    

Run

    
    
            :h netrw
    

inside vim to learn all about editing remote files.

~~~
tgb
Thanks, this looks like what I want.

