File format: as many of the following blocks as you like
Host $ALIAS <-- whatever you want here
ProxyCommand ssh $USER2@$PUBLIC -W %h:%p
laptop> ssh email@example.com
public> ssh dev@myworkstation
laptop> ssh work
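Concretely, a block like the following makes the `laptop> ssh work` shortcut possible (host and user names here are placeholders, adjust to taste):

```
# ~/.ssh/config -- names are illustrative
Host work
    HostName myworkstation
    User dev
    ProxyCommand ssh dev@public.example.com -W %h:%p
    # OpenSSH 7.3+ can replace the ProxyCommand line with:
    # ProxyJump dev@public.example.com
```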
netcat worked fine.
Yeah, the article sets that up separately first:

ProxyCommand ssh <user-bar>@bar.domain nc %h %p 2> /dev/null
Thanks though, I didn't know about the ?? syntax.
The patterns are similar to shell globs: * matches zero or more characters, ? matches exactly one.
scp $ALIAS:/var/log/mylogs/logfile ~/backups/logs/
Nautilus / Connect to Server / Server Address: work/home/user
(here 'work' is the alias for the work computer)
Host 192.168.* *.foo.*.com *.bar.net
alias compile-ssh-config='echo -n > ~/.ssh/config && cat ~/.ssh/*.config > ~/.ssh/config'
alias ssh='compile-ssh-config && ssh'
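Worth knowing: OpenSSH 7.3 added an `Include` directive that makes this concatenation hack unnecessary; relative paths are resolved against ~/.ssh:

```
# at the top of ~/.ssh/config (OpenSSH 7.3+)
Include config.d/*
```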
Maybe it is just me, but I prefer to dump all my custom commands and aliases into .zshrc so they are easy to backup/track/find.
One reason is so that invocations of ssh outside of the context of user invocation of ssh at the command line will also have these customizations included. This is especially important for ssh, which has emerged as a main security interoperability tool for Unix systems.
For example, if you use rsync, the tunneling and host alias conventions you set up in .ssh/ will carry over transparently to the ssh tunnel used by rsync.
Another example would be invocations of ssh in scripts (sh/bash scripts, even) that will not or might not read your .zshrc.
The idea of creating dependencies on a configuration profile inside a bash script is the exact opposite of what I would do.
I also couldn't rsync things to my local machine [bandwidth constraints] and would instead be rsyncing between remote machines; since that's a shared environment, I rely on explicit invocations instead of creating configurations/aliases.
Thank you for telling me how/why other people make different choices. I always do seem to have the blinders of my process is the only process I consider when commenting on HN. :)
Just specify the alias host name as configured in ~/.ssh/config, and the user, identity file, and anything else you put there will be used as set.
The excellent (though unfortunately named) keychain utility provides a ready and powerful abstraction for both ssh-agent and gpg-agent.
> No more password prompts
For continuous deployment you can't easily avoid using ssh, but at least the access can be limited to specific commands only.
In this case you would limit this ssh-key to only be able to execute the nagios monitoring scripts. Nothing else.
You do this in ~/.ssh/authorized_keys on the remote machine, with a command= restriction on the key.
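A sketch of such a restricted key (the script path and the key itself are placeholders): in ~/.ssh/authorized_keys on the remote machine, prefix the key with a forced command and disable the extras:

```
# one line in ~/.ssh/authorized_keys
command="/usr/local/bin/run-checks.sh",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... monitor@example
```

Whatever the client asks to run, sshd executes only the forced command for connections authenticated by this key.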
For anyone interested here's an SO question with an example:
I have an account tests@host which runs the tests (scripts) that need to log in to an array of machines.
In order for keychain to be helpful here: 1) You need to be able to log in interactively to tests@host once after each bootup; after that you don't need to touch the passphrase again.
2) Then, the test scripts need to source the agent environment file that keychain writes out.
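That sourcing step looks roughly like this (the key path is an assumption; keychain writes the agent environment to ~/.keychain/<hostname>-sh):

```shell
# Run once interactively after each reboot (prompts for the passphrase):
#   keychain ~/.ssh/id_ed25519
# Every non-interactive test script then attaches to that agent:
if [ -f "$HOME/.keychain/$(hostname)-sh" ]; then
    . "$HOME/.keychain/$(hostname)-sh"
fi
```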
edit: I removed the Nagios references, as other posters rightly pointed out that there are more natural ways to collect that information with Nagios.
This treatment of ssh does not mention ssh-agent and, more importantly perhaps, implies that there is a certain virtue in having private keys lying around unprotected by sturdy passphrases.
There is not; most emphatically not.
ssh-keygen -f id_rsa -p
As a bonus, Ed25519 keys unconditionally use the bcrypt KDF to protect the private key.
- X11 Forwarding
- Reverse forwarding (bind listening sockets on the remote machine, redirecting to a local service)
- SSH-Based VPNs
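Reverse forwarding, for instance, can live in the config instead of being retyped as `ssh -R` each time; a sketch (host names and ports are placeholders):

```
# ~/.ssh/config -- expose local port 8080 as port 9090 on the remote
Host remote-with-tunnel
    HostName remote.example.com
    RemoteForward 9090 localhost:8080
```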
Disconnected from your host but not timed out yet? Press Enter, ~, . and the client will quit.
I've tried this before, and what effectively always happened (to me) is that as soon as I started copying a file, I couldn't continue working in Vim anymore until the file was done transmitting because the copying would eat all the bandwidth. There may be a flag or setting around this, but I've never found it. When I open two connections, it is usually fine.
Also note that the docs recommend against using publicly accessible dirs such as /tmp/ for storing your mux sockets. I'm not sure of the exact threat (maybe just info leakage about what hosts you're connected to, since the socket permissions themselves are strict), but I use ~/.ssh/mux/ for mine.
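For reference, a multiplexing setup using a private socket dir like that (the 10-minute persistence value is just an example):

```
# ~/.ssh/config -- requires: mkdir -m 700 ~/.ssh/mux
Host *
    ControlMaster auto
    ControlPath ~/.ssh/mux/%r@%h:%p
    ControlPersist 10m
```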
As for the rest of the article: really nice stuff, nice tricks for ssh newbies. I wish he had also talked about setting up a nonce system with ssh, moving sshd to a non-default port to keep attackers from spamming port 22, or even removing password authentication altogether.
In other words, the inconvenience this brings is not justified by the infinitesimal increase in security.
Failed password for root2 from 126.96.36.199 port 44990 ssh2
Failed password for admin from 188.8.131.52 port 44990 ssh2
Failed password for sysdb from 184.108.40.206 port 44990 ssh2
Failed password for scott from 220.127.116.11 port 44990 ssh2
(It's fully configurable - the number of failed attempts, the length of the autoban, etc.)
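With fail2ban, one such tool, those knobs live in a jail file; the values here are illustrative, not recommendations:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600
```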
Note that I'm not advocating against a port change; just saying that it's the very last of the available options, as it's essentially security-by-obscurity, and thus only gives you a feeling of higher security (due to less spam in the logs).
Which would prevent even that first password failure attempt from occurring.
Effortless...except you need to configure every client to use the non-default port. How much effort is that? IDK, depends on your use case.
That said, I consider it harmless; which is to say, the benefits and drawbacks are just about equal, IMNSHO.
I've never seen this as extra effort, given I'm already in the ~/.ssh/config file adding an "IdentityFile" line anyway. The only time you wouldn't be is if you're using the same (default) private key for every configured connection. I will faithfully assume that no one is advocating for that :)
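i.e. the port rides along in the same block you're editing anyway (everything here is a placeholder):

```
# ~/.ssh/config
Host work
    HostName work.example.com
    Port 2222
    IdentityFile ~/.ssh/id_ed25519_work
```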
You are wrong. Please refrain from giving security advice.
Changing or filtering the SSH port prevents your host from being compromised by automated netrange sweeps in the event of a pre-auth ssh vulnerability. For this reason changing the SSH port is considered best practice.
Changing the port does nothing against targeted attacks and it's not about 'hiding' anything. The purpose is to take your host out of the scope of automatic scans which almost exclusively focus on the most common ports (22, 2222, 22222 ...).
If one needs to SFTP as root, they could use the `internal-sftp` subsystem too (although I haven't tested this; I don't SFTP as root, and if I must update some files, I setfacl on them).
While no one can peek over your shoulder, if they got a keylogger onto your machine, they could steal your ~/.ssh/id_* files as well (and sniff their passphrases too).
Getting by /without/ direct SSH root access is often impractical (think about scp), and without-password is a secure way to have it.
Also, the more people know about "without-password", the less people will set PermitRootLogin to "yes".
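For the record, the setting under discussion (OpenSSH 7.0 renamed `without-password` to the clearer `prohibit-password`; both spellings are still accepted):

```
# /etc/ssh/sshd_config
PermitRootLogin prohibit-password
```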
It's far easier to audit what's been done to the server, which is important not just for compliance but also for figuring out why something's broken suddenly.
It also means that you get to have your own shell history, your own shell settings, your own vim settings, etc, etc.
In general, having proper deployment, log collection and config management tools in place means you rarely need to scp files around at all - and in the cases where you do, you can scp them to some other dir and move them into place with a sudo command.
A user login followed by su to root is a valid alternative, but I wouldn't have a problem with allowing key-only root access via sshd either.
You'd want the root key/password to be very tightly controlled for the reasons you mention, but having it set is (IMO) a worthwhile backup plan for when things go wrong.
I said "impractical", not "impossible". Of course I can use sudo. But it's more work. I require root access a lot. It adds up quickly.
And I hate typing passwords/passphrases. In fact, many of my passwords I can't remember. I've got an SSH agent for that, which reduces passphrase entry to yes/no (tab-space/space, actually).
Also, I prefer my normal user account not to be a sudoer at all.
Besides, please consider that disallowing root access actually only gets you protection against root password guessing anyway. The "stolen key + passphrase" scenario in a sibling subthread is so absurd I felt the urge to bang my head against my desk. Sudo won't help you there either.
 Now please don't suggest "passwordless sudo".
 And there is another inelegance: /home is usually on a different partition than /, so your way will involve an additional copy. If /home is even large enough to fit that file.
I understand you didn't say impossible, but this doesn't really seem to be impractical to me at all.
@Partitions: Separating /home and / prevents normal users from filling up /. (And if you put both on LVM, you can grow them as needed.) Yes, I've only had this on some of the servers I've run.
@Impractical: it's one additional command for something I do quite often, and I still don't see the benefit (reminder: I fully agree with never using "PermitRootLogin yes").
Granted, it does provide some context separation in the sense that if you want to perform an administrative task, you have to explicitly use sudo. But it doesn't increase security, and it offers no advantage over "direct root access + normal user account".
 Not just scp, but also things like "less /var/log/messages" or "git clone root@host:/etc".
And again: what does "PermitRootLogin no" gain you over "without-password"? Why restrict it for no additional benefit?
Though direct remote code execution is probably much, much more likely than authentication bypass.
Allowing root login can be a user-management headache in multi-user environments, but strong SSH security can exist for root just the same as for any other user.
If you need to conveniently update some files on a regular basis, chown or setfacl them to your usual user or group. If you need to update a root-owned file once in a blue moon, scp it somewhere and `ssh sudo mv` it into place; it's not that hard, and it's better for security.
Oh, and obviously there are exceptions to the rule - say, if you're configuring some freshly-installed system and doing heavy config editing by hand (say, Puppet is not your thing or you just don't care about re-deployment), temporarily lifting security barriers is perfectly fine in most cases.
Passwordless root SSH is perfectly fine, which is why it is enabled by default. By people who have thought a little longer and harder about all this than you. (Sorry for the tone. Maladvice like yours on public forums is demonstrably harmful.)
My own fix was to use 3G to do the SSH work via a tethered phone and to use the wifi adapter to run the bulk of any other web traffic. It'd be great to have a workaround for DPI, though, if anyone has any experience there.
There's no reason they should be refusing your traffic, and they are probably only causing a problem because some overzealous consultant cranked up the setting too high.
In my city, the compliance requirement that must be met is to have a policy addressing material that is "obscene, indecent, violent, or otherwise inappropriate for viewing in the library environment". Blocking SSH access is not required to meet that compliance requirement.
In our case, our library actually doesn't filter, it's left to the discretion of the librarians. And there is a time limit for access.
Read up about SSH VPNs. Probably some kid set up a proxy accessed over SSH port forwarding, to access some pr0n site, got caught, and next thing you know, no SSH allowed anymore. If they were really smart they'd allow it but rate limit it to 2400 bit/s, which is fairly fast for console work but not so great for downloading animated pr0n gifs.
What's weird is that librarians are typically pretty hardcore against censorship. The same place that's willing to go to court to keep "To Kill a Mockingbird" or "Huckleberry Finn" on the shelves will simultaneously spare no expense to block adults from accessing a breast cancer awareness site. A strange bunch, librarians.
If the original poster brings in an air card, and starts watching porn with the volume cranked up, the library doesn't have a right or obligation to jam the cellular network. They do whatever their policy calls for (usually ask the guy to leave).
Librarians are very rarely the problem -- the trustees or other governing body usually is. Make a fuss and in most cases the problems will go away. YMMV depending on the community, of course.
The SSH over SSL solutions others have pointed out may do the trick for now.
In any case, you can try proxying SSH over SSL using stunnel: http://askubuntu.com/questions/423727/ssh-tunneling-over-ssl
Or you could try setting up OpenVPN, it's easy enough.
But if someone expects to gain deep, expert-level knowledge of how OpenSSH works, then he'll be disappointed. What expert-level books can you recommend?
Perhaps I was a bit hard on that book, my expectations were that it would be a deep, expert type of book. If that's not what you are looking for, it is perfectly fine. Actually, now that I think about it, the epub version is good to keep on your mobile phone as a handy, easily-accessible reference for use in the coffee shop, so I guess it does get a credit from me.
Note that, besides the tip about ~/.ssh/known_hosts providing ssh auto-completion, adding a server to ~/.ssh/config will also enable auto-completion.
It does have a few quirks though. One that I've noticed is (IIRC) that closing a shared session isn't sufficient to pick up new group memberships when you reconnect. You actually need to kill the master connection as well.
The syntax for shutting down a master connection is a bit clunky as well:
ssh -O stop -S ~/.ssh/mux/socketname hostname
 There might be other ways of refreshing group memberships, but I don't know of any.
It shouldn't be hard to find a program which runs fine with tun2socks but breaks completely or subtly with tsocks.
(disclaimer: tooting my own horn here, but it is a mighty useful trick)
laptop - user (userid: me)
F - firewall (userid: me)
A - machine 1 in colo (userid: colo)
B - machine 2 in colo (userid: colo, machine I want to access)
C - machine 3 in colo (userid: colo)
100s of machines.
I am able to log into colo@A via F w/o password as I copied the ssh key there manually. (path me@laptop -> colo@F -> colo@A)
QUESTION: Is it possible to ssh to other machines (B, C, ...) via A while assuming the full identity of colo@A? (The path would be me@laptop -> colo@F -> colo@A -> colo@B/C/...) With my current config, when I try to ssh to B it knows the request is originating from 'laptop' and still asks me for a password.
ProxyCommand ssh ...
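Fleshed out, one common arrangement is a ProxyJump chain (host names here are placeholders). With this, the laptop authenticates end-to-end, so B and C need your laptop's public key in their authorized_keys; alternatively, enable `ForwardAgent yes` for A (only if you trust A) and hop onward from a shell there using your forwarded agent:

```
# ~/.ssh/config on the laptop -- names are placeholders
Host F
    HostName firewall.example.com
    User colo
Host A
    HostName a.colo.example.com
    User colo
    ProxyJump F
Host B C
    User colo
    ProxyJump A
```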
However, apenwarr's sshuttle https://github.com/apenwarr/sshuttle is a brilliant semi-proxy-semi-VPN solution that, in return for local root and remote python (but not remote root), gives you transparent VPN-style forwarding of TCP connections (and DNS requests if you want). It works ridiculously well. Try it, if you haven't yet.
I have a shell script that helps with setting up trusted keys: trusted keys help if you need to run automated tests, that involve several machines, or simply if you would like to skip typing in a password on each connection.
# sshfs -o direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere
For even more performance:
* On server, start socat:
# socat TCP4-LISTEN:7001 EXEC:/usr/lib/sftp-server
* On client, do:
# sshfs -o directport=7001,direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere
The above creates a dynamic tunnel (for use as a SOCKS proxy) through the jumphost, to reach HTTP hosts available only to the some_host_inside.lan machine.
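In config form, a dynamic forward through a jump host looks like this (the port and host names are placeholders):

```
# ~/.ssh/config -- SOCKS proxy on localhost:1080, tunneled via the jumphost
Host inside
    HostName some_host_inside.lan
    ProxyJump user@jumphost
    DynamicForward 1080
```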