
SSH: Best practices - cujanovic
https://blog.0xbadc0de.be/archives/300
======
LinuxBender
The article didn't mention connection multiplexing and the MaxSessions default in
OpenSSH. The default is 10, which means you authenticate once, and all subsequent
logins over that connection happen without auth and without syslog entries. If
you manage secure systems and have 2FA, this allows bypassing both 2FA and logging.
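
One server-side mitigation, sketched here on the assumption that you don't
depend on multiplexing for performance, is to cap sessions per network
connection in sshd_config, so every login re-authenticates and is logged:

    # /etc/ssh/sshd_config
    # One session per connection: a second multiplexed session is refused,
    # forcing a fresh connection (and fresh auth + syslog entry).
    MaxSessions 1

(Clients can also set `ControlMaster no` in ssh_config, but that's only
advisory, since in this scenario the client side is attacker-controlled.)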

All I have to do is trick your folks into testing a ruby / python / perl /
bash script for me that drops a key on your machine, fires up ssh using
that key, and tunnels back to my host. Now I have full control of your secure
(banking, government, eCommerce) environment, completely bypassing 2-factor
authentication. Just one link to one of your email distros and up to 10% of
your folks will run it.

Combine this with sudo credential caching and now I have root on all of your
systems without having to bother finding vulns.

Thx to Prandium for the demo of this simple social engineering exploit.

~~~
Hello71
so... your point is "don't run untrusted code"?

~~~
pmoriarty
You run untrusted code whenever your browser loads a web page that uses
javascript. With more and more of the web requiring javascript to work,
running untrusted code becomes less and less of a real option if you want to
get any work done.

Arguably, plain HTML is also untrusted code, and I certainly don't trust every
single one of the authors of my operating system and all of the os-level
utilities and all the applications that I use, especially not Microsoft or
Apple. But I'm forced to use them for work.

A year or two ago there was some hoopla about Canonical collecting some
information about Ubuntu's users or something like that (don't remember
exactly) and a lot of people were up in arms about how they don't trust
Canonical. Canonical's reply was one of the most insightful I've ever read on
the internet. They said something like, "um... but we've got root".

That's a great point. The makers of your OS, the builders of your apps, they
have a lot of power over you that you grant them simply by using their stuff.
You're effectively trusting them, even if you don't trust them.

You could react by not using anything that anyone you don't trust writes, or
by thoroughly auditing all the source code of everything you use (you do use
only open-source software which can be audited by you directly, right?). But
this is not really practical for 99% of everyone on the planet.

~~~
AnthonyMouse
> They said something like, "um... but we've got root".

Which is incredibly misleading.

Sure, they could put malicious code in their distribution. It could do
whatever they want. And then it would be on your machine, where you can
potentially discover it and publish what you've discovered and allow others to
verify your discovery. Which could cause Canonical to _not_ have root anymore,
because people would immediately switch to Debian or CentOS or something else.

But if they _send data_ from your machine to their servers, no one has any
ability to determine what they do with it from there, how well they secure it,
who they give or sell it to, etc. It's a completely different situation
because it goes from trust-but-verify to blind trust.

~~~
pmoriarty
How many Ubuntu users actually verify everything Canonical sends them? I'd say
very close to none, especially for the binaries.

Sure, in principle you could verify. You could read through all the source,
you could even disassemble the binaries and read through the assembly (and
don't forget to verify the hardware too). Again, who actually does that?
Virtually no one.

So the two cases are a lot closer than you paint them to be.

Also, corporations do stupid, illegal, and unethical stuff all the time, even
when it's clearly (especially in hindsight) not in their long-term interest to
do so. Even when it will clearly destroy their reputation if the public found
out they even considered it.

Even if you trust "the system" (which is more of a hope that if there's
something malicious, someone out there will detect it sooner or later), there
are many cases of vulnerabilities in even open source code going undetected
for years... never mind vulnerabilities in binaries that you don't have the
code for, whose authors aren't generous enough to clue you in.

Canonical (or whatever source you actually get your OS or apps from) could
also send you some specially crafted something that no one else gets. Now
who's going to verify it for you if you don't do it yourself? How are you ever
going to find out, if you're not one of the ultra-paranoid and super-skilled
0.001% of Ubuntu users with infinite time and determination who actually
verifies 100% of what Canonical sends them? You just won't. And if you did,
there'd be a hundred new Ubuntu versions out by the time you finished
verifying even one of them.

People trust, but they very rarely verify. The size of operating systems and
applications and the skill to verify have all just grown too large to make it
practical for the overwhelming majority of people.

I do see your point about Canonical doing whatever they want with the data
they collect. That's certainly problematic. But that doesn't mean that there
aren't major privacy or security issues with trusting Ubuntu (or any other OS)
not to have malware on it.

~~~
AnthonyMouse
> Virtually no one.

Which is not the same thing as no one. And it only takes once to ruin your
reputation forever (see also Sourceforge, Lenovo, etc.) Moreover, the
probability of detection doesn't have to be very high at all because the cost
to Canonical would be catastrophic, so any possibility whatsoever acts as a
deterrent. And more so in this context because of the nature of the user base.

Your argument seems to boil down to the position that it doesn't hurt anything
to go from detection not perfect to detection not possible.

------
hackercomplex
Developer laptop compromise is probably the biggest security risk that any
startup company faces, because the developer laptop is an uncontrolled
environment with a lot of "attack surface" which may have been previously
compromised.

In my view there are emerging best practices in this area. There are two ways
to reduce this risk and both are controversial:

1\. Force developers to only develop software using an SSH terminal (by first
connecting to a developer VPN via 2FA and then sshing into their secure
development environment, where they may use tools like tmux, vi, and their
programming language of choice to get the job done). In this scheme, copying
source code or security credentials to a developer laptop is considered a
violation and becomes a fireable offense, no questions asked.

2\. Require all developers to run a private USB-bootable linux desktop shell
which is known to be clean. In this case they remain free to use modern
desktop editors and emulators (such as the Android emulator). It's even
possible to set up secure persistence in these environments so that the
developer's browser configuration, network/vpn config, dotfiles, apt installs,
etc. are stored on an encrypted filesystem on the USB device. The reason a USB
image is preferred is that it's annoying to ask a new employee to
repartition her personal hard drive.

My suspicion is that as more of the tools developers rely on become
cloud-based (github, cloud9, jenkins, etc.), we will eventually see these
modern best practices against client-side attacks adopted more broadly. The
quality and reliability of hot-bootable, ultra-secure cloud operating systems
has gone through the roof over the past couple of years, and I assume this
trend will only accelerate as Google Chrome OS continues to penetrate more of
the market and consumers get used to it.

TLDR: SSH keys stored on developer hard drives are an info-sec anti-pattern;
they should only ever exist in system memory or on an encrypted partition on a
USB stick.
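
In that spirit, a minimal sketch (paths hypothetical) of keeping the key off
the hard drive entirely: load it from the stick straight into the agent with a
limited lifetime, so it lives only in memory once the stick is unplugged:

    # Load a key from removable media into ssh-agent for one hour;
    # -t makes the agent forget it after the timeout.
    ssh-add -t 3600 /media/usb/keys/id_ed25519

`-t` is a standard OpenSSH ssh-add flag; the key file itself never touches the
internal disk.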

~~~
superuser2
(Disclaimer, unicorn employee.) I'm confused. Are you implying people write
company code on personally owned laptops? That's insane!

1\. Company walks you through setting up FileVault (with key escrow) and VPN
client, generating and uploading SSH public key immediately after unboxing
laptop.

2\. OneLogin + Duo (or pick your SSO/2FA scheme) for everything - internal
webapps, GMail, etc.

3\. SSH keys managed by Puppet.

4\. 2FA verification of SSH logins (using pam-interactive), through a bastion
(provided .ssh file makes it transparent), with OSX's default SSH agent, with
pretty much all of those best practices configured except smartcards/certs.

5\. Engineers have SSH access only to utility/development boxes. You can
deploy code to production through a webapp (identifying the commit ID) but you
can't get a shell as your application's user, and certainly not root.

6\. Webapps moving behind a VPN.

In the rare case that you need to debug in production beyond what you can get
from metrics/logs, you pair with a "blessed"/senior sysadmin type.
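
Point 4's "transparent" bastion can be sketched with a client config along
these lines (host names hypothetical):

    # ~/.ssh/config
    Host *.internal.example.com
        ProxyCommand ssh -W %h:%p bastion.example.com

so that `ssh dev1.internal.example.com` hops through the bastion
automatically, with the 2FA challenge happening on the bastion's login.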

~~~
hackercomplex
Thank you for this. I suppose I should have included the caveat that my rant
applies mostly to early stage shops which aren't ready to build out and
ship a shiny new dev machine for every employee. Another problem for smaller
startups is that if you're not careful almost everybody ends up being
"blessed". And simply being a senior sysadmin type doesn't mean that you're
automatically immune from all forms of social engineering attacks. I suppose
your point is that even very early companies should opt for standardized
hardware; I think you're right.

I <3 Duo Security.

------
heinrichhartman
Cached version
[https://webcache.googleusercontent.com/search?q=cache:BZ3Zeq...](https://webcache.googleusercontent.com/search?q=cache:BZ3ZeqaqyLAJ:https://blog.0xbadc0de.be/archives/300+&cd=1&hl=de&ct=clnk&gl=de)

~~~
fareesh
yeah - the link seems to be hugged to death or something

~~~
aris_ada
I just noticed this an hour ago. Bad config on the server and too little RAM.
The link should work better now.

------
jsn
The part about gateway hosts says:

    Host B
      ProxyCommand ssh -W B:22 A

That could be improved:

    Host B
      ProxyCommand ssh -W %h:%p A

Then you can use wildcards for B ("Host 10.11.12.*"), and use custom ports
("ssh -p 2222 10.11.12.13").
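
For what it's worth, OpenSSH 7.3 and later can express the same thing more
tersely with the ProxyJump option (equivalent to `ssh -J` on the command
line):

    Host 10.11.12.*
        ProxyJump A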

~~~
ptman
I just read this
[https://glandium.org/blog/?p=3631](https://glandium.org/blog/?p=3631)

------
r0muald
I thought using per-service SSH keys was a useful mitigation against e.g.
GitHub public keys being exposed:

\- [https://blog.benjojo.co.uk/post/auditing-github-users-
keys](https://blog.benjojo.co.uk/post/auditing-github-users-keys)

\- [http://arstechnica.com/security/2015/06/assume-your-
github-a...](http://arstechnica.com/security/2015/06/assume-your-github-
account-is-hacked-users-with-weak-crypto-keys-told/)

\-
[https://news.ycombinator.com/item?id=9645703](https://news.ycombinator.com/item?id=9645703)

~~~
clinta
Public keys being exposed isn't something I think needs to be mitigated.
That's the whole point, they're public.

~~~
FiloSottile
Security is not binary. In this case it depends on whether disclosing your
identity to the servers you connect to is a problem in your threat model.

Saying "they are public so it's ok" is technical oversimplification.

~~~
hobarrera
> Saying "they are public so it's ok" is technical oversimplification.

It's not. The whole idea of the scheme is that you can publish them everywhere
with no risk.

~~~
oleganza
In the case of Bitcoin pubkeys, you don't want to publish them anywhere, for
privacy reasons (which turn into real physical security reasons once you have
enough BTC). Also, notice how pubkeys are normally hidden behind hashes and
revealed only at spending time. This drastically limits attacks on ECC if/when
some weakness in the curve math is discovered. And if a quantum computer is
invented tomorrow, people can safely transition funds to a new signature
scheme by introducing 2-phase commitments: the (now crackable) ECDSA pubkey is
safely revealed only after another transaction has committed to a concrete
signature (preventing double-spend attempts using a cracked key).

------
noondip
Here's a brief guide for setting up SSH with YubiKey as a smartcard which
complements this article nicely - [https://github.com/drduh/YubiKey-
Guide](https://github.com/drduh/YubiKey-Guide)

~~~
hobarrera
Looks like I'd need to configure gpg to use it locally first.

One of the main uses of smartcards is to use it on other machines (eg: not
mine). What's the added benefit of adding one if I'm logging in from my local
machine?

~~~
rkeene2
For what it's worth, most smartcard applets don't typically store GPG objects
but RSA keys (and also usually X.509 certificate objects that go along with
them, as well as less-used RSA public key objects).

I use PIV (NIST SP 800-73) compliant smartcards with a PKCS#11 module I wrote
(CACKey), it just works with "ssh-add -s /path/to/libcackey.so", then SSH
away.

Additionally, there is a fork of OpenSSH called PKIXSSH that adds X.509
certificate support (in addition to the relatively recent, compared to the
fork, support for OpenSSH certificates) and then I can authenticate to remote
systems using my certificate -- which is helpful when my card is replaced, or
if my certificate is revoked the CRLs can be used.

------
eeZi
Good advice, except the "remove all system passwords" part. Do not do this.
It's a really bad idea. There are occasions where you _do_ need to
authenticate using a password, most importantly on the local console - what if
your server loses network access?
And that sudo workaround is bad since it requires agent key forwarding, which
you shouldn't use for the reasons the author himself noted a few paragraphs
up.

------
bisby
Weird, where I work we're not allowed to use key auth. They claim it's for PCI
since they can't enforce that the keys are passworded/encrypted.

~~~
deathanatos
That's an odd tradeoff. I'm going to have to disagree with the sibling noting
that "compliance != security".

Here's the thing: they say they can't force users to encrypt their keys
(hogwash; you can have a local agent that checks that), but do they check that
users are providing good passwords? I have much more faith in a randomly
generated key than I do in a human-generated password. The former pretty much
_requires_ compromising the machine; the latter I can just take pot-shots at
the server with. (They _do_ rate limit your SSH password attempts, right?
right?)

Even if we assume that the password is good, and that the server won't allow
infinite tries to guess it, the security is only equivalent: compromise of the
user's local account will reveal the password (just sniff the password) or the
key (just read the keyfile, and sniff the password if it is encrypted). I
don't see any world in which a password is _more_ secure than a key.

~~~
infinite8s
The parent comment is discussing encryption of the private key with a
password, not passwords for connecting to the server.

~~~
deathanatos
The parent is discussing both, and I am too; specifically, the parent says,

> _we 're not allowed to use key auth._

i.e., they're using password based auth.¹ The point of my post is that while
certainly users leaving the private key unencrypted isn't good, trading that
for password auth leaves you in a worse state overall (IMO).

¹Of course, they could be using an auth scheme that is neither password nor
key based. But given that the parent didn't come out and _say_ that, my gut
says that's not the case here.

------
akerro
I bookmarked something similar: [https://stribika.github.io/2015/01/04/secure-
secure-shell.ht...](https://stribika.github.io/2015/01/04/secure-secure-
shell.html)

~~~
majewsky
The same link is somewhere in this guide, too. This guide has a broader scope,
though.

------
rphlx
Also, move sshd to a random, rarely-used TCP port. It won't help much against
a skilled and determined targeted attacker, but the "security through
obscurity" is not entirely worthless: using something other than 22 is quite
effective at avoiding spray-and-pray scans and exploits.

For extra credit, set up port knocking.

~~~
hackercomplex
Binding SSH only to a VPN interface is another option. If you can use a VPN
that incorporates 2FA, that's even better. Projects such as zerotier are going
a long way towards making this kind of thing easier to set up.

~~~
eeZi
Then the VPN breaks, and you're out of luck.

------
STRML
SSH's new AuthenticationMethods directive is extremely useful for pairing SSH
keys with a password and/or 2FA. You should absolutely use keys everywhere,
and encourage your users to encrypt their keys, but enforcing a password as
well ensures that logins are "something you have" (the SSH key) and "something
you know" (the password) as a sort of 2FA.
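
In sshd_config, that pairing is a one-liner (methods are tried in order, and
all listed methods must succeed):

    # Require a valid key *and* the account password for every login.
    AuthenticationMethods publickey,password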

As a cherry on top, you can put the password in an LDAP or RADIUS server and
hook up traditional 2FA (Google Auth, Yubikey, Email, SMS) for that legendary
3FA (ah... "something (else) you have"). Sounds hokey, but defense is best in
depth.

~~~
pascalmemories
Something you have plus two things you know does not turn 2FA into 3FA. It's
still 2FA - it's just two things you know instead of one.

3 FA is :

* Something you have (normally one OR MORE user IDs)

* Something you know (normally the associated password or passwords for the user ID(s))

* Something you are (normally biometric)

[edit: technically, it's multi-factor when using multiple user/passwords -
here's a useful link [https://pciguru.wordpress.com/2010/05/01/one-two-and-
three-f...](https://pciguru.wordpress.com/2010/05/01/one-two-and-three-factor-
authentication/)]

~~~
pmoriarty
I look at biometrics as just another instance of the second factor: things you
have.

You have your fingers. You have your eyes.

In fiction and movies, these are things that could be taken from you (your
fingers severed, your eyes plucked out) and used to get past biometric
scanners.

In real life there are easier, stealthier, and less gruesome methods for
getting those things: just copy them. (gummy bear fingerprints, anyone?
[1][2][3])

[1] -
[http://www.theregister.co.uk/2002/05/16/gummi_bears_defeat_f...](http://www.theregister.co.uk/2002/05/16/gummi_bears_defeat_fingerprint_sensors/)

[2] - [http://www.cryptome.org/gummy.htm](http://www.cryptome.org/gummy.htm)

[3] - [http://www.it.slashdot.org/story/10/10/28/0124242/aussie-
kid...](http://www.it.slashdot.org/story/10/10/28/0124242/aussie-kids-foil-
finger-scanner-with-gummi-bears)

~~~
aris_ada
Unfortunately, biometrics are nearly useless for remote authentication. The
only context in which biometrics work is when the whole authentication chain
(reader, cable, computer, network) is tamper-proof. When one of these elements
can be tampered with (e.g. unplugging the fingerprint reader and sniffing the
USB traffic), it becomes "something that anyone can collect, that you can
reproduce with an HD picture, and that you cannot revoke without losing your
physical integrity".

I think biometrics may have a place in tamper-proof devices like iphones
(infamous error 53) or biometric smartcards (need fingerprint to unlock
secrets).

------
rodionos
Org question. We use a shared key and a shared account to manage our servers.
When someone leaves the team we have to regenerate the key for the account.
I'm sure this is not recommended, so what's a good way to manage ssh access in
a team setting? I'd like to avoid sharing the key so that access can be
revoked from one user without regenerating the key for everyone.

~~~
aris_ada
Have each team member generate his own key. Copy everyone's key into a folder
and have a script generate an authorized_keys file. Use puppet or chef (or
anything else) to distribute the authorized_keys to every server.

A single shared key is a security disaster waiting to happen.

~~~
throwaway2048
did you read the article? ssh certificates with revocation are a much more
workable solution

~~~
Piskvorrr
There's the up-front cost of setting them up, and perhaps tool support
(pubkey/agent auth is widely supported; X.509...not so much).

~~~
throwaway2048
ssh certificates are not x.509, though they are designed around a PKI.

~~~
Piskvorrr
Thanks for the correction.

------
tbe
> f=$(mktemp) && ssh-keyscan korell > $f && ssh-keygen -l -f $f && rm $f

> Unfortunately ssh-keygen does not support input from stdin, so this example
> is slightly more complicated than it should.

Any shell that supports process substitution can fix this for you with
something like:

    ssh-keygen -l -f <(ssh-keyscan korell)
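
Also, if I recall the release notes correctly, OpenSSH 7.2 and later can
fingerprint a key read from standard input (`-f -`), so a plain pipe works
too:

    ssh-keyscan korell | ssh-keygen -l -f -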

------
mfontani
How about [https://github.com/ccontavalli/ssh-
ident](https://github.com/ccontavalli/ssh-ident) ? Is that still a "best
practice", since the author doesn't mention it in this blog entry?

~~~
aris_ada
I didn't know about that tool. Having different agents for different projects
may be useful, especially if you're using Agent Forwarding.

I don't like that you have to alias ssh or replace /usr/xxx/bin/ssh with a
symlink to make it work.

------
sapereaude
Great post!

BTW, you can also now use hardware devices (like TREZOR and KeepKey) to
perform "ssh-agent" functionality in hardware, with minimal setup work.

Blog post: [https://medium.com/@satoshilabs/trezor-
firmware-1-3-4-enable...](https://medium.com/@satoshilabs/trezor-
firmware-1-3-4-enables-ssh-login-86a622d7e609)

Source code: [https://github.com/romanz/trezor-
agent](https://github.com/romanz/trezor-agent) (the README has several demo
screencasts).

------
antoniomika
I remember reading about a tool Instagram used for SSH management, but that
might not be a "best practice". ([http://instagram-
engineering.tumblr.com/post/11399488246/sim...](http://instagram-
engineering.tumblr.com/post/11399488246/simplifying-ec2-ssh-connections))

------
balgan
And yet we still see things like
[https://blog.binaryedge.io/2015/11/10/ssh/](https://blog.binaryedge.io/2015/11/10/ssh/)

------
gdamjan1
It would be cool to have a `forward agent connection once` option in ssh, for
when I know I'll have to `git pull` once on the server but don't need the
agent forwarding after that.
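
As a partial workaround, forwarding can at least be enabled per invocation
rather than in the config, so only the connections that need it get it (it
still lasts for the whole session, not just the first use; path hypothetical):

    ssh -A server 'git -C /srv/repo pull'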

------
carlisle_
>Do not SSH cross-server

This doesn't make sense. You can safely set up SSH agent forwarding to ssh
from server to server without storing your ssh private key anywhere but your
local host.

~~~
sitharus
If the server you're forwarding your agent to is compromised it can now talk
to your agent. This point could also be 'assume gateway servers are
compromised'.

~~~
carlisle_
If your gateway servers are compromised to this degree you have a much larger
problem than your ssh agent being compromised.

------
RRRA
Has anyone used ProxyCommand to chain through 2 or more gateways and has a
config example to show?

~~~
glandium
I had one a long time ago, from before ssh -W, which I updated for the
occasion:

    Host *+*
      ProxyCommand ssh -W $(echo %h | sed 's/^.*+//') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

Use with:

    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4

[https://glandium.org/blog/?p=3631](https://glandium.org/blog/?p=3631)

------
hobarrera
What amazes me about these articles is that they keep recommending NOT to use
password-based logins for SSH: Why are people still using that?

~~~
Piskvorrr
People are still using _FTP_ , for what it's worth. In other words, the same
old: ignorance ("we've always done it like this") and/or apathy ("meh, we're
not a cracker target").

------
n1000
Still a big fan of the BetterCrypto guide:
[https://bettercrypto.org/](https://bettercrypto.org/)

~~~
infodroid
I'm not sure what there is to recommend about this Applied Crypto Hardening
guide, which seems to be in draft form. The section on SSH is just a couple of
snippets from sshd config files, with no explanation of the choice of
parameters. And copy-pasting config file snippets from a random guide hardly
constitutes security best practice.

