The argument LVH makes here ("worse than plaintext") is that because you have to type that password regularly, it's apt to be one of those important passwords you keep in your brain's resident set and derive variants of for different applications. And SSH is basically doing something close to storing it in plaintext. His argument is that the password is probably more important than what it protects. Maybe that's not the case for you.
I just think it's batshit that OpenSSH's default is so bad. At the very least: you might as well just not use passwords if you're going to accept that default. If you use curve keys, you get a better (bcrypt) format.
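To see the difference for yourself: Ed25519 private keys can only be written in the newer OpenSSH format, which derives the encryption key with bcrypt. The file path and passphrase below are just placeholders.

```shell
# Clean up any leftovers, then generate an Ed25519 key (placeholder path/passphrase)
rm -f /tmp/id_demo /tmp/id_demo.pub
ssh-keygen -t ed25519 -f /tmp/id_demo -N 'a long passphrase' -q

# The first line identifies the format
head -n 1 /tmp/id_demo
# -----BEGIN OPENSSH PRIVATE KEY-----
```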
While I have you here...
Before you contemplate any elaborate new plan to improve the protection of your SSH keys, consider that long-lived SSH credentials are an anti-pattern. If you set up an SSH CA, you can issue time-limited short-term credentials that won't sit on your filesystems and backups for all time waiting to leak access to your servers.
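For a rough sketch of what that looks like with stock OpenSSH (no extra tooling; all file names below are throwaway placeholders):

```shell
# Clean slate for the demo files
rm -f /tmp/ca /tmp/ca.pub /tmp/user /tmp/user.pub /tmp/user-cert.pub

ssh-keygen -t ed25519 -f /tmp/ca -N '' -q      # the CA keypair
ssh-keygen -t ed25519 -f /tmp/user -N '' -q    # a user keypair

# Sign the user key with the CA, valid for the next 8 hours only
ssh-keygen -s /tmp/ca -I alice -n alice -V +8h /tmp/user.pub

# Inspect the certificate; note the Valid: window
ssh-keygen -L -f /tmp/user-cert.pub
```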
Indeed, I pointed this out in my May 2009 talk about scrypt (http://www.daemonology.net/papers/scrypt-slides.pdf). There were even OpenSSH developers in the room!
What does this mean? Does it mean it's hard to use, or easy to mess up or something else?
How does scrypt fare against the recent TLBleed etc.? IIRC Intel's claim was that TLBleed only affected poorly implemented crypto. But isn't the memory access pattern of scrypt vulnerable to TLBleed, and hard to make constant?
Too low and it's worse than MD5, too high and your login prompt takes a whole minute to check the password.
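If you want to see where your machine lands, ssh-keygen's -a flag controls the bcrypt round count for the new format (the default is 16), so you can benchmark a few values yourself. Passphrase and paths here are placeholders.

```shell
# Generate keys at increasing KDF cost and time one decryption of each
# (-y re-derives the public key, which requires decrypting the private key)
for rounds in 16 64 256; do
  rm -f /tmp/key_$rounds /tmp/key_$rounds.pub
  ssh-keygen -t ed25519 -a "$rounds" -f /tmp/key_$rounds -N 'demo passphrase' -q
  echo "rounds=$rounds:"
  time ssh-keygen -y -P 'demo passphrase' -f /tmp/key_$rounds > /dev/null
done
```

Too low and cracking is cheap; too high and every ssh-add at the start of your day crawls.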
Exactly. Consider switching to auto-expiring SSH certificates. You can build your own certificate management using a few open tools, or switch to Teleport, which is 100% certificate-based and doesn't even support keys. Disclaimer: I am one of the contributors.
I mention this mostly for the sort of people like me who read 'SSH CA' and their eyes roll into the backs of their heads and they start rocking and making saliva bubbles at the thought of PAM modules and LDAP servers and so on. But this doesn't look nearly as bad. The Go ssh implementation sounds like a nice ancillary bonus.
Almost all the Windows guides suggest using PuTTYgen with defaults, which gives you a 1024 bit RSA key, and that might be worse in the long run than these password shenanigans.
A lot of places make it difficult to contact them about their docs. I had to create accounts and file support tickets for some, and others only had a generic feedback form. So far Oracle and "w3docs" have been the most difficult; the latter only has a Facebook and Twitter for contact.
This whole process is really annoying. All these sites are giving the same advice. Why isn't there one Creative Commons wiki just for technical writing that people could link to?
I do recollect that using RSA keys of 4096 bits slows down SSH more than the gain in security might be worth.
Some people need SSH to move a lot of data, e.g. for SFTP, but some people just want their connection to a nearby machine to feel "snappy" and not take a beat to do the key exchange and authentication steps.
We're in the weeds here, we're agreed that if your weakest point is a 2048-bit RSA key you're in unexpectedly good shape, definitely anyone who feels 4096 even "might be" too slow should just use RSA 2048 (or get an elliptic curve algorithm that's nice and fast on their CPU). I was just pointing out that "too slow" doesn't necessarily mean "Not as much peak throughput as I would like". Station wagons full of tapes remain sub-optimal for video conferencing :D
I have a few home servers, but if one of my devices were compromised I don't think it would take much longer for the whole network to fall.
I'd love an end-to-end example that shows how you're storing everything, in both meatspace and your devices. Do you use hardware authentication devices? How do you handle backups?
At key generation time you can dump the key (once!) and then feed it to paperkey as a backup.
In high school, I definitely kept password-protected private keys on a USB key that got plugged into whatever machine happened to be available. (Now I am affluent enough to carry around a real Security Key and a trusted laptop.)
If you were the maintainer, would you really want to change the defaults and deal with the backlash of complaints from users who do copy keys around?
How do I install it?
Is that method actually more secure than carrying around an encrypted key?
Single-use login codes on paper are probably the easiest way around the problem. https://www.digitalocean.com/community/tutorials/install-and...
Also, you can configure the google-authenticator TOTP module to request key and token IIRC. GA also has OTPW backup codes.
So creating a one time use key for that computer is probably a good idea, you can revoke it once you are done using it and then it won't cause you any problems in the future.
ssh-keygen -p -o -f (oldfile)
I don't think that's necessarily true, provided that your keys are:
- Properly encrypted
- Protected by a decent password
- Used via ssh-agent, to avoid 1) copying the key everywhere, and 2) typing your password all the time.
Of course it depends how critical security is. Access to a few dev servers inside the company firewall is not the same as managing your client-facing production infrastructure.
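For anyone unfamiliar, the agent workflow in question is only a couple of commands (the key path below is a placeholder; a passphrase-protected key would prompt once at ssh-add time):

```shell
# Create a throwaway key for the demo
rm -f /tmp/agent_demo /tmp/agent_demo.pub
ssh-keygen -t ed25519 -f /tmp/agent_demo -N '' -q

eval "$(ssh-agent -s)"      # start an agent for this shell
ssh-add /tmp/agent_demo     # load the key (asks for the passphrase once, if set)
ssh-add -l                  # list the fingerprints currently loaded
ssh-agent -k                # kill the agent when you're done
```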
# Generate key
$ gpg2 --card-edit
change both user and admin PIN to a secure password (can be the same, it's called PIN but you can just use a regular alphanumeric password)
choose RSA, 4096 (or whatever you consider sufficient)
# Add this to your .bash_profile (use GPG agent instead of SSH)
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
# Export your SSH public key
$ ssh-add -L
Assuming Linux or OS X where your distribution has an opensc that already supports your device, ssh is only about 3 incantations of magic:
But there is no safe way of getting away from having one PIN.
If you don't want to have to learn gpg (because why should you?), the master/sub keys, PINs, keyservers, and all that can be skipped, just like ssh-keygen is able to create keys without passphrases. Not exactly recommended, but still better than the alternative.
If that's better... I dunno :-)
Wouldn't an SSH CA just introduce a whole different kind of complexity?
- How to protect the SSH CA and its key and make it highly available? I don't want to be locked out of my bastion host after something the CA depended on broke and my certificate has just expired.
- How to authenticate users against the CA? Most solutions I've seen use a longer-lived client-side secret, which is just as susceptible to theft as a regular SSH key, or some sort of OAuth or SAML SSO. A malicious Chrome extension can now compromise the SSO process, you still need a U2F token (like a YubiKey 4) to properly secure the SSO account, etc.
- How to make it work with our SCM and random things like a storage appliance and various JunOS devices, which support regular SSH keys, but don't know about SSH certificates?
I would assume that Latacora is using an SSH CA, and I'm legitimately curious how you approached these challenges.
The context for SSH CA/Teleport is SSHing into a box. When you do actually need an SSH key, Yubikeys are the best answer. (I like using gpg-agent's ssh-agent emulation mode because I find it works better on Macs, but that's irrelevant to the security analysis.)
However, I would argue that unless this is the case, operating an SSH CA is riskier (from both a security and an availability point of view).
I don’t really play in this space, but I have some business partners who were struggling with it.
A writeup would be awesome.
I have been trying to set up keys for road warrior VPN as well as our access to cloud, and it has been FAR from straightforward.
Generally, everything consists of "Somehow communicate with a Radius server", which Yubi no longer supports (YubiRadius), or "Trust Google".
One day we had a bit of time skew because of a bad ntpd, and lo, we couldn't log in because of our short-lived certs :-)
I'm currently using a yubikey to hold my key (in the GnuPG smartcard applet) so I felt pretty confident security-wise but now you're making me doubt.
Not really. In general you need the same level of access for setting up the sshd host key, for setting up/enabling sshca: the server must trust the ca pub cert, and there's some configuration needed wrt principals (although AFAIK you can/should embed most of that in the certificate).
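Concretely, the server-side trust amounts to a line or two of sshd_config (paths here are illustrative; the principals file is optional if the principals baked into the certificates already match local account names):

```
# /etc/ssh/sshd_config on each server
TrustedUserCAKeys /etc/ssh/user_ca.pub

# Optionally map certificate principals to local accounts per-user
AuthorizedPrincipalsFile /etc/ssh/principals/%u
```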
There is this old tinyCA that comes with OpenVPN, but it's awful and can't do much (I don't even remember if it could revoke a certificate). There are a few instances of WWW-only CAs, and there are desktop/GUI applications. But command line? /usr/bin/openssl only, and it's unwieldy. Even worse situation with a CA library.
People like to fetishize OpenSSH's CA (for both client keys and server keys), but there's still a lot to do before it becomes usable. (Though the same stands for the traditional save-on-first-use method, honestly.) You're basically proposing to deploy software that maybe will be usable in a few years, with a big "maybe", because until now it hasn't materialized.
SSH CAs improve efficiency and convenience.
A hardware key that requires touch per login is a game-changer. When you go to lunch you know that your key did nothing, no matter how compromised your workstation is. When your machine is turned off you know that there's no copy of the key somewhere. That key cannot be used.
A software cert-based key may be valid for only hours (if you set it up that way), but that means that there are 7 billion possible attackers who could use your key. They could break into your workstation and wait for the screensaver to kick in, and then log in to every single host you have access to, and do their naughty business.
For a hardware key someone has to take a plane from China and break into your house to use your key.
> It's still a long-held credential
Doesn't have to be. But if it is, so what? Given physical locks that are unpickable and keys uncopyable, would you rather instead change locks every day, where the keys are copyable? (even if changing the locks cost next to nothing)
> that long-held key can even live on a Yubikey if you use U2F/WebAuthn
Like I said, one does not exclude the other. You can't prove that A is better than B by saying A+B is better than B.
There are also devices that don't support SSH certificates (e.g. embedded devices), but supporting pubkeys is vastly more common.
One of the servers I've had the misfortune of using responds to even proposed public key auth by failing all subsequent authentication on that connection. So you need to immediately do password auth if you want to get in. Brilliant.
I presume the WG specifically wanted to see SSH with public keys deployed widely rather than a world where most places upgrade from telnet to SSH with passwords and think that's the job done.
Yeah, but even if the sshd supports it that doesn't mean that the product has a way to configure it. There may be no non-volatile space for a pubkey.
I have encountered it, but yeah it's rare.
The rest of the attack is very technical, very network applicable - copies of key files, guessing passwords - your adversary may be the far side of the world, and they may have done all this in seconds.
But suddenly a hardware token means ground assets. Different skill set. Some adversaries may be able to buy all the Cloud Compute and Network Bandwidth they can ask for (especially if it's all with somebody else's credit cards...), but putting even one black bag job together in a foreign country is beyond them. And even for adversaries that are able to do this you can't just spin up ground assets instantly.
Yes, in "Rainbows End" Rabbit actually does (if you pay attention) build a ground team to execute the lab infiltration plan despite apparently not having any corporeal existence. But that's science fiction. Here and now that's not how it works.
Yikes that statement has a lot of moral overtones. Is it a good idea to use a Yubi? Arguably yes. Does one need to find "excuses" for not doing so? The vast majority of the time, no.
I've been using a GPG smart card for a long time, and it required a separate card reader, and both card and reader were easy to break. A YubiKey 4 fits on a keychain, is hard to break (though some of my colleagues succeeded) and you just plug it in.
My understanding of CA principals is that they identify the user or role that requested the signing, but not necessarily the login ID on the server that is allowed to be logged into. Ideally there'd be a 1:1 mapping between the principal and the login ID on the server. I think there's some sshd configuration that needs to be done, but I haven't seen any clear instructions for doing so.
Do you know how to accomplish this?
And re-read the (imnho rather obtuse) ssh-keygen man pages.
[ed: and maybe this too:
And for the ssh ca part, bless and teleport (as others have mentioned).
There's the option of putting stuff in ad/ldap - but if you're already using ad, kerberized ssh (and sudo etc) might be the way to go.
I like the idea of a system that's simpler than ad/ldap+kerberos - and ssh certs fits most of the bill.
The challenge becomes auth/authz beyond just login - ldap basically requires ssl ca anyway - and at that point, especially with kerberos set up - I think one might be better off sticking with one complex auth/authz system rather than two...
At least it should be possible to avoid passwords with something like:
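The snippet got lost here; my guess at what it looked like is an sshd_config fragment along these lines (this is a reconstruction, not the original — though GSSAPIAuthentication is the standard OpenSSH option for accepting Kerberos tickets):

```
# sshd_config: accept Kerberos tickets via GSSAPI, refuse password auth
GSSAPIAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
```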
With due respect: have you considered the myriad systems where you need to upload your SSH key to an UI? If my key is short term then I need to do that all the time. I can't set up an SSH CA on github for example.
But it’s important to remember that SSH is a lot less fucked overall than the VPN situation was. People mostly grok SSH. SSH mostly doesn’t negotiate BF-CBC.
All my other SSH keys, the ones I don't have in there, are plaintext on the disk. The ssh askpass is in userspace and easily spoofed; any local attacker could easily fish it out anyway. Full disk encryption at rest ought to be enough for most people.
That's why I came up with a small script/service waiting for an "inotify" event on a honey-pot SSH key (and some other files like ~/passwords.doc) which will immediately shutdown the computer on any kind of access of those files.
inotifywait -e open --fromfile ~/.config/killer/killer.cfg --outfile ~/.config/killer/killer.log
if [ $? -eq 0 ]; then
    poweroff
fi
Although I think the best way to deal with ransomware is a strong backup policy, not a tripwire shutdown trick.
I guess I will just set up a development VM (as @pas suggested) instead of remote development though, so thank you to both of you.
It's not going to do much against a determined adversary, anyway. If he's prepared to turn the NIC back on, he would just kill your inotifywait first.
Disabling the NIC is a nice idea too, it should also be much quicker than shutting the machine down, so, please, feel free to make the script better.
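One possible variant, sketched below. The interface name (eth0) and the honeypot path are assumptions, so adjust for your machine; the final sh -n only checks that the script parses, since actually running it needs root and a live inotify event.

```shell
# Hypothetical tripwire variant: take the NIC down before powering off
cat > /tmp/tripwire.sh <<'EOF'
#!/bin/sh
inotifywait -e open "$HOME/.ssh/honeypot_key" && {
    ip link set eth0 down    # cut the network first: faster than a full shutdown
    poweroff
}
EOF
sh -n /tmp/tripwire.sh && echo "syntax ok"
```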
The only "clever" thing I did was making sure the honey-pot key comes first in the ~/.ssh directory and that it's big (to gain time to poweroff while it's being transmitted to somewhere).
Clearly, this is not a protection against a determined adversary.
It's just that a lot of npm packages can have an excessively long dependency chain which make it harder to audit. And the npm ecosystem is massive.
I mean, sure. Change the KDF default to something modern. But the threat we're discussing is marginal, and it's not as if the security of the SSH network protocol, which is paramount, is under threat.
If you care about this class of attack, you probably care about it enough to use an SSH CA anyway.
That said; yes: you should have long-held auth on a hardware token and then use an SSH CA for temporary auth.
> an SSH key password is unlikely to be managed by a password manager: instead it’s something you remember.
Why is that the case? I've always been just as likely to use a password manager for my SSH key password—after all, I'm usually prompted for my SSH key password from my terminal, which accepts pasted passwords just fine.
Am I missing something?
I didn't know much about ssh keys when creating mine, so I followed directions online. The directions I followed created my key with no password. Soon my Mac encrypted the ssh key by itself, without my intervention (!), using my Mac logon password. The encryption format was the old ssh format the article complains about, even though my openssh supported the new one. The password I use to sign into my computer is not grabbed from a password manager.
Given that (as far as I know) the decryption is handled only on the client side, why does an old default matter? Does it break any kind of compatibility to change that? Why only use the better algorithm for EC keys?
And even if it did break compatibility it's not like it usually stops the OpenSSH folks, old ciphers and hash functions are deprecated regularly (I have to whitelist a bunch of ciphers at work to connect to legacy systems running an old dropbear).
For this type of application, security should trump backward compatibility, especially when in this case the breakage is fairly limited and easy to work around.
It's basically bad usability, where the default is unsafe, and in order to make it safe, you have to know what you're doing. Or is it like the author writes in 'TFA' that "we're stuck with it", because it used to be a default in OpenSSL?
Some people will say, "if you're using ssh-keygen we can expect that you know what you're doing". But this is a wrong assumption that gets people into trouble. I'd even say a minority of people who invoke ssh-keygen really know what's going on in detail and what providing a flag like "-t ed25519" actually does. I certainly don't, and I've had to generate quite a few ssh keys so far.
The argument that it's worse than plaintext hinges on you both using a weak password and using the same password for something else. If you don't do either of those things then it's much better than plaintext, although technically if you use the same strong password for something else then you may have weakened it by using it for old-format ssh; but it still has to be brute-forced.
The problem with these word lists is that they yield rather low entropy compared to random strings, when the attacker knows that a word list is used. That's because one word has just a little more entropy than 2 alphanumeric characters.
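A quick back-of-envelope check of that claim (pure arithmetic, nothing SSH-specific): entropy per symbol is log2 of the pool size.

```shell
# One word drawn from an 8k list vs. two random alphanumeric characters (62 symbols)
awk 'BEGIN {
  word  = log(8192) / log(2)    # ~13.0 bits per word
  alnum = log(62)   / log(2)    # ~5.95 bits per character
  printf "one word: %.1f bits, two chars: %.1f bits\n", word, 2 * alnum
}'
# one word: 13.0 bits, two chars: 11.9 bits
```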
And the Oxford dictionary has ~171k words, so not sure where a 10k list even comes into play
DICEWARE uses ~8k words, chosen so that people can remember and spell them.
edit: seems I'm wrong and they do actually have a passwords section: https://haveibeenpwned.com/Passwords
The simple fact that the HIBP password list contains 512 MILLION unique passwords...
But most of them were weak. And most of them have been cracked by hobbyists.
512 MILLION is not a large number of passwords to check. What do you think the hash rate of modern GPUs/ASICs is?
Extrapolating, 10k:6hrs == 512m:35 years
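The arithmetic behind that extrapolation, for anyone checking:

```shell
# Scale 10k candidates in 6 hours linearly up to 512M candidates
awk 'BEGIN {
  hours = (512e6 / 10e3) * 6
  printf "%d hours ~ %.0f years\n", hours, hours / (24 * 365)
}'
# 307200 hours ~ 35 years
```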
Martin Kleppman explained the problem back in 2013:
I also was using the OP numbers, versus trying to do any math or research myself first ::second facepalm::
I do. However, for it to be "worse" than plaintext you still need to be using it somewhere else, and (previously unmentioned) for the attacker to have a feasible way of attacking the other use. For instance if it's your desktop password, an attacker may have no way of leveraging that (not that I'm saying this overlap is a good thing security-wise). Whereas if they have your unencrypted ssh key then they 100% have your ssh key, no work required, no AWS instance, no cost.
I don't want to get too bogged down in this. I was using the old format, and I've now changed, so thank you!
In this case, the alternative would be not storing the password at all. Note that "plaintext" in the title refers to the SSH key and not the password protecting the SSH key.
That said: Ed25519 is a local optimum: the bigger win is to not have long-held credentials at all, and instead use an SSH CA or something like Gravitational Teleport. Doesn't work for GitHub though.
Master user records in an identity/SSO system. We use ADFS.
Use a protocol which provides a portable “signed”, temporary credential. We use SAML but OIDC also works.
Have a command-line client which auths to the SSO and then relays an identity proof to a trusted component (lambda, cloud function, dedicated instance, whatever your policy says is ok). The identity proof is just a saml assertion or oidc token.
Have trusted component validate the identity proof from SSO and generate time limited credentials (we issue ssh, kube and iam). Our client then auto-xfers those creds up to jumpbox but you could drop them on workstation instead if your policy allows.
Though it might work with a second, more limited (not-so-)SSO system for specific users and environments.
A jumpbox with 2FA might be workable. Servers only accept logins from the jumpboxes, but you don't have to get 2FA working on them. Essentially it's an internal VPN gateway. (Be on the lookout for the knobheads in IT that set up parallel login mechanisms without 2FA, like mounting the file share, virtual desktop logins from the hypervisor, Kerberos auth.)
Basically, you use long-held credentials to generate short-term credentials, and then use the short-term credentials themselves to access whatever server or service you need.
FYI it supports : ansix9p256r1, ansix9p384r1, ansix9p521r1 and brainpoolP256r1, brainpoolP384r1, brainpoolP512r1
That’s due to Curve25519 having a non-unit cofactor. There are a half dozen other weird properties of the curve, chosen for small efficiency gains or whatnot, which might also lead to security holes in any application that is not the single-signer or 2-party key agreement use case.
I get that this isn't a good practice, but plain brute-forcing a 20 character pw (~120 bits of entropy as alphanumeric, same as the MD5 hash size) is still impractical even if it's a low-cost MD5 calculation.
You’re right that you can imagine passwords that you still can’t enumerate. But suggesting that people have a non-reused memorized 120 bit password is, uh, optimistic.
And this should be one of the top priority passwords, so it's not too hard to imagine it being in a password manager or using a similar quality.
Also the 120 bits was an example and a good amount more than necessary. A 12 character password isn't exactly standard but it's not ridiculous to expect. Once you exclude the passwords that are so bad even bcrypt can't save them, the number of users where the password algorithm makes a difference starts to look a lot smaller than 100%.
In other words I wanted to know, if my SSH keys with md5 algorithm are fine, if I use a high entropy pw (which I do, especially for my ssh keys).
Sure some people will use short and reused passwords for ssh keys, but the persons using password protected ssh keys are way more informed about security than your average user. Password statistics are usually based on pw database leaks of your average user and/or sites that are not worth protecting.
There is an interesting short paper about password meters (I'm traveling without a laptop right now, but can link it later) that found people cared more about the strength of their password if the login it's protecting is more important.
Combining these two points I think the percentage of ssh key users that use at least 12 character passwords is rather high, but of course I have no evidence. I spoke however about prod pws with some devs at my company and the answers ranged from 12 to ~20 chars and completely randomized 40chars in pw managers. Nothing especially short. You can also copy your ssh key pw from a pw manager.
You still make a good point in your article and using MD5 is not ideal, but I was missing information for the evaluation of risk I'm in. High enough entropy passwords seem not only better than plaintext, but actually still safe to me when MD5 is used.
Anyway, it doesn't matter that your password is ygGucg,guc52f when your colleague's is bigbum99.
Also there are some ways to create memorable and long passwords. E.g. I'm using short picturable phrases and their translation in another language, sometimes adding a numeral if required. Example: "2YellowChopsticksZweiGelbeEssstäbchen" (even with spaces if allowed)
Very easy to remember for me, very high entropy, decent entropy if the pattern is known and requires a hand-crafted dictionary attack that even needs decent translation. E.g. in the example above chopsticks has two common German translations and the ä can also be written ae. Bonus points if you use it for a language you are currently learning.
Though I have thought of having a mechanical typing device, actuators for each key, that would just type them for me. But it would be conspicuous.
I do use keepass for all the hundreds of other passwords where possible.
How can I tell if my file is vulnerable once leaked?
Switch to the newer format, roll your keys regularly, improve your logging, honeypot with the old keys, put AppArmor/SELinux controls on the .ssh directory.
You can't tell if it's leaked or not unless someone starts using it. You can tell whether it's in the safe format right now by looking at the first few lines of the file.
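To see the difference side by side, you can generate a throwaway key in each format (paths and passphrase below are placeholders; -m PEM forces the old format, -o the new one):

```shell
rm -f /tmp/old_fmt /tmp/old_fmt.pub /tmp/new_fmt /tmp/new_fmt.pub

ssh-keygen -t rsa -b 2048 -m PEM -f /tmp/old_fmt -N 'demo passphrase' -q
ssh-keygen -t rsa -b 2048 -o     -f /tmp/new_fmt -N 'demo passphrase' -q

grep DEK-Info /tmp/old_fmt    # old PEM format: cipher named right in the header
head -n 1 /tmp/new_fmt        # new format: OPENSSH PRIVATE KEY header
```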
On recent versions, you need this in your ~/.ssh/config:

    Host *
        # For Keychain (acting as agent) to load the ssh keyfile
        UseKeychain yes
        # For ssh to then use Keychain as an agent
        AddKeysToAgent yes
Private keys stored on filesystems is an antipattern.
Could someone elaborate on this? Is this saying that the ssh-keygen utility shells out to the openssl command-line tool, which at one point defaulted to md5 for encryption? If this is the case, why would we be stuck with it?
I didn't change my macOS password to 8-characters (to save myself 20 seconds a day) until I thoroughly researched what key-stretching was being used under the hood.
>"Finally, most startups should consider not having long-held SSH keys, instead using temporary credentials issued by an SSH CA, ideally gated on SSO."
Can someone explain what "gating" SSH CA issued key with SSO means? How would SSO "gate" the keys?
EDIT: I previously incorrectly claimed that you need to pay Gravitational for Teleport SSO. That is incorrect: you only need to pay for _SAML_, specifically -- I forgot that you can GitHub auth into it, which is a form of SSO. (Though for most of our customers I think that a single trust store is a core part of SSO, and GitHub isn't a good SSO by that metric, by virtue of account reuse and the fairly tenuous links between users and organizations. GitHub does a great job of modeling open source interactions, but that model falls over a bit when translated to commercial software engineering orgs.)
Better yet, use U2F where you have control of how you are authenticated.
It’s tricky to find out what this DEK-Info stuff means
ssh-keygen -o -p -f file_name
ssh-keygen -o -p -f ~/.ssh/id_rsa
Installing random software from the internet is problematic.
(I could just give it a shot, but I reckon there will be gotchas)
Docker/LXD or a dev VM lets you access both simultaneously. Or create a separate user, but not as secure or convenient.
Yeah I was playing with a separate user account, could get some basics working but I wonder how far could I get with that.
What are the bigger security risks for that approach? Assuming constrained file permissions, and that no secrets are in ENV (https://gist.github.com/telent/9742059 )
On macos you can use Apple's native sandboxing. See for example http://mybyways.com/blog/creating-a-macos-sandbox-to-run-kod...
At least npm has the package-lock.json feature that lets you lock down the entire dependency graph. If the package-lock.json was created with a safe set of dependencies, then any dependencies compromised in the future won't affect you (because already released versions can't be modified). Maven and gradle both seem to entirely lack a comparable feature as far as I know; if they do have that functionality as an obscure feature or plugin, then I still criticize them for not making it more prominent. (Npm generates and uses package-lock.json by default!)