Upgrade your SSH keys (g3rt.nl)
444 points by mariusavram on Sept 23, 2016 | hide | past | favorite | 149 comments

Seriously, the default options to ssh-keygen should be all anybody needs. If you need to pass arguments to increase the security of the generated key, then the software has completely failed its purpose. Passing arguments should only be for falling back on less secure options, if there is some limiting factor for a particular deployment.

There is absolutely no reason to pass arguments to ssh-keygen. If it is actually deemed necessary to do so, then that package's installation is inexcusably broken.

Not all systems that you might want to use your keys on support Ed25519, and this was especially true when it was first introduced to OpenSSH. Similarly not everything can handle the new key format. (Interestingly, there's another way to increase the resistance of SSH private keys to password brute-forcing that uses PBKDF2 and is more widely supported, but there's no way to create keys that use it using OpenSSH itself.)
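As a hedged aside (the path, round count, and empty passphrase here are arbitrary demo choices, not from the article): on reasonably recent OpenSSH you can opt into the new key format and a higher KDF work factor at generation time, which is the built-in way to slow down passphrase brute-forcing:

```shell
# Generate an Ed25519 key (always stored in the new format) with 100 KDF
# rounds. -a sets the bcrypt KDF work factor; it only matters if the key
# has a passphrase, which this throwaway demo key deliberately lacks (-N "").
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -a 100 -N "" -C "" -f /tmp/demo_ed25519
```

For an RSA key on older setups, `-o` requests the same new format explicitly.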

Are you referring to something like this? http://blog.patshead.com/2013/09/generating-new-more-secure-...

By default, ssh-keygen leaks info about your computer ("user@host"). Passing -C "" takes care of this.
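Since the comment only lives in the one-line public key (the last whitespace-separated field), you can also strip it from an existing .pub file after the fact. A sketch on a throwaway file, using a key quoted elsewhere in this thread with a made-up user@host comment:

```shell
# A public key line is "type base64-blob [comment]"; keep only the first two fields.
printf 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAx5F7iJTDwbPrdhrTtVdQRtozcRDvGNuU7BB+4+mHi alice@laptop\n' > /tmp/demo.pub
awk '{print $1, $2}' /tmp/demo.pub
```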

I just throw a script in my ~/bin folder called `keygen`:

  exec ssh-keygen -t rsa -b 4096 -C "" "$@"

If you're concerned about hostname leakage in your pubkeys, you're almost certainly doing something wrong.

If you want SSH access to Github or Gitlab, you'll need to paste your pubkey there. Leaking your user@host can be a concern if you're trying to maintain anonymity, e.g. Gwern, particularly if your username is your real name. You can strip out the user@host part from the paste, but it's safer to just get rid of it. It's easy to accidentally paste the whole thing into an .ssh/authorized_keys file, for example.

Admittedly this isn't an issue for most people.

If you're trying to maintain anonymity, why is your username your real name?

Even if it's not, your default hostname when using a MacBook Pro is typically "<username>s-MacBook-Pro.local" which reveals you're using a MacBook Pro. That info leak probably doesn't matter, but generally you want to reveal as little information as possible.

What kind of person cares enough about anonymity to change the comment in their ssh key, but not change the default hostname of their Mac? That person is very inconsistent.

Why does ssh-keygen include unnecessary information by default?

Because it helps usability when the user is editing ~/.ssh/authorized_keys - if the default is not changed, the key comment has some context, so finding the right key is easier.
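As a concrete illustration (the hostnames are hypothetical, the key blobs are the ones quoted elsewhere in this thread), an authorized_keys file with comments left in is self-documenting:

```
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAx5F7iJTDwbPrdhrTtVdQRtozcRDvGNuU7BB+4+mHi alice@laptop
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6WfINFFvzT+Z+l5sYq9zJoyXPLL27v9vvE1+p1XOiW alice@work-desktop
```

Without the trailing comments, revoking the right key when one laptop is retired means comparing base64 blobs.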

I strip comments from my SSH keys too for the same reason you do. But most people don't seem to care and there is a reason for the default.

Hardly unnecessary. Very useful to see which key is what when you have multiple.

You're right. I just checked mine and it was:


Not a big deal but it does reveal my ISP and state.

As one example, a hostname leak might hypothetically be useful for a spearphishing+XSS attack.

GitHub does not display the comment of the SSH key to the public, check yourself:


It's worth noting both GitHub and GitLab strip the comment off the keys.

https://github.com/Lattyware.keys - GitHub leave it blank:

  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAx5F7iJTDwbPrdhrTtVdQRtozcRDvGNuU7BB+4+mHi
  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6WfINFFvzT+Z+l5sYq9zJoyXPLL27v9vvE1+p1XOiW
https://gitlab.com/latty.keys - GitLab add a generated comment:

  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6WfINFFvzT+Z+l5sYq9zJoyXPLL27v9vvE1+p1XOiW Gareth Latty (gitlab.com)
  ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAx5F7iJTDwbPrdhrTtVdQRtozcRDvGNuU7BB+4+mHi Gareth Latty (gitlab.com)
In my case, I'd actually quite like the comments to be left in, one of those is my personal key, one is for work. Differentiating is useful depending on why I have access.

Certainly, but your user@host will be leaked to the staff of Github / Gitlab and to anyone that compromises your account. The unfortunate thing about anonymity is that a small mistake can be costly. It depends what your threat model is, though.

The comment is set to what it is because it's an easy way to tell where the public key comes from. It vastly simplifies key management, which is much more important than the tiny bit of anonymity you get from not setting the comment.

Anonymity is a different issue from security. You might get a little extra of it by stripping user info from keys, but IMO for anonymity you should ensure your user info does not link to the real person.

You're writing as if a person can only ever have one pubkey; you can create a new pair just for GitHub.

How so? Why should I give out my username and host with the key? In many applications I see no need for them.

No, this is a completely unnecessary piece of information that ssh is unnecessarily jamming into the pubkey. What possible use does including the hostname of the generating machine serve for public key authentication?

Today it's my hostname. Tomorrow ssh-keygen (with default arguments) could start including more sensitive information, like the IPv6 address of my machine that's open to the internet and its exact OS version, and leave me open to a lot of attacks. Hence, I'm not going to trust the default arguments for it.

First, this field is important. It's not used for authentication in any way, as it's just an optional comment, otherwise ignored by sshd. It's not information for sshd, though: it's information for the human user. Without this comment you don't have an easy way to tell where the keys come from.

Second, if you don't trust ssh-keygen, why the heck do you use it in the first place?

I didn't say I don't trust it, just that I don't trust that it won't leak private info when using the default arguments.

Why would it suddenly leak such information? User+host has been put in the comment field for a dozen years already, if not longer (and there's quite a good explanation of why it is there). Where does this distrust of yours come from? It's completely opaque to me.

Why not just use an alias?

    alias keygen="ssh-keygen -t rsa -b 4096 -C ''"

Aliases are per shell process. You need to reload your .bashrc (or whatever file you define aliases in) in every shell. Shell scripts are instantly available in all shell instances. Also, shell scripts can be invoked by shell scripts, which aliases can't.

I agree, but for something as simple as that I'd prefer a script to invoke `ssh-keygen` directly instead of relying on the presence of a one-line wrapper in the PATH.

> Passing arguments should only be for falling back on less secure options

This is the case with the secure-delete package for srm and sfill. Each argument changes the writes to make them less secure.

Well, the OP says:

> Generate your new sexy Ed25519 key

I, too, say: no f'ing thanks. https://en.wikipedia.org/wiki/Curve25519 says: "In cryptography, Curve25519 is an elliptic curve offering 128 bits of security. The curve is birationally equivalent to Ed25519, a twisted Edwards curve."

Ecrypt II - the EU project into encryption security says: https://www.keylength.com/en/3/

That gives you

> Very short-term protection against small organizations

> Should not be used for confidentiality in new systems

and is equivalent to an 816-bit RSA key

So how is this new key "sexy" in any way?

First of all, the Ecrypt project is really outdated. Second, you're reading the table wrong. Curve25519 gives you 128 bits of symmetric security, but the curve is 255 bits long. So from your table this compares to 3248-bit RSA. Third, I'm no fan of such key number tables; they're a bit arbitrary and don't really reflect the complexities of modern cryptography.


Please comment civilly and substantively or not at all.


The paper of the designers of Ed25519 specifies the design of the curve: the modulus is 2^255 - 19 (note that this is how the curve got its name); in the table you quote it's equivalent to (roughly) 3000 RSA bits:


You are misunderstanding. Curve25519 offers 128 bits of symmetric security, or the equivalent of a 256 bit elliptic curve key.

Even if that is true, it's still not enough to protect anything of value.

That's not what your own source says about 128-bit symmetric keys:

> Long-term protection

> Generic application-independent recommendation, protection from 2016 to 2040

Total nonsense.

25519 is an asymmetric key using elliptic curves.

As both rockdoe and hannob have mentioned, ed25519 provides the same attack resistance as a 128-bit symmetric cipher. This is even mentioned in the original DJB ed25519 paper:

> High security level. This system has a 2^128 security target; breaking it has similar difficulty to breaking NIST P-256, RSA with ≈ 3000-bit keys, strong 128-bit block ciphers, etc. (The same techniques would also produce speed improvements at other security levels.) The best attacks known actually cost more than 2^140 bit operations on average, and degrade quadratically in success probability as the number of bit operations drops.

> similar difficulty to breaking NIST P-256

which takes our comp sci lab 35 seconds on a cluster of 8 machines.

so still no.

If that's possible, I'm sure you'll have no difficulty finding a public source to cite that states the same.

All the public sources I've found say it is secure.

Except for the ones recommending 512-bit ECs.

Perhaps you can find the parameters for a 512-bit, non-NIST EC and we can both be happy?

Otherwise I'll stick with 8000- and 15424-bit RSA, thanks.

Absolutely incorrect. Anything above 90-100 bits of effective (symmetric) strength is currently more than enough, even against nation state actors.

112-bit effective strength has been the required MINIMUM for FIPS/NIST since 2014, and that is just to protect you from NON nation state actors.


Plus, we KNOW they were routinely breaking 64-bit effective strength since the '90s. Thirty years on we need a lot, lot more than twice the strength, even against half-decent hackers, but changing the backbone is considered "too expensive".

So that isn't ME saying the US doesn't have anything valuable enough to protect. Its the US standards agencies.

128 bits isn't twice as strong as 64 bits. 65 bits is twice as strong as 64 bits. 2^64 ~= 10^19 - or 10 million million million times stronger.

Technically it's around ln(n) ln(ln(n)) stronger, "best case".

That's a lot closer to 2 than to 10 million million million.

Whoever taught you that integer factorisation and the DLP are order c^n was either lying to you or had been lied to.

If it really was equivalent to 816-bit RSA, messages encrypted with it would be cracked within hours, and nobody would use ECC... I'd like to see proof that 256-bit Curve25519 is really that weak.

Something I don't understand is the "hate" that RSA gets. Yeah, Elliptic Curves are promising, have benefits (smaller/faster).

But RSA isn't broken, it is well understood, is "boring" (a plus for security, usually), has bigger bit sizes (according to people that know a lot more than me that's a plus point, regardless of EC requiring smaller ones, because of certain attacks), isn't hyped and sponsored by the NSA, and isn't considered a bad choice by experts.

Not too many years ago Bruce Schneier was skeptical about EC, because of the NSA pushing for it. Now, I also trust djb and I am sure that ed25519 is a good cipher, and there are many projects, like Tor, that actually benefit from it, increasing throughput, etc., but for most use cases of SSH that might not be the issue, nor the bottleneck.

So from my naive, inexperienced point of view RSA might seem the more conservative option. And if I was worried about security I'd increase the bit size.

Am I going wrong here?

The bit size of the RSA modulus and the bit size of ECC keys aren't really comparable, as what matters is the number of operations required to break the primitive.

A better comparison exists between multiplicative group crypto (DH/DSA) and their Elliptic Curve variants. In multiplicative group crypto there is a trivial, obvious mapping to the ring of integers (in less mathematical terms, "factorization makes sense"), so you can use techniques like Pohlig-Hellman (https://en.wikipedia.org/wiki/Pohlig%E2%80%93Hellman_algorit...) and Index Calculus (https://en.wikipedia.org/wiki/Index_calculus_algorithm). In Elliptic Curve groups we use, such a trivial mapping does not exist, although there are some special cases where you can achieve this (http://crypto.stackexchange.com/a/8344). Consequently, larger numbers are needed for security in these cases (along with strong primes).

The "bad press" RSA gets is likely due to the fact that in however many years of existence there have been a number of attacks on implementations - for example, side channel attacks, Bleichenbacher's attack on RSA PKCS#1_5 padding etc. Here's a survey: https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf . A simple explanation is that implementing RSA is a fun exercise for personal experimentation, but implementing RSA for use in the wild is fraught with difficulties.

Ed25519 has some nice properties aside from the size of keys and signatures. It is deterministic, removing the requirement for cryptographically random (and therefore almost unique) k - if you forget this with plain ECDSA you end up in the Sony PS3 scenario. The provided software does not rely on indexes or branch instructions, removing timing issues in those cases and making the code constant time. It has what Bernstein calls "twist security" (https://safecurves.cr.yp.to/twist.html). Ordinarily, an ECC algorithm should check that a calculated or received point, particularly where point compression is used, exists on the curve in question - however with DJB's "twist security" and in particular in ed25519 this is unnecessary, simplifying the job of the implementer.

In other words, it is harder to get an implementation of ed25519 wrong.

As for the NSA pushing ECC, more intelligent people than me may have different views, but I believe that ECC in general is sound, taking into account the sum of our knowledge so far. As with any algorithm there are specific choices of curves and constants which weaken the setup, and we have a case study where the NSA pushed poor choices deliberately: https://en.wikipedia.org/wiki/Dual_EC_DRBG . It comes down, therefore, to whether you believe the constants chosen for the NIST curves are "cooked" or not. I can't see any evidence to support this, but that doesn't mean it hasn't happened. However, Curve25519, Ed448, Curve41417, FourQ and the Brainpool curves are all designed outside the NIST system and not endorsed by the NSA, and Bernstein includes in his security evaluations the concept of "rigidity", i.e. a full explanation of how curve parameters are generated (see https://safecurves.cr.yp.to/rigid.html) (although, apparently, there are some issues with Brainpool curve generation: https://bada55.cr.yp.to/brainpool.html).

If you want to measure "time of life" as a measure of safety, using the general assumption that the longer something remains unbroken the safer it is, RSA certainly scores better in that sense (although ECC goes back to the 90s and Curve25519 to 2005, so have been around a while too) and for a correct, good, well vetted implementation of RSA I see no reason personally to stop using it, with the caveat that, as you've noticed, if you have a high throughput system, you get equivalent security more efficiently with ed25519.

An interesting recommendation in the original article is to use smart cards for high assurance environments. I personally use an OpenPGP card (v2.1) and an authentication subkey on it acts as my SSH key. For this I use 4096-bit RSA keys - the OpenPGP v2.1 spec as implemented by this card does not support ECC at all, even though the actual hardware chip (http://www.basiccard.com/overview.htm) is capable of 512-bit ECC curves. The OpenPGP Card 3.0 standard includes ECC support but so far as I am aware no cards implementing it are available. The same is actually true in the JavaCard world; hardware support exists, but most JavaCard implementations you can buy are "limited" to 2048-bit RSA (If anyone knows differently I would love to know, please yell), which should be good enough for the moment but it would be nice to have some wiggle room.

So to summarise, the "hate" that RSA gets is likely due to the number of times people have seen mistakes in implementations. That said, the article makes a good point regarding the strength of brute forcing your key passphrase if you are not storing your keys on smartcards. If you are going to regenerate your key, I see no reason not to use ed25519 as you will likely be using it for host authentication anyway (modern remote hosts likely generate ed25519 keys by default, rather than RSA). I'd use it on my smartcards if I could, since there the timing difference for a signature would be noticeable. I have multiple ssh keys and those not stored on a smartcard are all ed25519. If you are considering embedded environments or high throughput ones, or many signature checks, ed25519 makes even more sense.

    but implementing RSA for use in the wild is fraught with difficulties
Side question: is there a comprehensive list of those issues/requirements anywhere? I built a list of about five requirements based on the Cryptopals challenges, but I'm fairly sure there are more. Before anyone asks, this is an academic interest rather than an attempt to implement crypto.

Mathematically you do not need to determine the primes to brute-force an RSA key... with some fast GPUs you can break some RSA keys daily... all I have to do is GUESS A PRIME and try it on the resulting encrypted message and measure the result for word and letter frequencies in modern human languages... Scary, is it not?

I disagree with the author. Before you go upgrading to ed25519, beware that the NSA/NIST is moving away from elliptic curve cryptography because it's very vulnerable to cracking with quantum attacks[0].

"So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you haven’t yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks."

Stick with the battle-tested RSA keys, which are susceptible but not as much as ECC crypto. Use 4096-bit or, even better, 8192-bit lengths.

There are no perceptible user benefits to using ed25519 and it's not even supported everywhere. Also, you won't have to rotate all of your keys when workable quantum computers start crackin' everything.

[0] https://blog.cryptographyengineering.com/2015/10/22/a-riddle...

Both RSA and Elliptic Curve will fall to practical quantum computing, so the idea that you should use (weaker, slower) RSA today instead of curves because of QC is... dubious.

If you see quantum computing as a practical threat (because: you're encrypting and storing static data that needs to resist cryptanalysis for 20-50 years), you need to use a post-quantum cryptosystem. Unfortunately: nobody knows which pq systems will truly hold up to practical quantum computing, and it's possible --- even likely --- that implementations of pq schemes have dumb errors that nobody has thought to check for yet.

That's why Google's first foray into deploying pq crypto feeds both a curve computation and a Ring-LWE PQ computation into its KDF, so that unknown bugs in the RLWE exchange can't destroy the security of TLS. That's how you would seriously account for quantum computing in SSH.

The recommendations in this post are solid.

There is one argument to be made in favor of large key RSA when it comes to quantum computers: ECC may fall a bit earlier. There can be a situation where it's feasible to produce a quantum computer large enough to tackle a 255 bit ECC key, but not a 4096 RSA key. I have heard from (a few) cryptographers that this is a reason to stick with "old school" DH/RSA crypto.

I discussed this with various people involved in the postquantum debate and the general feedback I got was that this is likely not a big issue, because once quantum computers can be scaled it'll probably not be that hard to scale them up to RSA-breaking size. I think DJB once said something like "this will buy you a year" to me.

"battle tested" and "resistant to quantum computing" are complete opposites


Well, at the moment, they are "opposite" as well.

No battle tested crypto schemes are post-quantum safe, and no post-quantum crypto schemes are battle tested.


IAD recommends not upgrading to ECC in case this would incur significant costs, only to have to upgrade again once they settle on quantum resistant algos. So not because of crypto reasons, not because ECC is more vulnerable to quantum attacks relative to RSA, but because of practical operational and economic reasons. They think they will have quantum algorithms soon enough so that RSA >=3072 is OK in the meantime. At least that's my understanding of what they're saying here: https://www.iad.gov/iad/programs/iad-initiatives/cnsa-suite....

Are we not questioning the NSA's motives here given recent events?

"Don't use elliptic curve crypto because it's vulnerable to quantum attacks... keep using the stuff that is vulnerable to our in-house attacks"

Noob question here: why move just one step ahead? Why not 8192 or, hell, 16384? I can see it can lead to higher CPU consumption on often-used keys, but for keys that are not accessed more than a couple of times a day, why is it such a bad idea to overdo it?

Diminishing returns. Here is what GnuPG says about it https://www.gnupg.org/faq/gnupg-faq.html#no_default_of_rsa40...

I have multiple RSA keys of various lengths for servers that don't support my Ed25519 key (some cloud services sadly limit key length AND require RSA, despite their underlying ssh server actually supporting better). I routinely use 16384 bit RSA keys daily and have never had a noticeable performance impact. Sure, there's clearly the diminishing returns argument, but as there would be no impact for most folks why not?

In software, "just because" isn't sufficient justification.

4096+ is noticeably slower.
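One way to feel that yourself (the path is a throwaway demo choice; key generation scales worse than signing, but it gives a rough sense of the cost curve):

```shell
# Time generation of a 4096-bit RSA key; try -b 8192 or 16384 to see the
# superlinear slowdown. Throwaway path, no passphrase, demo only.
rm -f /tmp/demo_rsa4096 /tmp/demo_rsa4096.pub
time ssh-keygen -t rsa -b 4096 -N "" -C "" -f /tmp/demo_rsa4096
```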

Someone correct me if I'm wrong, but I believe it's not just used once a day. It's used for every packet you send while connected.

The ssh protocol specs are actually quite readable. No, the slow asymmetric key is only used once to derive a fast symmetric key for the session. If you set up a control master, then repeated logins to the same server (within some narrow window) will all multiplex over the same channel, too.

For those who have never heard of ControlMasters, I highly recommend them. SSH can multiplex multiple sessions (you invoking ssh at the terminal) over a single TCP socket; this feature is called ControlMaster. The first ssh command takes the normal amount of time, but every command after that is just ~a round-trip. No slow asymmetric key exchange. If you close all your connections, there's a (configurable) timeout until the ControlMaster's connection closes.

You can combine this fact with zsh's autocomplete powers, and get pretty-close-to-instant (on a good connection) tab-completion of directories on the remote server, which is extremely nice when trying to scp something, as you can tab complete the paths in that command.

(It also saves a few PIDs on the server, as a single sshd child deals with all your connections.)
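A minimal ~/.ssh/config sketch for this (the option names are real OpenSSH options; the socket path pattern and timeout are just common choices):

```
Host *
  ControlMaster auto
  ControlPath ~/.ssh/cm-%r@%h:%p
  ControlPersist 10m
```

ControlPersist keeps the master connection alive in the background for the given duration after the last session closes.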

> You can combine this fact with zsh's autocomplete powers,

I found that recent versions of SSH have begun hashing `~/.ssh/known_hosts` which nerfed zsh autocomplete pretty badly. You need to set "HashKnownHosts no" in `~/.ssh/config`. The SSH change was made to prevent a key compromise from giving an attacker a ready-made list of vulnerable next targets.
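For what it's worth, existing plain-text entries can also be hashed in place with ssh-keygen. A sketch on a throwaway file (the hostname is made up; the key blob is one quoted elsewhere in this thread):

```shell
# -H hashes every hostname in the file (a backup goes to <file>.old);
# hashed entries start with "|1|".
printf 'git.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6WfINFFvzT+Z+l5sYq9zJoyXPLL27v9vvE1+p1XOiW\n' > /tmp/demo_known_hosts
ssh-keygen -H -f /tmp/demo_known_hosts
grep -c '^|1|' /tmp/demo_known_hosts
```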

But can't they also just find the hosts by looking at your bash history?

Bash history rotates (or can be managed, which a lot of people do), whereas known_hosts will have every server you ever accessed and is rarely ever touched.

I'm pretty sure you're wrong. These asymmetric crypto keys, which are slow to encrypt and decrypt with, are used only during the handshake to securely negotiate a per-session key for symmetric crypto, for which encryption and decryption are extremely fast.

I am indeed mistaken, thanks!

Those keys are used at the beginning of each connection, primarily to authenticate and negotiate a session key. The session key is then used for every packet.

Can someone explain to me why RSA 2048 is "recommended to change"? It's still the default for gpg keys and as far as I know is widely thought to be secure for at least a few hundred years!

It could have to do with the NSA's August 2015 plan for transitioning to quantum-resistant algorithms. In their new Commercial National Security Algorithm (CNSA) Suite, they advise a minimum 3072-bit RSA modulus:


(Doh... I don't know why I'm getting an invalid certificate authority error when trying to access that site, but Qualys SSL Labs confirms it's a real error. Yikes.)

The certificate is issued by the DOD's internal CA. Not sure why they're using one for a public facing site though.

They're DOD, so why not. A lot of them do. What confused me was that browsers in the US didn't trust DOD PKI, which is probably quite reliable, while they have plenty of shady, less-secure CAs on their list.

Also: with quantum computing still in its infancy, how do we actually know which types of keys would be adequate?

Koblitz and Menezes explore various theories about the NSA's new policy in their paper, A Riddle Wrapped In An Enigma:


For anyone reading the thread, this is an absolute must-read paper if you're at all interested in the near-future of production cryptosystems in high-risk settings.

Not a direct answer to your question, but http://pqcrypto.org has a lot of great information on post-quantum crypto.

I did not read it due to the certificate problem. Could someone post an abstract here?

Without commenting at all on the practicalities of attacking it, it's reasonable to expect a higher level of security from GPG than from a standard TLS connection.

Your web server needs to key and manage many connections per second, and any delays feed back into my web developers' arguments against the use of TLS. You can generally expect no one will be trying to break your current TLS sessions ten years from now. If that doesn't apply to your situation, you probably have bigger problems, such as the broken CA model.

Both of these scenarios may be different with GPG.

2048 bit is fine.

It's probably not secure for a hundred years because of quantum computers. But none of the currently supported algs in openssh protects against quantum computers, therefore there's nothing you can do against it right now.

No, not even close to a few hundred years. No more than ~20 years at this point. https://www.keylength.com/en/4/

Read: you are safe using 2048 now.

The arguments for using a longer modulus all reduce down to "you'll have to eventually, why not now?" rather than any new threat against 2048-bit modulus keys.

If you have servers too old to work with the latest keys, you can easily modify your ~/.ssh/config to automatically use a per-machine private key file:

  Host foo.example.com
    IdentityFile ~/.ssh/my_obsolete_private_keyfile

Additionally, you can have substitutions in there:

  Host *
    IdentityFile ~/.ssh/%r@%h
This is indeed better, as a compromise means only the private keys on the affected host are affected. That of course works better with a keypair per source/destination combination.

Note that this _does_ cause issues sometimes, as it means your ssh-agent (you _are_ using one of those, right?) will have more keys than a typical auth attempt limit, and software that assumes trying all your keys before prompting for input is a sane thing to do will fail.
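The usual fix for that failure mode is to pair the per-host IdentityFile with IdentitiesOnly, so ssh offers only the configured key instead of everything loaded in the agent. A config sketch (the %r@%h path pattern is carried over from the comment above):

```
Host *
  IdentityFile ~/.ssh/%r@%h
  IdentitiesOnly yes
```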

That you should be doing anyways, shouldn't you?

You certainly can; it's almost certainly unnecessary though. SSH keys are not like passwords: compromise of a server with your pubkey in no way affects the security of your private key. Many people will post pubkeys online so they're easy to add to new servers.

Adding to this: GitHub publishes the public SSH keys of all of their users, for example: https://github.com/teozkr.keys

I use a different keypair for every service, and for each device connecting to the service because keys are cheap and easy to manage. One advantage is that if one or more private keys is compromised or potentially compromised, you don't need to revoke and reissue all keys. Another, possibly more practical advantage is that I prefer to have a per-service persona that is not tied to other services. Shared public keys makes it easier to link together different accounts as belonging to the same person.

RSA 2048 is still the openssh default, i.e., best current advice from the openssh authors. The fact that this article's author labels that as "yellow" is a red flag.

I agree. It would be nice if the line:

> RSA 2048: yellow recommended to change

was followed by at least some link that explains why is it unsafe. Could anyone elaborate further?

UPDATE: there is now a link up in this thread, from the user fjarlq, which points to an NSA recommendation from 2015. The recommendation seems to be related to the need of having a "quantum computing resistant" key. But with quantum computing still in its infancy, how do we know which types of keys would be adequate?

Many theoretical aspects of quantum computing are well-understood, just as a lot of early work on computational complexity etc. predates the existence of (nonhuman) computers.

I think we don't know which one will be adequate but we do know that RSA won't be, hence the recommendation.

An SSH private key does not need to protect future secrets; it just signs an ephemeral challenge. So it doesn't really make sense to worry about future quantum crypto etc. I'd posit that even 1024 is probably still safe enough there (unless you have quite scary enemies targeting you in particular).

That is a concern for the DH key establishment though, that might be decrypted in future.

> That is a concern for the DH key establishment though, that might be decrypted in future.

If you're paranoid, configure your SSH server to only accept Curve25519-based key exchanges, only use AES with authenticated modes or CTR+ETM or chacha/poly1305, and only take ed25519 or long RSA authentication keys.

Assuming your clients are up to date it should work without any major impact. I also strongly recommend rejecting NIST "random" curves in your hostkey verification, better RSA or ed25519 than the current default of the somewhat questionable ECDSA-based keys.
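A sketch of what that looks like in sshd_config (the algorithm names are real OpenSSH identifiers, roughly 6.7+; trim the lists to taste, and keep a second session open while testing so you don't lock yourself out):

```
KexAlgorithms curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
HostKeyAlgorithms ssh-ed25519,ssh-rsa
```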

Won't the quantum computer break the curve25519 key exchange?

Yes. This advice is incorrect with regards to quantum computers.

Yes, eventually, but there's a lot bigger concerns than quantum computers currently.

The same arguments were being made in some Reddit threads on the same post; I don't see any reason or new information to point towards RSA 2048 being a questionable or unreasonable choice.

If quantum computing becomes more accessible, there will be a quantum shift (forgive the pun) in how we secure our connections.

A shift of a very, very tiny yet discrete amount?

Not sure if you're just being sarcastic?

A quantum shift, or quantum leap, in common parlance borrows the discontinuity in energy levels in quantum physics as an analogy for a sudden change (compared to the continuous changes in classical descriptions of physical characteristics).

The size is not always part of the analogy except in that the "quantum shift" is far larger in all circumstances than the infinitesimal changes in classical systems. Relative to an infinitesimal the quantum is huge [analytically it is infinitely larger, but the metaphoric analogy doesn't stretch that far].


Yeah, I don't get it either.

Can anybody elaborate on the idea that for RSA <=2048 is potentially unsafe? Is it true? It seems that even 1024 bit keys haven't been factored yet, much less 2048, so why use anything else currently?


I want someone to explain this too -- the article does not give any strong evidence that RSA <= 2048 is unsafe.

Ed25519 is fast, but I don't think it's enough faster to be an argument for using it. Running the BoringSSL speed tool on a Skylake mobile processor:

    Did 1083 RSA 2048 signing operations in 1017532us (1064.3 ops/sec)
    Did 29000 RSA 2048 verify operations in 1016092us (28540.7 ops/sec)
    Did 1440 RSA 2048 (3 prime, e=3) signing operations in 1016334us (1416.9 ops/sec)
    Did 50000 RSA 2048 (3 prime, e=3) verify operations in 1014778us (49271.9 ops/sec)
    Did 152 RSA 4096 signing operations in 1000271us (152.0 ops/sec)
    Did 8974 RSA 4096 verify operations in 1076287us (8337.9 ops/sec)
    Did 6720 Ed25519 key generation operations in 1029483us (6527.5 ops/sec)
    Did 6832 Ed25519 signing operations in 1058007us (6457.4 ops/sec)
    Did 3120 Ed25519 verify operations in 1053982us (2960.2 ops/sec)
RSA key verification is still extremely fast.

(also don't look at these numbers purely as speed, but as CPU time spent)

Not everyone is using Skylakes. My gateway is a Raspberry Pi (for historical and power reasons) and my most often used client is ARM based.

I always find it interesting to see the disparity between comments here, and in /r/netsec on matters like this.

Here: Generally positive. Netsec: Most upvoted comments being complaints and assertions that it's bad advice. And for some reason, upvoted comments about NSA involvement in curves (which is exactly what 25519 is not).

I've started moving away from reddit for any kind of actual discussion about technical topics. It's just so... Toxic.

I don't know if it's just my point of view changing, or if it's gotten worse over the last few years, but when I see a thread there later, the top comments are always negative, dismissive, and so full of bullshit.

Just look at the recent Lenovo issue: HN had some good discussion on what could be the actual causes and how to fix some of these problems (and why MS isn't entirely off the hook there), but reddit was just full of "Microsoft loves Linux!" jokes and witch-hunting with very little to go on.

Constructing an open-access forum such that the most highly rated comments actually have high levels of information and relevance is difficult at best. Probably impossible without some specific test / proof of competence in the area.

Was half expecting to see somebody say "rainbow table", but the comment in question is even better than I could have imagined.

Someone did say "rainbow table". Fortunately it's just further down.

Security is not my specialty, but being a developer I obviously wade into this field. Having read this article I will say this to OP and the author:

Thank you, I am sufficiently paranoid to change my keys now.

If you have any RHEL machines, you might wanna keep an RSA (or ECDSA) key around. RHEL doesn't support Ed25519.

I haven't checked, but I presume this also goes for CentOS, Scientific Linux, and other derivatives.

The article claims that RHEL7 supports Ed25519. Maybe RHEL6 and older do not.

For those who have just updated to macOS Sierra, the default SSH client configuration is to not allow ssh-dss keys any longer.

Follow these instructions to update your keys.
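If you're not sure which of your keys are the deprecated ssh-dss type, `ssh-keygen -l` prints each key's size, fingerprint, and type. A throwaway key is generated here (in a temp dir, with an empty passphrase, purely so the snippet is self-contained); point the last command at your real `~/.ssh/*.pub` files instead:

```shell
# Generate a disposable demo key in a temp dir, then inspect it.
dir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$dir/demo"
# Prints e.g. "2048 SHA256:... (RSA)"; an old key would show "(DSA)".
ssh-keygen -l -f "$dir/demo.pub"
```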

This is my standard on new server setup (which is admittedly overkill, but I'd rather have it slightly slower and safer):

sources.list (if you're on an older version of Debian):

    deb http://http.debian.net/debian wheezy-backports main

    apt-get -t wheezy-backports install --reinstall ssh

Regenerate host keys (do not password protect server side keys):

    cd /etc/ssh
    rm ssh_host_key
    ssh-keygen -t ed25519 -f ssh_host_ed25519_key -a 256 < /dev/null
    ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null

Server side, /etc/ssh/sshd_config:

    Protocol 2
    HostKey /etc/ssh/ssh_host_ed25519_key
    HostKey /etc/ssh/ssh_host_rsa_key
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client side, ~/.ssh/config:

    Host *
        KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
        Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
        MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client key generation:

    ssh-keygen -t ed25519 -a 256 -f yourkey.key -C whateveryouwant

For people like me who create a new server and put our public SSH keys in ~/.ssh/authorized_keys, can you explain what doing this does, and why it's good?

I can see you're limiting it to particular algorithms/cyphers, but the rest?

The stuff towards the top eliminates the insecure keys your server will try to default to. No reason to keep them on the server (IMO) - they're primarily there for backwards compatibility. I prefer to remove them entirely so I know my client/server won't ever negotiate an insecure connection.

Do you have a link to a gist and/or your other work?

To be clear, the links at the top aren't mine (sorry if it seemed that way). Just sources I've used to create what I've got.

why do you need to do client-side ? The server side will enforce the algorithms that are allowed.

That's because I don't always control the servers I'm connecting to. Setting that client-side will prevent me from ever connecting to something that's completely insecure unknowingly.
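Before pinning algorithms client-side, it's worth checking what your own OpenSSH build actually supports; `-Q` (available since OpenSSH 6.3) lists the valid names for each config keyword:

```shell
# Names usable in KexAlgorithms / Ciphers / MACs / HostKeyAlgorithms lines:
ssh -Q kex      # key exchange algorithms
ssh -Q cipher   # symmetric ciphers
ssh -Q mac      # message authentication codes
ssh -Q key      # public key / host key algorithms
```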

So I once read somewhere that RSA is simpler to implement than most other algorithms, and hence it's a safer choice, because weaknesses typically come from suboptimal implementations more than from the cryptographic algorithm itself. (Unless you use known-broken things like MD5 or 3DES.)

And I think that was in the context of some DSA or ECDSA weakness, possibly a side channel attack or something similar. I forgot the details :(

What are your thoughts on this? Should we focus more simplicity and robustness of the implementation, rather than just the strength of the algorithm itself?

Actually, if you read DJB's fine papers he very much touches on this. Curve25519 (now X25519) is specifically designed to avoid the implementation pitfalls. The reference implementation is not too hard to understand, though granted the optimized versions are a little more delicate. Still, I imagine the optimized RSA implementations are no better.

Maybe a more technical/comprehensive read is this[1] writeup, which I see some others have linked to. Prior HN[2].

[1] https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

[2] https://news.ycombinator.com/item?id=8843994

What's the problem with ECDSA?

Something I found to be a helpful resource while setting up SSH on a recent server is Mozilla's SSH Guidelines - https://wiki.mozilla.org/Security/Guidelines/OpenSSH

> RSA 2048: yellow recommended to change

Could someone provide a link with a decent explanation why? Is it solely out of fear that it will soon be cracked on a quantum computer?

Why not use the current year in the name of the SSH key? Then when you're still using 2014.pub or 2013.pub you know it's time to upgrade.
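A sketch of that naming scheme (the year, comment, and temp-dir path are illustrative, and the empty passphrase is only to keep the demo non-interactive):

```shell
dir=$(mktemp -d)   # stand-in for ~/.ssh in this demo
# Bake the year into both the filename and the comment, so stale keys
# are obvious both in `ls ~/.ssh` and in remote authorized_keys files.
ssh-keygen -q -t ed25519 -a 100 -N "" -C "laptop-2016" -f "$dir/id_ed25519_2016"
ls "$dir"
```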

Nobody is going to brute-force my git keys, especially when it's so trivial to gain access to the repos via social engineering.

github doesn't support ed25519 keys does it?

Both GitLab and GitHub do in their SSH implementation.

Just tried it out; works fine.

Oh really? Last time I tried it I think it didn't.

I assume they added support for it in the meantime... Great!

Last I checked bitbucket.org didn't

I just tried. Bitbucket was working ok. What issues are you facing? I'm on a fedora 24 laptop.

In Userify (ssh key manager that only distributes sudo roles and public keys -- you keep your private keys[1]) we're going to be disallowing DSS keys soon.

I like this post - it's good advice overall. Keys are easy to handle and in some ways more secure than certificate management (which relies on extra unnecessary infrastructure).

1. https://userify.com

Off topic, but I want to thank you guys again for your service. You guys make it so freaking easy to manage access to our various servers and VMs- I can't even remember how we used to deal with it.

Thank you! I don't know who you are (feel free to email me @ userify.com) but we really really really love to hear stuff like that. We are launching in AWS Mktplace soon too btw!

> the need to generate fresh ones to protect your privates much better

Um, I'm pretty sure he meant privacy, not "privates." Time for an edit.

very good post about security!

Many people are still using RSA/DSA keys :/ and some people are doing even worse things. Last week I saw a man who had shared his private key by email!

QWERTY people have to grow up!

I wish this whole SSH business would be less complicated...

The defaults are actually fine; don’t worry. You don’t need to set a passphrase for your key, you don’t need to run an agent. Just

    ssh-keygen -t ed25519
and paste your public key into ~/.ssh/authorized_keys on the server. `-t ed25519` is optional.
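For the authorized_keys step, `ssh-copy-id user@host` automates the paste. What it does under the hood, sketched locally with a throwaway key and a temp dir standing in for the remote ~/.ssh (paths and permissions per the usual OpenSSH requirements):

```shell
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$dir/id_ed25519"   # disposable demo key
install -d -m 700 "$dir/remote"                        # "remote" ~/.ssh
cat "$dir/id_ed25519.pub" >> "$dir/remote/authorized_keys"
chmod 600 "$dir/remote/authorized_keys"                # sshd is picky about modes
```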

> You don’t need to set a passphrase for your key

That's highly questionable advice. The passphrase is like two factor auth, it protects you if your private key ends up in someone else's hands. That's a risk you have to weigh up.
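And if you skipped the passphrase at creation time, you can add (or rotate) one later with `ssh-keygen -p` without regenerating the key. A throwaway key stands in for your real ~/.ssh/id_ed25519 here, and passphrases are passed on the command line only for the demo (interactively you'd omit `-P`/`-N` and be prompted):

```shell
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$dir/id_ed25519"   # key with no passphrase
# -p changes the passphrase in place; -a sets the KDF rounds that slow
# down offline brute-forcing of the encrypted key file.
ssh-keygen -p -a 100 -P "" -N "correct horse battery staple" -f "$dir/id_ed25519"
```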
