There is absolutely no reason to pass arguments to ssh-keygen. If it is actually deemed necessary to do so, then that package's installation is inexcusably broken.
I just throw a script in my ~/bin folder called `keygen`:
exec ssh-keygen -t rsa -b 4096 -C "" "$@"
Admittedly this isn't an issue for most people.
I strip comments from my SSH keys too for the same reason you do. But most people don't seem to care and there is a reason for the default.
Not a big deal but it does reveal my ISP and state.
https://github.com/Lattyware.keys - GitHub leaves it blank:
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL6WfINFFvzT+Z+l5sYq9zJoyXPLL27v9vvE1+p1XOiW Gareth Latty (gitlab.com)
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGAx5F7iJTDwbPrdhrTtVdQRtozcRDvGNuU7BB+4+mHi Gareth Latty (gitlab.com)
Today it's my hostname. Tomorrow ssh-keygen (with default arguments) could start including more sensitive information, like the IPv6 address of my machine that's open to the internet and its exact OS version, and leave me open to a lot of attacks. Hence, I'm not going to trust the default arguments for it.
Second, if you don't trust ssh-keygen, why the heck do you use it in the first place?
alias keygen="ssh-keygen -t rsa -b 4096 -C ''"
This is the case with the secure-delete package for srm and sfill. Each argument changes the writes to make them less secure.
I, too, say: no f'ing thanks.
In cryptography, Curve25519 is an elliptic curve offering 128 bits of security. The curve is birationally equivalent to Ed25519, a twisted Edwards curve.
ECRYPT II, the EU project on encryption security, says:
That gives you
Very short-term protection against small organizations
Should not be used for confidentiality in new systems
and is equivalent to an 816-bit RSA key.
So how is this new key "sexy" in any way?
> Long-term protection
> Generic application-independent recommendation, protection from 2016 to 2040
Curve25519 is an asymmetric key scheme using elliptic curves.
> High security level. This system has a 2^128 security target; breaking it has similar difficulty to breaking NIST P-256, RSA with ≈ 3000-bit keys, strong 128-bit block ciphers, etc. (The same techniques would also produce speed improvements at other security levels.) The best attacks known actually cost more than 2^140 bit operations on average, and degrade quadratically in success probability as the number of bit operations drops.
which takes our comp sci lab 35 seconds on a cluster of 8 machines.
so still no.
Except for the ones recommending 512-bit ECs.
Perhaps you can find the parameters for a 512-bit, non-NIST EC and we can both be happy?
Otherwise I'll stick with 8000- and 15424-bit RSA, thanks.
Plus, we KNOW they were routinely breaking 64-bit effective strength since the 90s; 30 years on we need a lot more than twice the strength, even against half-decent hackers, but changing the backbone is considered "too expensive".
So that isn't ME saying the US doesn't have anything valuable enough to protect. It's the US standards agencies.
That's a lot closer to 2 than 10 million million million.
Whoever taught you that integer factorisation and the DLP are order c^n was either lying to you or had been lied to.
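Indeed, the best known classical attack is subexponential, not exponential: the General Number Field Sieve runs in L_n[1/3] time. A quick sketch of that standard heuristic (the o(1) term is ignored, so the numbers are only ballpark, but they match the commonly quoted ~112-117-bit rating for RSA-2048):

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Approximate log2 cost of factoring an RSA modulus with the
    General Number Field Sieve: exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))
    with c = (64/9)^(1/3), ignoring the o(1) term."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    ln_cost = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_cost / math.log(2)

for bits in (1024, 2048, 3072, 4096):
    print(f"RSA-{bits}: roughly 2^{gnfs_bits(bits):.0f} operations")
```

Note how slowly the security level grows with key size: doubling the modulus from 2048 to 4096 bits adds far fewer than 2048 bits of security.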
But RSA isn't broken, it is well understood, is "boring" (usually a plus in security), has bigger bit sizes (according to people that know a lot more than me that's a plus point, regardless of EC requiring smaller ones, because of certain attacks), isn't hyped and sponsored by the NSA, and isn't considered a bad choice by experts.
Not too many years ago Bruce Schneier was skeptical about EC because of the NSA pushing for it. Now, I also trust djb, and I am sure that ed25519 is a good cipher; there are many projects, like Tor, that actually benefit from it (increased throughput, etc.), but for most use cases of SSH that might not be the issue, nor the bottleneck.
So from my naive, inexperienced point of view, RSA might seem the more conservative option. And if I were worried about security I'd increase the bit size.
Am I going wrong here?
A better comparison exists between multiplicative group crypto (DH/DSA) and their Elliptic Curve variants. In multiplicative group crypto there is a trivial, obvious mapping to the ring of integers (in less mathematical terms, "factorization makes sense"), so you can use techniques like Pohlig-Hellman (https://en.wikipedia.org/wiki/Pohlig%E2%80%93Hellman_algorit...) and Index Calculus (https://en.wikipedia.org/wiki/Index_calculus_algorithm). In Elliptic Curve groups we use, such a trivial mapping does not exist, although there are some special cases where you can achieve this (http://crypto.stackexchange.com/a/8344). Consequently, larger numbers are needed for security in these cases (along with strong primes).
The "bad press" RSA gets is likely due to the fact that in its however many years of existence there have been a number of attacks on implementations - for example, side-channel attacks, Bleichenbacher's attack on RSA PKCS#1 v1.5 padding, etc. Here's a survey: https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf . A simple explanation is that implementing RSA is a fun exercise for personal experimentation, but implementing RSA for use in the wild is fraught with difficulties.
Ed25519 has some nice properties aside from the size of keys and signatures. It is deterministic, removing the requirement for cryptographically random (and therefore almost unique) k - if you forget this with plain ECDSA you end up in the Sony PS3 scenario. The provided software does not rely on indexes or branch instructions, removing timing issues in those cases and making the code constant time. It has what Bernstein calls "twist security" (https://safecurves.cr.yp.to/twist.html). Ordinarily, an ECC algorithm should check that a calculated or received point, particularly where point compression is used, exists on the curve in question - however with DJB's "twist security" and in particular in ed25519 this is unnecessary, simplifying the job of the implementer.
In other words, it is harder to get an implementation of ed25519 wrong.
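The PS3 failure mentioned above is easy to reproduce with nothing but the ECDSA signature equation. A toy sketch (the group order is secp256k1's, but r is a made-up stand-in rather than a real curve point, since the key-recovery algebra never touches the curve itself):

```python
# Toy demonstration of the ECDSA nonce-reuse break (the Sony PS3 bug),
# using only the modular algebra of s = k^-1 * (h + r*d) mod n.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = 0x1234567890ABCDEF   # "secret" signing key
k = 0xCAFEBABE12345678   # nonce, wrongly reused for both signatures
r = pow(7, k, n)         # stand-in for the x-coordinate of k*G

def sign(h):
    return (h + r * d) * pow(k, -1, n) % n

h1, h2 = 1111, 2222      # two message hashes signed with the same k
s1, s2 = sign(h1), sign(h2)

# An attacker who sees both signatures recovers the nonce, then the key:
# s1 - s2 = k^-1 * (h1 - h2)  =>  k = (h1 - h2) / (s1 - s2)
k_rec = (h1 - h2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - h1) * pow(r, -1, n) % n
print(hex(d_rec))
```

Ed25519 derives k deterministically from the secret key and the message, so this failure mode simply cannot occur.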
As for the NSA pushing ECC, more intelligent people than me may have different views, but I believe that ECC in general is sound, taking into account the sum of our knowledge so far. As with any algorithm there are specific choices of curves and constants which weaken the setup, and we have a case study where the NSA pushed poor choices deliberately: https://en.wikipedia.org/wiki/Dual_EC_DRBG . It comes down, therefore, to whether you believe the constants chosen for the NIST curves are "cooked" or not. I can't see any evidence to support this, but that doesn't mean it hasn't happened. However, Curve25519, Ed448, Curve41417, FourQ and the Brainpool curves were all designed outside the NIST system and not endorsed by the NSA, and Bernstein includes in his security evaluations the concept of "rigidity", i.e. a full explanation of how curve parameters are generated (see https://safecurves.cr.yp.to/rigid.html) (although, apparently, there are some issues with Brainpool curve generation: https://bada55.cr.yp.to/brainpool.html).
If you take longevity as a measure of safety, on the general assumption that the longer something remains unbroken the safer it is, RSA certainly scores better (although ECC goes back to the 90s and Curve25519 to 2005, so they have been around a while too), and for a correct, well-vetted implementation of RSA I personally see no reason to stop using it, with the caveat that, as you've noticed, a high-throughput system gets equivalent security more efficiently with ed25519.
An interesting recommendation in the original article is to use smart cards for high assurance environments. I personally use an OpenPGP card (v2.1) and an authentication subkey on it acts as my SSH key. For this I use 4096-bit RSA keys - the OpenPGP v2.1 spec as implemented by this card does not support ECC at all, even though the actual hardware chip (http://www.basiccard.com/overview.htm) is capable of 512-bit ECC curves. The OpenPGP Card 3.0 standard includes ECC support but so far as I am aware no cards implementing it are available. The same is actually true in the JavaCard world; hardware support exists, but most JavaCard implementations you can buy are "limited" to 2048-bit RSA (If anyone knows differently I would love to know, please yell), which should be good enough for the moment but it would be nice to have some wiggle room.
So to summarise, the "hate" that RSA gets is likely due to the number of times people have seen mistakes in implementations. That said, the article makes a good point regarding the strength of brute forcing your key passphrase if you are not storing your keys on smartcards. If you are going to regenerate your key, I see no reason not to use ed25519 as you will likely be using it for host authentication anyway (modern remote hosts likely generate ed25519 keys by default, rather than RSA). I'd use it on my smartcards if I could, since there the timing difference for a signature would be noticeable. I have multiple ssh keys and those not stored on a smartcard are all ed25519. If you are considering embedded environments or high throughput ones, or many signature checks, ed25519 makes even more sense.
> but implementing RSA for use in the wild is fraught with difficulties
"So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you haven’t yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks."
Stick with the battle-tested RSA keys, which are susceptible, but not as much as ECC crypto. 4096- or, even better, 8192-bit lengths.
There are no perceptible user benefits to using ed25519, and it's not even supported everywhere. Also, you won't have to rotate all of your keys when workable quantum computers start crackin' everything.
If you see quantum computing as a practical threat (because: you're encrypting and storing static data that needs to resist cryptanalysis for 20-50 years), you need to use a post-quantum cryptosystem. Unfortunately: nobody knows which pq systems will truly hold up to practical quantum computing, and it's possible --- even likely --- that implementations of pq schemes have dumb errors that nobody has thought to check for yet.
That's why Google's first foray into deploying pq crypto feeds both a curve computation and a Ring-LWE PQ computation into its KDF, so that unknown bugs in the RLWE exchange can't destroy the security of TLS. That's how you would seriously account for quantum computing in SSH.
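The combining step is just careful key derivation: hash both shared secrets into one key, so the session stays secure unless both exchanges are broken. A minimal sketch (the HKDF-extract-style construction and all names here are illustrative, not Google's actual code):

```python
import hashlib
import hmac
import secrets

def hybrid_kdf(classical_ss: bytes, pq_ss: bytes, salt: bytes = b"") -> bytes:
    """Derive one session key from both shared secrets (HKDF-extract style).
    An attacker must break BOTH exchanges to learn the output."""
    return hmac.new(salt, classical_ss + pq_ss, hashlib.sha256).digest()

# Hypothetical stand-ins for the outputs of the two key exchanges:
x25519_ss = secrets.token_bytes(32)  # e.g. an X25519 shared secret
rlwe_ss = secrets.token_bytes(32)    # e.g. a Ring-LWE shared secret

session_key = hybrid_kdf(x25519_ss, rlwe_ss)
```

Even if the Ring-LWE implementation turns out to be subtly broken, the session key is still at least as hard to recover as the X25519 secret alone.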
The recommendations in this post are solid.
I discussed this with various people involved in the postquantum debate and the general feedback I got was that this is likely not a big issue, because once quantum computers can be scaled it'll probably not be that hard to scale them up to RSA-breaking size. I think DJB once said something like "this will buy you a year" to me.
No battle tested crypto schemes are post-quantum safe, and no post-quantum crypto schemes are battle tested.
"Don't use elliptical curve crypto because it's vulnerable to quantum attacks... keep using the stuff that is vulnerable to our in-house attacks"
You can combine this fact with zsh's autocomplete powers, and get pretty-close-to-instant (on a good connection) tab-completion of directories on the remote server, which is extremely nice when trying to scp something, as you can tab complete the paths in that command.
(It also saves a few PIDs on the server, as a single sshd child deals with all your connections.)
I found that recent versions of SSH have begun hashing `~/.ssh/known_hosts` which nerfed zsh autocomplete pretty badly. You need to set "HashKnownHosts no" in `~/.ssh/config`. The SSH change was made to prevent a key compromise from giving an attacker a ready-made list of vulnerable next targets.
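Putting both of these tips together, a sketch of the client-side options involved (standard OpenSSH `ssh_config` keywords; the socket path and timeout are just examples):

```
# ~/.ssh/config
Host *
    # Multiplex sessions over one master connection: near-instant
    # subsequent logins and fewer sshd children on the server.
    ControlMaster auto
    ControlPath ~/.ssh/control-%r@%h:%p
    ControlPersist 10m

    # Keep known_hosts readable so shell completion can parse hostnames.
    # Trade-off: a compromise of this file hands an attacker a target list.
    HashKnownHosts no
```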
(Doh... I don't know why I'm getting an invalid certificate authority error when trying to access that site, but Qualys SSL Labs confirms it's a real error. Yikes.)
Your web server needs to key and manage many connections/sec, and any delays feed back into my web developers further arguing against the use of TLS. You can generally expect no one will be trying to break your current TLS sessions ten years from now. If that doesn't apply to your situation, you probably have bigger problems, such as the broken CA model.
Both of these scenarios may be different with GPG.
It's probably not secure for a hundred years because of quantum computers. But none of the currently supported algorithms in OpenSSH protect against quantum computers, so there's nothing you can do about it right now.
The arguments for using a longer modulus all reduce down to "you'll have to eventually, why not now?" rather than any new threat against 2048-bit modulus keys.
Note that this _does_ cause issues sometimes, as it means your ssh-agent (you _are_ using one of those, right?) will have more keys than a typical auth attempt limit, and software that assumes trying all your keys before prompting for input is a sane thing to do will fail.
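The usual workaround is to stop the agent from offering every key it holds (standard OpenSSH options; the host and file names are hypothetical):

```
# ~/.ssh/config -- offer only the intended key to this host, instead of
# letting the agent try every key until the server's auth limit is hit
Host example.com
    IdentityFile ~/.ssh/id_ed25519_example
    IdentitiesOnly yes
```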
> RSA 2048: yellow recommended to change
was followed by at least some link that explains why it is unsafe. Could anyone elaborate further?
UPDATE: there is now a link up in this thread, from the user fjarlq, which points to an NSA recommendation from 2015. The recommendation seems to be related to the need for a "quantum computing resistant" key. But with quantum computing still in its infancy, how do we know which types of keys would be adequate?
That is a concern for the DH key establishment, though, which might be decrypted in the future.
If you're paranoid, configure your SSH server to only accept Curve25519-based key exchanges, only use AES with authenticated modes or CTR+ETM or chacha/poly1305, and only take ed25519 or long RSA authentication keys.
Assuming your clients are up to date it should work without any major impact. I also strongly recommend rejecting NIST "random" curves in your hostkey verification, better RSA or ed25519 than the current default of the somewhat questionable ECDSA-based keys.
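A sketch of what that hardening might look like in `sshd_config` (algorithm names as spelled by OpenSSH; verify each list against `man sshd_config` for your version before deploying):

```
# /etc/ssh/sshd_config
# Curve25519-based key exchange only
KexAlgorithms curve25519-sha256@libssh.org
# AEAD modes, or CTR paired with encrypt-then-MAC
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
# Host keys: ed25519 preferred, long RSA as fallback, no ECDSA
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
```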
If quantum computing becomes more accessible, there will be a quantum shift (forgive the pun) in how we secure our connections.
A quantum shift, or quantum leap, in common parlance borrows the discontinuity in energy levels in quantum physics as an analogy for a sudden change (compared to the continuous changes in classical descriptions of physical characteristics).
The size is not always part of the analogy except in that the "quantum shift" is far larger in all circumstances than the infinitesimal changes in classical systems. Relative to an infinitesimal the quantum is huge [analytically it is infinitely larger, but the metaphoric analogy doesn't stretch that far].
Did 1083 RSA 2048 signing operations in 1017532us (1064.3 ops/sec)
Did 29000 RSA 2048 verify operations in 1016092us (28540.7 ops/sec)
Did 1440 RSA 2048 (3 prime, e=3) signing operations in 1016334us (1416.9 ops/sec)
Did 50000 RSA 2048 (3 prime, e=3) verify operations in 1014778us (49271.9 ops/sec)
Did 152 RSA 4096 signing operations in 1000271us (152.0 ops/sec)
Did 8974 RSA 4096 verify operations in 1076287us (8337.9 ops/sec)
Did 6720 Ed25519 key generation operations in 1029483us (6527.5 ops/sec)
Did 6832 Ed25519 signing operations in 1058007us (6457.4 ops/sec)
Did 3120 Ed25519 verify operations in 1053982us (2960.2 ops/sec)
(also don't look at these numbers purely as speed, but as CPU time spent)
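Working the raw counts above back into rates makes the gap in CPU time concrete:

```python
# Recompute ops/sec from the raw benchmark counts quoted above.
rsa4096_sign = 152 / 1.000271    # "152 operations in 1000271us"
ed25519_sign = 6832 / 1.058007   # "6832 operations in 1058007us"

print(f"RSA-4096 signing: {rsa4096_sign:.1f} ops/sec")
print(f"Ed25519 signing:  {ed25519_sign:.1f} ops/sec")
print(f"Ed25519 signs ~{ed25519_sign / rsa4096_sign:.0f}x faster")
```

So each RSA-4096 signature costs roughly forty times the CPU of an Ed25519 one, which is why the difference matters for busy servers and embedded hardware even though a single interactive login never notices it.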
Here: Generally positive.
Netsec: Most upvoted comments being complaints and assertions that it's bad advice. And for some reason, upvoted comments about NSA involvement in curves (which is exactly what 25519 is not).
I don't know if it's just my point of view changing, or if it's gotten worse over the last few years, but when I see a thread there later, the top comments are always negative, dismissive, and so full of bullshit.
Just look at the recent Lenovo issue: HN had some good discussion on what could be the actual causes and how to fix some of these problems (and why MS isn't entirely off the hook there), but reddit was just full of "Microsoft loves Linux!" jokes and witch-hunting with very little to go on.
Thank you, I am sufficiently paranoid to change my keys now.
I haven't checked, but I presume this also goes for CentOS, Scientific Linux, and other derivatives.
Follow these instructions to update your keys.
sources.list (if you're on an older version of debian)
deb http://http.debian.net/debian wheezy-backports main
apt-get -t wheezy-backports install --reinstall ssh
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -a 256 < /dev/null
ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null
(do not password protect server side keys)
ssh-keygen -t ed25519 -a 256 -f yourkey.key -C whateveryouwant
I can see you're limiting it to particular algorithms/ciphers, but the rest?
And I think that was in the context of some DSA or ECDSA weakness, possibly a side channel attack or something similar. I forgot the details :(
What are your thoughts on this? Should we focus more on simplicity and robustness of the implementation, rather than just the strength of the algorithm itself?
Could someone provide a link with a decent explanation why? Is it solely out of fear that it will soon be cracked on a quantum computer?
I assume they added support for it in the meantime... Great!
I like this post - it's good advice overall. Keys are easy to handle and in some ways more secure than certificate management (which relies on extra unnecessary infrastructure).
Um, I'm pretty sure he meant privacy, not "privates." Time for an edit.
Many people are still using RSA/DSA keys :/
Some people are doing even worse things.
Last week I saw a man who had shared his private key by email!
QWERTY people have to grow up!
ssh-keygen -t ed25519
That's highly questionable advice. The passphrase is like two factor auth, it protects you if your private key ends up in someone else's hands. That's a risk you have to weigh up.