
Openssl: uses only 32 bytes (256 bit) for key generation - joeyh
http://bugs.debian.org/742145
======
tptacek
An RSA key isn't 4096 random bytes.

OpenSSL doesn't always retrieve all its random bytes from urandom; it seeds
its own local PRNG with urandom and then produces random bytes from that
internal state.

There are no simple answers in public key crypto: the idea that you only need
256 bits of random data can blow your head off if you use something other than
RSA. For instance, if you fill only some of the bits of a DSA k value,
attackers can extract your private key from a series of public signatures.

Some of the comments on that bug thread are batty, though. The OpenSSL CLI
tool doesn't implement its own randomness or key generation; it's a CLI
wrapper around the core OpenSSL functions. The CLI might be for "debugging
purposes" (but obviously not really, since most instructions for generating
SSL certificates involve using that CLI), but the core routines surely aren't.

~~~
noclip
There's also a possibly-not-completely-wrong table on that _other_ TLS
library's website ([http://www.gnutls.org/manual/gnutls.html#Selecting-cryptographic-key-sizes](http://www.gnutls.org/manual/gnutls.html#Selecting-cryptographic-key-sizes))
that takes a stab at estimating the relative security levels of
(correctly-used) public key algorithms.

~~~
2bluesc
[http://www.keylength.com](http://www.keylength.com) also estimates the
relative security from various sources.

------
rdl
RSA keys aren't just random strings (unlike symmetric keys, which are short
random strings, generally 128 or 256 bits now).

You start with two random primes. Generally, to generate these you pick random
candidates of the right size, test the odd ones by trial division against
small primes, then run a probabilistic primality test on the survivors. This
doesn't need 4096 bits of randomness.

Once you have the primes, you don't need any more random numbers to generate
the key; it's purely deterministic calculation at that point.
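
The procedure rdl describes can be sketched in Python (illustrative only; the sieve list, key size, and Miller-Rabin round count are arbitrary choices, not what OpenSSL actually uses, and `random` stands in for a proper CSPRNG):

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def miller_rabin(n, rounds=40):
    """Probabilistic primality test; composites fail with overwhelming probability."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        # Pick an odd candidate of exactly the right size...
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        # ...cheaply sieve out multiples of small primes...
        if any(n % p == 0 for p in SMALL_PRIMES):
            continue
        # ...and run the expensive probabilistic test on the survivors.
        if miller_rabin(n):
            return n

# Two random primes are all the randomness an RSA key needs; the rest
# (n = p*q, d = e^-1 mod lcm(p-1, q-1)) is deterministic from there.
p, q = random_prime(512), random_prime(512)
```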

~~~
vilda
To add: you test not only for primality, but also for resistance to common
attacks. Not all primes are equally strong:
[http://www.uow.edu.au/~jennie/WEB/WEB99/1999_07.pdf](http://www.uow.edu.au/~jennie/WEB/WEB99/1999_07.pdf)

~~~
pbsd
For primes of current acceptable size (>= 1024 bits), you might as well drop
those extra checks. The elliptic curve method gives you around 2^512 attempts
at doing the exact same thing, each one with the same probability of having a
smooth p-1 or p+1. The paper you link to reaches essentially the same
conclusion.

------
Tinned_Tuna
This thread seems to be generating a lot of fuss, so let me weigh in quickly.

This seems to be expected behaviour for generating RSA keys. An RSA key of
_length_ 4096b does not provide you with a security level of 4096b. That is,
you don't have to straight-up guess the key; you attack it via factorisation.

The issue arises from the fact that entropy and key length are both commonly
measured in bits. The entropy of an RSA key is much lower than the key length.
256b of entropy seems more than reasonable at first glance.

Normally I would not defend OpenSSL, but I will do so here, having not even
looked at the responsible code (nor has the submitter of this bug), nor at the
mailing list link.

Seriously, do not panic, let the cryptographers look at this and decide if
it's really an issue. It's probably good that they're getting 256b of entropy
from urandom, it's likely that they're seeding a CSPRNG for prime generation &
testing.

I would wager some small sum on this being closed by the end of the week, and
us looking back on this and shaking our heads at the uninformed knee-jerk
reaction of people in here:

"If I ask for a 4096-bit key, I should get one, or an error message. I
shouldn't get a 256-bit key that looks like a 4096-bit key." -- taejo, 15
minutes ago

------
FiloSottile
What is really troubling is not the amount of entropy; 256 bits is fine for
everything short of paranoia. What is troubling, and reflects the project's
sad state and detachment from reality, is this:

>Historically, the OpenSSL command line tools have been intended for debugging
only.

~~~
dekz
Apart from genrsa, what exactly have you used the openssl cli for in the past?

My answer would be exactly what you quoted: the openssl cli tools are quite
horrendous to use, and you certainly wouldn't use them if you were a CA. That
said, if you are a CA or deal with certificates, openssl does provide sweet
inspection and dumping tools for asn1 and certs.

If you have an application which generates keys or certificates, why would you
shell out to the openssl cli when your language of choice has a crypto
implementation, or bindings to OpenSSL?

So I ask openly, run `history | grep openssl` and see. Even running `openssl
help` is daunting unless you're familiar with most symmetric block modes.

~~~
pjungwir
I use the openssl cli every time I generate a CSR or a self-signed
certificate. If you Google "generate csr" you'll find lots of sites, including
companies that issue SSL certificates, instructing you to use the openssl cli.

------
fleitz
So what?

The 32 bytes it gets go back into a CSPRNG similar to the one they came out
of.

So instead of requesting 4096 bits from a CSPRNG, it requests 256 bits from a
CSPRNG and uses that to initialize another CSPRNG which it reads 4096 bits
from. Cryptographically speaking it's the exact same thing.

The question to be asking is whether the output from either CSPRNG is
predictable.
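
The expand-a-seed idea can be illustrated with a toy hash-counter DRBG (a sketch only; OpenSSL's actual PRNG is a different construction):

```python
import hashlib
import os

class ToyDRBG:
    """Toy counter-mode hash DRBG -- for illustration, not production."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Each output block is a hash of the seed plus a counter.
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:n]

# Seed with 256 bits from the OS CSPRNG, then expand to 4096 bits.
drbg = ToyDRBG(os.urandom(32))
stream = drbg.read(512)  # 512 bytes = 4096 bits
# An attacker who can't guess the 256-bit seed can't predict the stream,
# even though the stream is far longer than the seed.
```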

~~~
vilda
Cryptographically speaking you are completely wrong :) You need both: a good
algorithm and enough entropy. Ad absurdum, if you read only one byte and feed
it into your CSPRNG, you can get at most 256 distinct streams of 4096 bits,
which is easily enumerable and hardly secure.
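
The enumeration point is easy to demonstrate with a toy keystream (the construction here is invented for illustration):

```python
import hashlib

def toy_stream(seed: bytes, n: int) -> bytes:
    # Toy keystream: iterated hashing of the seed (illustrative only).
    out, state = b"", seed
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n]

# With a 1-byte seed there are only 256 possible output streams, so an
# attacker can simply enumerate all of them.
candidates = {toy_stream(bytes([s]), 64) for s in range(256)}
print(len(candidates))  # 256 -- a trivially searchable keyspace
# A 32-byte (256-bit) seed makes the same search take 2**256 attempts.
```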

~~~
tptacek
I don't know what you're trying to say here, but feeding 256 bits to a CSPRNG
and then pulling 4096 bits of output from that CSPRNG is not
"cryptographically completely wrong".

------
diminoten
> Historically, the OpenSSL command line tools have been intended for
> debugging only.

That very much surprises me. Can someone explain, elaborate, or source this
idea?

~~~
dekz
Have you used the openssl cli tools before?

~~~
diminoten
Haha, yes, extensively. Should I not have?

~~~
dekz
I wouldn't say so; you can probably use the tool to your heart's content.

Obviously the tools are useful for debugging purposes, testing tls
connections, dumping cert information and asn1.

I've now been made aware that tools wrap the openssl cli instead of using its
programmatic API.

I once tried to use the ca functions of the tool, found the whole thing
entirely too cumbersome, and wrote my own using libopenssl.

What do you use it for extensively if I might ask?

~~~
bashinator
I personally use it for generating CSRs and private keys.

------
bashinator
>Florian Weimer dixit:

>Historically, the OpenSSL command line tools have been intended for debugging
only.

This seems rather out-of-touch with reality.

------
indutny
It seems that this is intentional:
[https://groups.google.com/forum/#!topic/mailing.openssl.dev/fpnieXRtEYo](https://groups.google.com/forum/#!topic/mailing.openssl.dev/fpnieXRtEYo)

------
AaronFriel
Also - isn't 256 bits (32 bytes) enough entropy? Is the post about a lack of
bytes or bits? Can anyone weigh in?

~~~
taejo
If I ask for a 4096-bit key, I should get one, or an error message. I
shouldn't get a 256-bit key that looks like a 4096-bit key.

~~~
Ihmahr
You don't know what you are saying. I suppose you mean RSA-4096, which is
about prime numbers. The 256 bits are just used as a seed to find large
primes. 256 bits of entropy is more than enough to be secure, and a 4096-bit
RSA key never provides as much security as 256 bits of entropy.

~~~
bodyfour
Even 64 bits would be sufficient as long as the entropy is good. It just needs
to be large enough that bruteforcing 2^N isn't feasible. 256 seems like a
perfectly safe amount to me.

~~~
ryan-c
64 bits isn't enough to provide brute force protection, you need at least 80
bits, preferably 128 bits, and 256 bits if you're paranoid.

~~~
bodyfour
Yeah I should have elaborated a little more. I was in a rush. I wasn't trying
to endorse 64 bits, but think for a second about what it would take to attack
weak entropy here. For every possible entropy state:

    1. Put the random number generator in that state
    2. Use that randomness to build an RSA key pair
    3. Do an RSA decryption of the captured key exchange using this key
    4. See if the symmetric key we get decrypts into sensible plaintext

The thing to note is that steps 2 and 3 are both expensive -- especially step
2. This isn't nearly as easy as scanning a 64-bit key on a symmetric cipher!
So suppose for a second that you've got a huge data center full of machines at
your disposal and you can do a billion tests per second -- a full sweep of the
64-bit space would still take 584 years. So you better hope you'd get lucky.
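
The 584-year figure is easy to sanity-check:

```python
# Sweep a 64-bit entropy space at one billion candidate tests per second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

tests = 2 ** 64          # total entropy states to try
rate = 1e9               # tests per second
years = tests / rate / SECONDS_PER_YEAR
print(int(years))        # 584
```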

Now would that be enough protection? Of course not -- computers get faster,
and a determined enough adversary could build special hardware. (Although,
again, it would be orders of magnitude more complicated than just SHA-512
bruteforcing)

The point I was trying to make is that even _if_ they were somehow only using
64 bits of entropy, a practical attack would still be difficult to mount. I'd
say that each test would take at least 2^16 times more computation than a
typical symmetric cipher check, so it would be about as hard as brute-forcing
an 80-bit symmetric key.

In other words, 256 is _way_ more than plenty.

------
joeyh
I don't feel this belongs on the front page of HN. I was pretty worried when I
posted it though.

The remaining questions are

1. Is openssl(1) really only intended for "debugging purposes"?

2. [http://www.philandstuff.com/2013/03/14/why-does-gpg-need-so-much-entropy.html](http://www.philandstuff.com/2013/03/14/why-does-gpg-need-so-much-entropy.html)

------
AaronFriel
I see several people (even in this thread, which you'd expect better of) still
expressing incoherent, inconsistent-with-reality beliefs about /dev/random and
/dev/urandom.

This quote in particular struck me as very strange.

> From: Florian Weimer <fw@deneb.enyo.de>

> To: Thorsten Glaser <tg@mirbsd.de>

> Cc: 742145@bugs.debian.org

> Subject: Re: openssl: uses only 32 bytes (256 bit) for key generation

> Date: Wed, 19 Mar 2014 21:33:10 +0100

>

> * Thorsten Glaser:

>

> >>Historically, the OpenSSL command line tools have been intended for

> >>debugging only.

> >

> > I disagree,

>

> It's what I was told by the OpenSSL developers.

>

> > Also, what do other tools (that do not invoke openssl(1)

> > unlike most of these I saw, which were shell wrappers

> > around it) do, entropy-wise?

>

> There are different choices. Some use more bits from /dev/urandom,

> some even block on /dev/random. The latter is quite problematic for

> non-interactive key generation during package installation.

1. I doubt most actually block on /dev/random, but why? /dev/urandom should be
Good Enough For Everyone, except in very narrow circumstances. So why isn't
someone with a firm grasp of crypto setting a safe default? Why are
implementers and consumers of these components making their own choices about
entropy size ("some use more bits") and underlying sources ("some even
block...")? This is ridiculous; I don't trust the average developer to make
safe choices in this respect. Why is it okay? Either OpenSSL has safe
defaults, or it doesn't. Leaving it up to consumers to get it right is giving
up, and leads me to believe it's done unsafely by default.

2. The inconsistency alluded to by referring to non-interactive key generation
and "some even block on" startles me. I don't know why developers think
blocking on key generation is generally safer, but if they do, why are other
(perhaps the same?) developers sacrificing that safety when generating keys on
package installation for the sake of user experience? If /dev/random is safer,
it should be used; if not, it shouldn't be (because of the blocking issue).
The lack of guidance from crypto-savvy developers is deeply concerning.

~~~
fleitz
/dev/random is not safer. It just blocks.

Numbers can't be more or less random, either both are unsafe, or both work.
Both use exactly the same algorithms (on Linux) to generate numbers, and the
numbers come from the same pool.

More in-depth analysis: [http://www.2uo.de/myths-about-urandom/](http://www.2uo.de/myths-about-urandom/)

~~~
Spittie
This is probably a stupid question, as I don't know much about /dev/random and
/dev/urandom (and that article doesn't seems to address it fully), but I'll
ask anyway: Doesn't /dev/random block only when it estimates that the entropy
pool is empty, aka when it would create unsafe numbers? The article says that
the estimate is likely wrong and there is still enough entropy left, but what
if it's actually empty?

If this is true, then /dev/urandom might spit out unsafe bits (as unlikely as
that is), which to me sounds like a terrible idea when creating a key or
similarly important secrets.

~~~
tjgq
You should read the article to answer your own question in depth. The short
version is: on Linux, both /dev/random and /dev/urandom are fed from the same
CSPRNG. A CSPRNG only needs to be _seeded_ with a few bits of entropy to
generate a whole stream of unpredictable numbers. After the CSPRNG is seeded
(at system boot) it no longer matters that you run out of entropy.

Disclaimer: I am not a cryptographer.

~~~
Spittie
I have (actually, twice: once when it was posted some time ago on HN, and once
just now before writing this comment).

What threw me off was that part: "Still, if you insist on never handing out
random numbers that are not “backed” by sufficient entropy, you might be
nervous here. I'm sleeping sound because I don't care about the entropy
estimate."

I was missing the fact that the CSPRNG only needs to be seeded once to be
safe, and that reseeding is only a nice-to-have rather than a necessity. To be
fair, the article covers this; I guess I just didn't understand that part very
well.

I've got it now, and it actually makes sense. Thanks to you, and everyone else
who took the time to educate me.

------
0x0
Wow! Are we going to need a second set of "debian ssl" blacklists?!

~~~
mschuster91
Last time was SSH, not SSL. Still crypto, but different beasts.

~~~
AceJohnny2
He's referring to the 2008 Debian OpenSSL fiasco:

[https://www.schneier.com/blog/archives/2008/05/random_number_b.html](https://www.schneier.com/blog/archives/2008/05/random_number_b.html)

Edit: oh right, it led to the SSH blacklist. Sorry.

