
Myths about /dev/urandom (2014) - labguy
https://www.2uo.de/myths-about-urandom/
======
dang
2018:
[https://news.ycombinator.com/item?id=17779657](https://news.ycombinator.com/item?id=17779657)

2017:
[https://news.ycombinator.com/item?id=13332741](https://news.ycombinator.com/item?id=13332741)

2015:
[https://news.ycombinator.com/item?id=10149019](https://news.ycombinator.com/item?id=10149019)

2014:
[https://news.ycombinator.com/item?id=7359992](https://news.ycombinator.com/item?id=7359992)

(Links for the curious. Reposts are ok after a year:
[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html))

~~~
labguy
Thanks for the advice! I will check the search first next time, and also put
the year in the title if the article isn't recent.

~~~
dang
Just to be clear, it's fine that you posted this! If checking search would
have prevented you, that wouldn't be good. But you could always check search
to find previous links to include for curiosity's sake (that's what I did),
and to make sure that it hasn't had a significant discussion within the last
year.

------
a1369209993
> About 256 bits of entropy are enough to get computationally secure numbers
> for a long, long time.

This isn't quite false, but it's deeply wrong; 256 bits is enough to produce
256-bit-secure random numbers literally _forever_ - the "long, long time" is
precisely however long it takes an attacker to brute-force or otherwise recover
a 256-bit RNG seed (aka private key).

> Look, I don't claim that injecting entropy is bad.

Note that injecting entropy less than 256 bits[0] at a time _is_ bad, in that
it wastes the entropy. If an attacker knows your RNG state (either due to
compromise or because you just booted), injecting 128 bits, generating (≥128
bits of) output, and injecting another 128 bits, only gets you 128-bit
security. If you inject in smaller chunks, say 32 bits (generating ≥32 bits of
output in between), you might as well not be adding entropy at all.

Note, apropos of the above, that /dev/urandom _must_ block during early boot
(before X bits of entropy have been collected). If it doesn't, you have _no_
entropy, because an attacker can repeatedly brute-force your (say) <32-bit RNG
state based on /dev/urandom outputs.

0: or whatever security level you're aiming at
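A toy sketch of the brute-force point (my own illustration with a hypothetical hash-based pool, not kernel code; 16-bit chunks instead of 32 so the search finishes instantly): an attacker who knows the starting state and sees output between small reseeds can recover each chunk separately, so two 16-bit injections cost only 2·2^16 guesses, not 2^32.

```python
import hashlib

def ratchet(state: bytes, fresh: bytes) -> bytes:
    # Toy reseed: hash the fresh entropy into the state (not a real design).
    return hashlib.sha256(state + fresh).digest()

def output(state: bytes) -> bytes:
    # Toy PRNG output derived from the current state.
    return hashlib.sha256(b"out" + state).digest()

# Pool starts in a known state (e.g. just after boot) and is reseeded
# twice with 16-bit chunks; the attacker observes output after each.
state = bytes(32)
chunks = [(1234).to_bytes(2, "big"), (4321).to_bytes(2, "big")]
outputs = []
for c in chunks:
    state = ratchet(state, c)
    outputs.append(output(state))

# Attacker: brute-force each 16-bit chunk in turn. The intermediate
# output pins down each chunk before the next one is injected.
guess_state = bytes(32)
for obs in outputs:
    for c in range(2**16):
        cand = ratchet(guess_state, c.to_bytes(2, "big"))
        if output(cand) == obs:
            guess_state = cand
            break

assert guess_state == state  # full state recovered chunk by chunk
```

Injecting both chunks before generating any output would have forced the attacker to search the combined 32-bit space at once.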

~~~
wahern
> If you inject in smaller chunks, say 32 bits, you might as well not be
> adding entropy at all.

Depends on how quickly state bits leak. If the state only leaks 2 bits per
minute, then adding 32 bits every minute is perfectly sufficient. That's why,
even though the actual leakage rate is >0 bits for any real-world CSPRNG, it
can still be perfectly acceptable (and even desirable, depending on threat
profile) to add entropy at a rate of 0 bits.

> /dev/urandom must block during early boot

Assuming CPU jitter is a source of entropy[1], you can harvest 256 bits in a
fixed amount of time. So it depends on the definition of "block". Does waiting
an extra 2 seconds to boot constitute blocking if boot time already takes 1-2
seconds? On Linux I regularly see heavily loaded systems hang random processes
for _minutes_ just evicting VM pages (this is without swap, which is actually
probably part of the problem), so I'm not sure what's so special about boot
time in terms of how we define blocking operations. Faster is always better,
of course.

[1] I'm dubious, but at least some kernel hackers think so. And Linus'
personal opinions notwithstanding, he did add code that as a practical matter
makes this assumption--because downstream users will never question it.

~~~
a1369209993
> Depends on how quickly state bits leak. If the state only leaks 2 bits per
> minute

Fixed, but I'm assuming you're actually _using_ the RNG; if you only generate
2 bits of output per minute, you can get away with all kinds of dumb crap.

> So it depends on the definition of "block".

It's the same definition as for any other read(2) syscall. Assuming a 32-byte
read from /dev/random[0], it's equivalent to reading from a pipe with at least
32 bytes buffered if the RNG is already initialized (read returns immediately);
if the RNG is not initialized, it's the same as reading from an empty pipe: the
kernel switches to some other process after making a note to resume the
calling process when some event happens (write(2) for a pipe, RNG
initialization for /dev/random).

0: Or /dev/urandom, which should be literally the same character device,
because there are never any legitimate reasons for them to differ.
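Those semantics are easy to observe from userspace (a minimal sketch, assuming a Unix-like system where /dev/urandom exists and the pool is already initialized, as it is on any long-running machine):

```python
# A read(2) of 32 bytes from /dev/urandom on an initialized system
# behaves like reading from a pipe that already has bytes buffered:
# the full request is returned immediately, with no blocking.
with open("/dev/urandom", "rb") as f:
    buf = f.read(32)

assert len(buf) == 32
```

Only during early boot, before the pool is initialized, would the read(2) park the caller the way an empty pipe does.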

~~~
wahern
> if you only generate 2 bits of output per minute, you can get away with all
> kinds of dumb crap.

If outputting 2 bits leaks 2 bits of state, you're probably not using a
cryptographically strong PRF.

~~~
benchaney
You are misunderstanding the issue. If you only have two bits of entropy, then
outputting two bits leaks the entire state, because two bits is brute-
forceable. This is true regardless of the quality of your RNG. The same is
true of 32 bits, because 32 bits is also brute-forceable. The reason to insert
entropy in large chunks is so that an attacker cannot brute-force the RNG
state between reseeds.

------
Tomte
(Author)

Keep in mind that Linux 5.6 is imminent, and I haven't found time to
investigate it beyond some short reporting in LWN, let alone update the
article.

But it's still my intention to do that, with some restructuring along the way.

The more changes Linux makes (for the better!) and the more time passes and
later versions are in common use, the less the format "tell everyone how bad
it is and then backpedal with sections about every improvement made" works,
and the less the title itself stays relevant.

I don't have a good answer for that structural problem, yet, so I'm kind of
procrastinating.

~~~
amluto
It’s really quite different. I got rid of all but the block-once-at-boot
behavior of /dev/random.

If you know you are targeting a new kernel, using /dev/random is reasonable.
Better yet: use getrandom()/getentropy().
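From a high-level language the same advice applies; for instance, in Python (os.getrandom is exposed on Linux 3.17+ only, so the portable call is os.urandom, which uses getrandom(2) under the hood on modern Linux):

```python
import os

# os.urandom() is the portable interface; on modern Linux it is backed
# by the getrandom(2) syscall, which blocks only until the pool is
# initialized once at boot, and never again after that.
key = os.urandom(32)   # 256-bit symmetric key
assert len(key) == 32

# The raw syscall is also exposed directly, but only on Linux:
if hasattr(os, "getrandom"):
    assert len(os.getrandom(32)) == 32
```

Either call avoids the file-descriptor and chroot pitfalls of opening /dev/urandom by path.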

~~~
beefhash
On that note, would it kill the OpenBSD guys to get on the getrandom(2) train?
I get it, they're dying on the “arc4random is better” hill (and, yes, I
agree), but they're literally the last people in the way of making
getrandom(2) the new default across not only the usual BSDs and illumos, but
_also_ glibc/Linux.

~~~
tenebrisalietum
Doesn't `getrandom(2)` use `RDRAND`? Should hardware RNGs on ME- or
PSP-enabled CPUs inherently be trusted? I mean, don't the BSD folks recommend
not enabling hyperthreading due to Spectre, etc.?

~~~
loeg
> Doesn't `getrandom(2)` use `RDRAND`?

Not by itself, no.

> Should hardware RNGs on ME- or PSP-enabled CPUs inherently be trusted?

Don't really have any choice. You can't defend against them.

------
als0
Title should be lowercase to match

------
baby
Shameless plug: I have a more modern explanation of all of that:
[https://livebook.manning.com/book/real-world-cryptography/ch...](https://livebook.manning.com/book/real-world-cryptography/chapter-8/v-5/)

------
awinter-py
> And while appeal to authority is usually nothing to be proud of, in
> cryptographic issues you're generally right to be careful and try to get the
> opinion of a domain expert.

reminds me that cryptography is an area where subspecialty expertise has been
in the professional consciousness forever

------
saagarjha
Aside: did this get some new CSS? I recall reading it a while back and it
looked quite different…

~~~
Tomte
Yes, I'm playing around more than I should, but I'm changing things every now
and then. At least I keep the URL stable. :-)

------
kazinator
> _About 256 bits of entropy are enough to get computationally secure numbers
> for a long, long time._

That's just word semantics that depends on how you define "computationally
secure".

256 bits of entropy is good enough to get 1 random bit, 256 times. That's the
hard fact.

For instance, 256 bits of entropy is not enough to generate more than one
256-bit key for a symmetric cipher. If you generate ten keys from it, you're
allocating only 25.6 bits of entropy to each one.

