Thanks, it was a fascinating read.
This isn't quite false, but it's deeply wrong; 256 bits is enough to produce 256-bit-secure random numbers literally forever - the "long, long time" is precisely however long it takes an attacker to brute-force or otherwise recover a 256-bit RNG seed (aka private key).
> Look, I don't claim that injecting entropy is bad.
Note that injecting entropy less than 256 bits at a time is bad, in that it wastes the entropy. If an attacker knows your RNG state (either due to compromise or because you just booted), injecting 128 bits, generating (≥128 bits of) output, and injecting another 128 bits only gets you 128-bit security. If you inject in smaller chunks, say 32 bits (generating ≥32 bits of output in between), you might as well not be adding entropy at all.
Note, apropos of the above, that /dev/urandom must block during early boot (before X bits of entropy have been collected). If it doesn't, you have no entropy, because an attacker can repeatedly brute-force your (say) <32-bit RNG state based on /dev/urandom outputs.
0: or whatever security level you're aiming at
Depends on how quickly state bits leak. If the state only leaks 2 bits per minute, then adding 32 bits every minute is perfectly sufficient. That's why, even though the actual leakage rate is >0 bits for any real-world CSPRNG, it can still be perfectly acceptable (and even desirable, depending on threat profile) to add entropy at a rate of 0 bits.
> /dev/urandom must block during early boot
Assuming CPU jitter is a source of entropy, you can harvest 256 bits in a fixed amount of time. So it depends on the definition of "block". Does waiting an extra 2 seconds to boot constitute blocking if boot time already takes 1-2 seconds? On Linux I regularly see heavily loaded systems hang random processes for minutes just evicting VM pages (this is without swap, which is actually probably part of the problem), so I'm not sure what's so special about boot time in terms of how we define blocking operations. Faster is always better, of course.
 I'm dubious, but at least some kernel hackers think so. And Linus' personal opinions notwithstanding, he did add code that as a practical matter makes this assumption--because downstream users will never question it.
Fixed, but I'm assuming you're actually using the RNG; if you only generate 2 bits of output per minute, you can get away with all kinds of dumb crap.
> So it depends on the definition of "block".
It's the same definition as for any other read(2) syscall. Assuming a 32-byte read from /dev/random, it's equivalent to reading from a pipe with at least 32 bytes buffered if the RNG is already initialized (read returns immediately); if the RNG is not initialized, it's the same as reading from an empty pipe - the kernel switches to some other process after making a note to resume the calling process when some event happens (write(2) for a pipe, RNG initialization for /dev/random).
0: Or /dev/urandom, which should be literally the same character device, because there are never any legitimate reasons for them to differ.
If outputting 2 bits leaks 2 bits of state, you're probably not using a cryptographically strong PRF.
In general, an N-bit output will be inconsistent with 2^N - 1 of your 2^N possible internal states; the point of using ~256 bits of entropy is to ensure that it's computationally intractable to figure out which, and that only works if the attacker can't work out the first (second, etc.) 32 bits separately from the subsequent ones.
0: of entropy, mind, not data. If the attacker knows your 64-byte RNG buffer, and you inject 2 bits of entropy, there are only 2^2=4 new 64-byte blobs that your buffer could have evolved to, so you have 512 bits of data with only 2 bits of entropy.
1: for any algorithm that is; on a per-event basis, there's obviously a chance that you happen to produce output that's consistent with multiple possible internal states.
This isn't the type of information leak you should be concerned with. If your attacker has access to some segment of the output stream, they can detect whenever you update the RNG state, and brute-force the new state unless you are updating it in large enough chunks. 32 bits is certainly brute-forceable.
Keep in mind that Linux 5.6 is imminent, and I haven't found time to investigate it beyond some short reporting in LWN, let alone update the article.
But it's still my intention to do that, with some restructuring along the way.
The more changes Linux makes (for the better!), and the more time passes and later versions come into common use, the less the format "tell everyone how bad it is and then backpedal with sections about every improvement made" works, and the less the title itself stays relevant.
I don't have a good answer for that structural problem, yet, so I'm kind of procrastinating.
If you know you are targeting a new kernel, using /dev/random is reasonable. Better yet: use getrandom()/getentropy().
arc4random_buf(3) etc. also use getentropy(2).
Not by itself, no.
> Should hardware RNGs on ME or PSP enabled CPUs be inherently be trusted?
Don't really have any choice. You can't defend against them.
It's bad enough keeping up with the cruft dreamed up by POSIX; why add BSD functions that nobody uses outside of the BSD microcosm?
Glibc is monolithic; functions that nothing in your system uses are still sitting there in the .so image.
uint32_t arc4random_uniform(uint32_t upper_bound)
"At last, [...] Windows has such a mechanism."
reminds me that cryptography is an area where subspecialty expertise has been in the professional consciousness forever
That's just word semantics; it depends on how you define "computationally secure".
256 bits of entropy is good enough to get 1 random bit, 256 times. That's the hard fact.
For instance, 256 bits of entropy is not enough to generate more than one 256-bit key for a symmetric cipher. If you generate ten keys from it, you're allocating 25.6 bits of entropy to each one.