/dev/random is too severe. It's basically designed to be an information-theoretic random source, which means you could use its output as a one-time pad even if your adversary were time-travelling deities with countless universes full of quantum computers at their disposal. It blocks reads whenever there aren't enough bits in the pool to satisfy that guarantee.
/dev/urandom is too loose. It's designed to be computationally secure, which means that you could use it if your adversaries were stuck in our universe and had to make their computers out of matter and power them with energy. However, it never blocks, even if there isn't enough entropy in the pool.
We just need a PRNG that will spit out numbers as long as, say, 256 random bits were added to the pool at some point. Once you have that many bits, just keep on spitting out numbers.
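For concreteness, here's a minimal sketch (mine, not from the thread) of that middle ground using the getrandom(2) wrapper from <sys/random.h> (glibc 2.25+): with flags=0 it blocks only until the kernel pool has been seeded once, and after that it never blocks.

    #include <sys/random.h>
    #include <stdio.h>

    int main(void) {
        unsigned char key[32];                      /* 256 bits */
        ssize_t n = getrandom(key, sizeof key, 0);  /* blocks only before first seeding */
        if (n != (ssize_t)sizeof key) {
            perror("getrandom");
            return 1;
        }
        printf("got %zd random bytes\n", n);
        return 0;
    }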
urandom is not too loose. There is exactly one Linux issue with urandom: it will service requests before the urandom pool is initialized. Linux distros work around this by trying to make sure urandom is initialized securely very early in the boot process.
Once urandom is initialized, the idea that it can ever "run out of entropy" is nonsensical; urandom is structurally the same as a stream cipher keystream generator, and we generally don't fret about whether AES-CTR keystreams will "run out of key". Nonetheless, the belief that urandom will sporadically "run out of entropy" is virulent.
The motivation behind the new system call has more to do with chroot environments, where the device might not be available at all. It's a good change; randomness should be provided by a system call and not a device. Unfortunately, by adding the flag, they've basically managed to add two system calls: getrandom and geturandom. >HEADDESK<
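To illustrate the "two system calls" complaint, a hedged sketch of the two flavors hiding behind the flag argument (flag names are from the getrandom(2) man page; on kernels of that era GRND_RANDOM drew from the blocking pool):

    #include <sys/random.h>

    /* The /dev/urandom-flavored call: blocks at most once, until first seeding. */
    ssize_t read_like_urandom(void *buf, size_t len) {
        return getrandom(buf, len, 0);
    }

    /* The /dev/random-flavored call: may also block whenever the kernel
     * decides entropy has been "depleted". */
    ssize_t read_like_random(void *buf, size_t len) {
        return getrandom(buf, len, GRND_RANDOM);
    }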
To add to tptacek's excellent comment, it's important to remember that some of the characteristics of /dev/random and /dev/urandom discussed here only apply to Linux, not to other operating systems such as Solaris:
>Once urandom is initialized, the idea that it can ever "run out of entropy" is nonsensical; urandom is structurally the same as a stream cipher keystream generator, and we generally don't fret about whether AES-CTR keystreams will "run out of key". Nonetheless, the belief that urandom will sporadically "run out of entropy" is virulent.
If the seed value for urandom were compromised or you were unsure of its provenance (some systems carry over from last boot), would you not be safer calling /dev/random for something sensitive like key generation? What if you did not trust the PRNG that urandom used?
But /dev/random has the additional constraint that it will stop outputting if that PRNG hasn't been reseeded in [some reasonable amount of time/output bits]. So you are guaranteed to get bits that have come from a recently-seeded PRNG, rather than any old PRNG. I have had this argument on here before and the conclusion was that it's not a real issue, but I think this is what the parent poster is talking about.
EDIT: Thanks for the clarification tptacek. I don't mean to disagree -- in fact it was you who explained this to me last time as well.
"Recently" seeded isn't a meaningful distinction. It has either been "seeded" or it hasn't. The recency of the seed --- a point Linux's interface worries a great deal about --- has nothing to do with security.
>The recency of the seed --- a point Linux's interface worries a great deal about --- has nothing to do with security.
Unless you are concerned about where the seed came from (e.g. not storage like /var/run/random-seed) or have any concerns that there is a flaw in the PRNG that could leak information.
> Once urandom is initialized, the idea that it can ever "run out of entropy" is nonsensical; urandom is structurally the same as a stream cipher keystream generator, and we generally don't fret about whether AES-CTR keystreams will "run out of key". Nonetheless, the belief that urandom will sporadically "run out of entropy" is virulent.
Something doesn't add up. Supposedly the reason for creating urandom was to avoid blocking - but at the same time all the docs warn about entropy depletion. Are you now saying this is meaningless? If so, then why ever use /dev/random?
Exactly. The warnings about "entropy depletion" are only relevant to theoretical attacks which require either impossibly large computational resources or a flaw in the PRNG algorithm. Since either assumption breaks all our crypto for other, unrelated reasons, we use /dev/urandom.
That, and if you use /dev/urandom before it's seeded, you expose yourself to real attacks.
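(A small sketch of mine, assuming the <sys/random.h> wrapper, of how a program can detect that state: with GRND_NONBLOCK, getrandom(2) fails with EAGAIN if the pool hasn't been initialized yet, instead of handing back bytes the way an early read of /dev/urandom does.)

    #include <sys/random.h>
    #include <errno.h>
    #include <stdio.h>

    int main(void) {
        unsigned char buf[16];
        if (getrandom(buf, sizeof buf, GRND_NONBLOCK) == (ssize_t)sizeof buf)
            printf("pool initialized; output is safe to use\n");
        else if (errno == EAGAIN)
            printf("pool not yet initialized; /dev/urandom would still answer\n");
        else
            perror("getrandom");
        return 0;
    }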
Excuse me for asking a stupid question; I am not too deep into Linux kernel randomness generation:
Why is /dev/urandom spitting out anything before it has acquired enough entropy for the initial seed? Wouldn't it be a good idea for it to initially block?
The contract when /dev/random and /dev/urandom came out was that urandom would never, ever block.
On a system with a recent Intel processor, there's an instruction (RDSEED) that uses an on-die hardware RNG. I'm not familiar with the standard Linux boot-up process, but it could in principle seed urandom using RDSEED arbitrarily early in the process. That should work on VMs too, unless the hypervisor is blocking access (I can't imagine a good reason for that).
VIA has had an on-die RNG for considerably longer, though it's accessed slightly differently. I don't believe AMD or ARM has anything similar.
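A rough sketch of what "seed from the CPU" could look like from userspace, via the _rdseed64_step intrinsic in immintrin.h (compile with -mrdseed on gcc/clang, and check CPUID before relying on it in real code; the retry loop is needed because RDSEED can transiently fail):

    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Returns 1 on success, 0 if RDSEED kept failing. */
    static int rdseed64(uint64_t *out) {
        for (int i = 0; i < 10; i++) {
            unsigned long long v;
            if (_rdseed64_step(&v)) {   /* sets v and returns 1 when a seed is ready */
                *out = v;
                return 1;
            }
        }
        return 0;
    }

    int main(void) {
        uint64_t seed;
        if (rdseed64(&seed))
            printf("rdseed: %016llx\n", (unsigned long long)seed);
        else
            printf("rdseed unavailable or transiently failing\n");
        return 0;
    }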
Given the lack of the "middle ground" introduced by this patch, you have (had) a choice between the failure mode of "block for a very long time" and the failure mode of "might generate easily predicted cryptographic keys under certain extremely rare circumstances". You use /dev/random if you prefer the first failure mode.
Edit: The whole situation didn't make sense from a system design point of view (unless you don't believe in CSPRNGs, but then you're basically screwed anyway), but given the unreasonable interface as a constraint, it's conceivable that somebody might have reasonably made the choice to use /dev/random.
> "might generate easily predicted cryptographic keys under certain extremely rare circumstances"
no, he's not saying that - he's saying that there's no such thing as entropy depletion, and so urandom is secure. Which makes me ask: after urandom was created, why EVER bother using /dev/random with its blocking flaw/deficiency?
There is no reason to use /dev/random other than cargo-culting by devs who believe that /dev/urandom can supposedly run out of entropy.
Use /dev/urandom. Don't use /dev/random. On sane systems (FreeBSD, for example), /dev/urandom is a symlink to /dev/random, and /dev/random only blocks once, at startup, to gather entropy; after that it never blocks!
There's no such thing as entropy depletion, but there is such a thing as an insufficiently seeded CSPRNG - which means that /dev/urandom is not secure by design: it does not protect you against that failure mode, and in fact people rely on fragile hacks implemented by distributions to try to seed /dev/urandom properly as soon as possible in the bootup sequence. These hacks could easily break if somebody does not know exactly what they're doing while touching the boot sequence.
/dev/random is also stupid, but it does protect you against that particular failure mode.
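For anyone who hasn't seen what these boot-time hacks look like, a minimal sketch (the seed-file path is illustrative): the seed saved at the previous shutdown gets written into /dev/urandom, which mixes it into the pool but, notably, does not credit any entropy (crediting requires the RNDADDENTROPY ioctl), which is part of why these arrangements are fragile.

    #include <stdio.h>

    int main(void) {
        unsigned char seed[512];
        FILE *in  = fopen("/var/lib/random-seed", "rb");  /* illustrative path */
        FILE *out = fopen("/dev/urandom", "wb");
        if (!in || !out) {
            perror("open");
            return 1;
        }
        size_t n = fread(seed, 1, sizeof seed, in);
        if (n == 0 || fwrite(seed, 1, n, out) != n) {
            fprintf(stderr, "failed to restore seed\n");
            return 1;
        }
        fclose(in);
        fclose(out);
        return 0;
    }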
He said there's no such thing as entropy depletion for urandom after it's been seeded. But the seed still has to come from somewhere, and one possible source for that seed is /dev/random.
/dev/random actually points to the same RNG as /dev/urandom, according to http://www.2uo.de/myths-about-urandom/ (and other sources I recall reading but can't find). So you wouldn't use /dev/random to seed /dev/urandom, but you (or the kernel) might use something else to seed both.
You make it sound like you are saying something new in your first couple paragraphs, but you aren't: the initial entropy gap issue seems to be exactly what klodolph was talking about. You hedge with "not really focused on you", but I'm going to say that you probably should then have just not posted most of your post, as even if none of it were related to klodolph (which isn't the case anyway, due to the quoting of "too loose") it is still written in such a way as to make people believe he made the mistake you are trying to be pedantic about :(. Just because many, even most, people make a particular mistake does not mean everyone does. Note, very carefully, the wording "at some point": it wasn't "until it runs out", it was "just keep on spitting out".

To some people, this is a real issue stemming from a completely unreasonable kernel default that the developers of libraries and applications have absolutely no control over, which occasionally comes up on computers built with custom distributions by non-experts under the assumption "Linux will do something sane", and which in a prior thread we found had burned actual Hacker News users who then didn't have good options for a fix, as it was a customer's computer.

I think if you were willing to be more open to the premise that not everyone is wrong in trivial, predictable ways, you'd be surprised by how often they aren't :/. Your last paragraph is great, and I'm really glad you contributed it, but the first two were unwarranted.
We are commenting on a thread about the Linux kernel random developers making exactly the mistake you think I should assume people won't so readily make.
I described in my comment how the comment by tptacek is not correct; I can state it again: it "corrects" klodolph's comment, and yet klodolph's comment was correct and did not deserve correction. tptacek tries to use this comment as an example, turning to the side to address the audience to voice this correction towards everyone with the "not really focused on you" hedge, but uses the "not too loose" quoting to make it clear that klodolph is still the example.
This particular way in which tptacek's comment is "mistaken" is related to the tone and attitude tptacek takes in his comments. If this were the only instance, even the only instance for this particular topic, it would be one thing, but this is a common issue with tptacek's comments. The pattern is that there are certain "common misconceptions" that everyone has, and tptacek is not generous enough to allow that the poster of any specific comment might not possess them.
I further believe that it is a serious problem on Hacker News that people do not address these kinds of tone issues: that it is perfectly fine to "HEADDESK", claim that certain beliefs are "virulent", and even to use the word "nonsensical" to describe someone's idea, and yet attempts to point out issues in the tone of people's comments are somehow a problem: something where you feel the need to say "don't make this conversation personal". More people need to stand up to this.
I, myself, have done some of these things in my own comments. I feel like most of these cases were situations where I was responding to someone else doing it to me, but I've found at least a few instances where that is not the case. It makes me very unhappy that I contributed to this problem: someone should have also complained about my tone in those instances. It needs to be ok to exit the topic and address how someone is saying something, not just what was said.
To be clear, tptacek's position is correct: we don't need a new interface; klodolph's argument largely ends up arguing for fixing /dev/urandom. However, tptacek doesn't say this, as he has assumed that klodolph doesn't understand the difference between /dev/urandom and /dev/random, and then argued based on that assumption. If you read the other comments from klodolph, it is very very clear that he understands perfectly. tptacek could at least apologize.
I don't think 'klodolph took offense. If he did, I would feel bad, and would certainly apologize. For now, I'm going to presume he read my comment in the spirit it was intended: that the virulent, nonsensical idea I was describing was not being attributed directly to him, hence the disclaimer at the top of the comment.
(If you re-read my comment, you'll also find that the >HEADDESK< isn't addressed to 'klodolph at all, but to the designers of the Linux randomness system call).
You could probably mail me further thoughts you have about my comments on HN, if you wanted to keep talking about it.
There have been a lot of papers about urandom being a broken pool. They need to adopt something like Fortuna or one of the more modern formalisms of Fortuna.
What those papers (or at least the ones I think you're talking about) deal with is something slightly more nuanced than what is being discussed here. They analyze how the pool recovers from a compromise, i.e., if you somehow manage to dump its entire state, how quickly you can gather entropy again to minimize the damage.
It turns out Fortuna and variants thereof score very well in this metric, but this does not have any bearing on the quality of the output, or whether it loses entropy by generating more bytes.
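For readers who haven't met Fortuna, a tiny sketch of the pool-scheduling rule that is behind that recovery property: incoming entropy events are spread over 32 pools, and at the r-th reseed pool i is drained only if 2^i divides r, so the higher pools accumulate entropy over long stretches that an attacker who keeps dumping state can't keep up with.

    #include <stdio.h>

    /* Fortuna's rule: pool i participates in reseed r when 2^i divides r. */
    int pool_used_in_reseed(unsigned pool, unsigned long reseed_count) {
        return (reseed_count % (1UL << pool)) == 0;
    }

    int main(void) {
        for (unsigned long r = 1; r <= 8; r++) {
            printf("reseed %lu drains pools:", r);
            for (unsigned i = 0; i < 32; i++)
                if (pool_used_in_reseed(i, r))
                    printf(" %u", i);
            printf("\n");
        }
        return 0;
    }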
The Linux urandom pool is managed almost identically to the random pool. The fact that random is "safe" (if blocking makes you safe) immediately after cold boot is actually just a side effect.