
Myths about /dev/urandom - Tomte
http://www.2uo.de/myths-about-urandom/
======
bredman
Could someone explain DJB's point to me:

    
    
      Cryptographers are certainly not responsible for this superstitious nonsense.
      Think about this for a moment: whoever wrote the /dev/random manual page seems to 
      simultaneously believe that
       
      (1) we can't figure out how to deterministically expand one 256-bit /dev/random
      output into an endless stream of unpredictable keys (this is what we need from 
      urandom), but
        
      (2) we _can_ figure out how to use a single key to safely encrypt many messages
      (this is what we need from SSL, PGP, etc.).
     
      For a cryptographer this doesn't even pass the laugh test.
    

Is the argument here simply that a properly encrypted message should look like
randomness? And as a result we should be able to turn something non-random
into randomness?

~~~
agwa
The same types of cryptographic algorithms used in SSL, PGP, etc. are also
used to securely expand a small bit of entropy into an endless stream of good
randomness. It's intellectually inconsistent to not trust these algorithms
when used in an RNG, but to trust them when used in SSL, PGP, etc. See the
contradiction with stressing the importance of using random, not urandom, when
generating SSL/PGP keys?
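The expansion agwa describes can be sketched in a few lines. This is a toy illustration of the principle only (hashing a seed with a counter), not what any real kernel does; Linux's pool at the time was built around SHA-1, and other systems use constructions based on ciphers like AES or ChaCha20:

```python
import hashlib

def expand(seed: bytes, nbytes: int) -> bytes:
    """Deterministically stretch one seed into an arbitrarily long
    stream by hashing seed || counter (a hash-in-counter-mode DRBG)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

seed = bytes(range(32))       # stand-in for 256 bits of real entropy
stream = expand(seed, 4096)   # as many "random" bytes as you like
```

If SHA-256 behaves like a random function, distinguishing this stream from true randomness is as hard as guessing the seed -- the same assumption SSL and PGP already rest on.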

------
throwaway092834
These aren't myths, they're platform guarantees. It just so happens that a few
of the most common unixes (Linux, BSD) implement a very good /dev/urandom and
the author is suggesting that we write non-portable software that depends on
implementation details of these platforms.

There can be benefits from depending on non-portable implementation details
but also significant drawbacks.

~~~
tptacek
No. Applications _on Linux_ should use urandom to the exclusion of any other
CSPRNG.

~~~
bostik
> _Applications on Linux should use urandom to the exclusion of any other
> CSPRNG_

Applications, yes. Appliances built on it - now that's more open to
interpretation.

Back in 2003/2004 I was building a centrally managed security appliance
system. At the time I made the hard choice that first-boot operations (such as
generating long-term device keys) MUST use /dev/random. It made the initial
installs take longer, but I refused to take the chance that an attacker could
install and instrument a few hundred nodes and find out possible problems with
entropy sources.

Once the first-boot sequence was over, applications used /dev/urandom for
everything. This included the ipsec daemons. Forcing everything to /dev/random
during first boot made sure that on subsequent boots there would be (for all
practical purposes) enough entropy available for urandom to work securely.

The first-boot problems were amplified by the fact that we were running our
nodes inside virtualization. (At the time: UML, and we built our own on top of
it. Xen wasn't nearly ready enough back then.)

It's fascinating to see that the problems we had to deal with 10 years ago are
now becoming an issue again. To this day I choose to use /dev/random if I need
to generate key material shortly after boot (which could be install), or for
my own long-term use. Good thing personal GPG keys have a shelf-life of
several years...

~~~
tptacek
If you're building an appliance, why wouldn't you simply ensure urandom is
seeded at first boot?

I'm sympathetic to people's concerns about generating long-term keys. But my
problem is, /dev/random isn't addressing the major risks there either. You
should generate long-term keys on _entirely separate hardware_.

------
shawnz
> Both /dev/urandom and /dev/random are using the exact same CSPRNG

The author keeps coming back to this fact, but I don't understand how it
negates any of the arguments in favour of using /dev/random. What does it
matter if the numbers are ultimately coming out of the same PRNG? I don't see
how this negates the "self-healing" property of /dev/random or any of the
other advantages. It seems like the author is really just saying,
"/dev/urandom is probably good enough".

~~~
Tomte
This is basically a response to the widespread misconception that /dev/random
has some mechanism whereby it generates "true" random numbers, while
/dev/urandom only uses some lowly PRNG.

The idea that a lot of people get from that is that urandom is not much better
than Mersenne twister, while /dev/random is the real thing.

And yes, you're right, the argument is precisely "computational security is
good enough".

By the way, /dev/urandom is also self-healing.

Which other advantages of /dev/random might be there, apart from the cold boot
case?

~~~
SeanLuke
> The idea that a lot of people get from that is that urandom is not much
> better than Mersenne twister

> And yes, you're right, the argument is precisely "computational security is
> good enough".

Point of order. MT is not, and is not meant to be, a cryptographically secure
random number generator.

Generally speaking, good RNGs may be divided into two categories. First, there
are RNGs designed for simulation. Goals: high periods, strong statistical
randomness, and speed. Then there are RNGs designed for crypto. These have one
primary goal: that even if you knew how the RNG worked, and had a long stream
of previous output, you could not [easily] predict the next output.

MT is in the first category. It is trivially predictable given 624 consecutive
32-bit outputs, since they collectively define its internal state.
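To make the predictability concrete, here is the classic state-recovery attack against CPython's `random` module, which uses MT19937: invert the output tempering on 624 consecutive 32-bit outputs, load the recovered words with `setstate`, and you have cloned the generator.

```python
import random

def untemper(y: int) -> int:
    """Invert MT19937's four output-tempering steps, in reverse order."""
    y ^= y >> 18
    y ^= (y << 15) & 0xEFC60000
    x = y
    for _ in range(5):                 # undo y ^= (y << 7) & 0x9D2C5680
        x = y ^ ((x << 7) & 0x9D2C5680)
    y = x
    x = y
    for _ in range(3):                 # undo y ^= y >> 11
        x = y ^ (x >> 11)
    return x & 0xFFFFFFFF

rng = random.Random(12345)             # the "victim" generator
observed = [rng.getrandbits(32) for _ in range(624)]

# Rebuild the internal state from the observed outputs and clone it.
clone = random.Random()
clone.setstate((3, tuple(untemper(y) for y in observed) + (624,), None))

assert clone.getrandbits(32) == rng.getrandbits(32)  # future output predicted
```

No CSPRNG would survive this: its whole design goal is that outputs reveal nothing about internal state.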

~~~
dfc
A point of order is limited to issues regarding procedure. It is not meant for
factual disputes or differences of opinion.

~~~
SeanLuke
Shouldn't your comment have been a point of order? :-)

------
belorn
I could be mistaken, but wasn't there a security issue a while back with keys
generated at boot time by using /dev/urandom? Since there wasn't enough
entropy, using urandom created a silent failure in the security design.

I think this is a key point that the article is missing. Silent security
failures are very, very bad. Blocking will at least provide an assertion that
either things are as expected, or the issue will be detected early.

~~~
ordinary
It's not missing:

 _Linux's /dev/urandom happily gives you not-so-random numbers before the
kernel even had the chance to gather entropy. When is that? At system start,
booting the computer._

I invite you to read the article, because there's more.

~~~
belorn
Yes, I did read it.

But my key point is that urandom can fail silently. /dev/random creates an
assertion that there is enough entropy, which is a useful assertion to have
until urandom offers the same guarantee.

~~~
KMag
But the entropy estimates are spurious, which means /dev/random can easily
fail silently. This is the whole motivation behind Bruce Schneier and Niels
Ferguson developing the Fortuna algorithm. Yarrow relied upon entropy
estimates to recover from state compromise. With Fortuna, you don't know how
long it will take to recover from state compromise (since it's difficult to
reliably estimate entropy), but you know it eventually will happen.

Also note that most POSIX systems will cat a random seed file into /dev/random
at startup and save entropy from /dev/urandom to the seed file at shutdown
(and also hopefully right after adding to the entropy pool at startup). As
long as an attacker doesn't have access to the disk or root permissions on the
host, insufficient entropy should only be a problem the first time a host
boots after install, presuming it gathers enough entropy after first boot.

FreeBSD and OSX use Yarrow for /dev/random and /dev/urandom, with absolutely
no difference in behavior between /dev/random and /dev/urandom. I'd like to
see Linux adopt Fortuna and make /dev/random non-blocking, though I can see
the case being made for making /dev/random block after each boot until the
estimated pooled entropy first goes above some threshold.

~~~
belorn
What happens with live CDs on read-only media? What about disk-less/thin
clients? What about image restoration from backup? In each of those cases, the
seed will either not be there or be identical for each boot.

I would like to see a smarter /dev/urandom which asserts that there is enough
fresh entropy. It would increase the trust of using it, and would put
/dev/random usage in the special category of one-time pads. Until then, each
case in which the saved seed can cause issues will have to be addressed if one
chooses to use /dev/urandom.

~~~
stouset
/dev/random should not be used for one-time pads. It's still generated from a
stream cipher-based CSPRNG and is not "true" randomness.

------
nly
Another good explanation of the difference between urandom and random is the
Linux source code, which is well commented:

[https://github.com/torvalds/linux/blob/master/drivers/char/r...](https://github.com/torvalds/linux/blob/master/drivers/char/random.c)

The operation is explained starting on line 52. The two paragraphs starting on
line 78 explain the relationship between the entropy estimator and the CSPRNG
(it relies on SHA-1).

> As more and more random bytes are requested [from /dev/urandom] without
> giving time for the entropy pool to recharge, this will result in random
> numbers that are merely cryptographically strong

~~~
danieldk
Thank you! That was actually much clearer. Line 307 is also relevant - it
seems that the urandom pool is reseeded at most every 60 seconds?

[https://github.com/torvalds/linux/blob/master/drivers/char/r...](https://github.com/torvalds/linux/blob/master/drivers/char/random.c#L307)

------
tkiley
I'm not a crypto expert but I find the arguments of this article (and tptacek
et al) convincing.

The one thing I still don't like about /dev/urandom is this: urandom's
proponents say that it only fails on first boot, because modern linux distros
capture entropy to seed urandom on following boots.

This means that in order for /dev/urandom to be ok, I have to depend on
specific functionality of both the kernel and the linux distribution. In
contrast, /dev/random is (arguably) harder for the distro to mess up, as it's
primarily a function of the kernel.

Given the number of times distros have screwed up crypto, what are the odds
that this will someday matter in the real world?

------
rlpb
> Virtual machines are the other problem. Because people like to clone them,
> or rewind them to a previously saved check point, this seed file doesn't
> help you.

There is work on fixing this problem (or, at least, improving the situation)
on Ubuntu: [http://blog.dustinkirkland.com/2014/02/random-seeds-in-
ubunt...](http://blog.dustinkirkland.com/2014/02/random-seeds-in-
ubuntu-1404-lts-cloud.html)

~~~
nodata
Cloning: okay. Regenerate the seed, and also regenerate your sshd's private
key.

But rewinding and restoring to a previous save point? What's the harm in
keeping the seed?

~~~
rlpb
I'm not sure what the author's intention was when he said that, but I think
there would be a problem if you cloned, and then returned to a previous save
point shared with other clones.

~~~
Tomte
That's one thing.

The other one is that after rewinding you've got the identical internal state
of the CSPRNG.

So you're possibly reusing the same random numbers (or closely related ones)
as before.

Of course, after a short while the rewound VM diverges. It's another flavor
of the cold boot case, really.

~~~
danielweber
If I rewind a VM, and it chooses the exact same random numbers this time, I
consider that everything acting correctly.

------
__david__
Why not make /dev/urandom block, but only at boot time until sufficient
entropy has been achieved? Then you could "trust" it, even in boot conditions
and in normal operations it would still never block…

Or if the change in semantics is too onerous, a new /dev/drandom (delayed-
random) could do this.
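Linux has in fact gained an interface with essentially these semantics: the getrandom(2) syscall (added in kernel 3.17) reads from the urandom pool but blocks until that pool has been initialized once, and never again afterwards. A sketch using Python's os.getrandom wrapper (3.6+, Linux only):

```python
import os

# getrandom() blocks only until the kernel CSPRNG has been seeded once,
# then behaves like /dev/urandom: it never blocks again.
if hasattr(os, "getrandom"):           # Linux-only API
    key = os.getrandom(32)             # may block very early at boot

    # With GRND_NONBLOCK it fails fast instead of blocking if the
    # pool isn't initialized yet:
    try:
        key = os.getrandom(32, os.GRND_NONBLOCK)
    except BlockingIOError:
        pass                           # pool not ready; wait or fall back
```

As a syscall it also sidesteps the other classic /dev/urandom failure modes: chroots without /dev and file-descriptor exhaustion.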

------
saurik
This article's premise seems to come down to the key argument under the
section "What's wrong with blocking?", which seems to summarize as "security
should not get in the way of the developer: if it does, they will turn off the
security". This frankly just seems silly: if the developer cannot be trusted
to not do something insane (seriously? "patching out the call to random()"?!),
then they should not be developing :/.

This is like saying that using consistent databases for things that require
consistency is also "wrong", because the developer is going to add some
ludicrous poorly-implemented caching layer in front on production as the
performance dips, leading you to have a poorly-controlled inconsistent system
anyway. It's an argument I appreciate, but one that indicates a deeper problem
of developers thinking they can fix things they can't.

...and that's the only argument in the entire document for why /dev/random is
_bad_ : everything else is a justification for why /dev/urandom should be
considered a reasonable alternative to /dev/random, most of which is difficult
to confirm or deny unless you are a cryptographer. I mean, from a naive
standpoint, it would sound like "putting all your eggs in one basket" could
lead you to a problem, but how would I know, right?

Helpfully, the article claims to quote cryptographers on the matter. However,
and this pretty much damns this article for me: he takes these quotes out of
context. If you read the original sources, Daniel Bernstein makes it clear
that /dev/urandom is broken on Linux (more on this later), and Thomas Pornin
was responding to a user that was simply trying to get unique random values
(where /dev/urandom is obvious).

As for Thomas Ptacek's emphatic "use /dev/urandom" repetition, his article was
about people attempting to use user-land replacements for the kernel
implementations, not people who were using /dev/random. In essence, he's
trying to make the argument that people shouldn't be _so afraid of
/dev/urandom_ so as to _use something clearly worse_. He _does_ state that
/dev/random is overrated, but correctly points out its flaws.

Back to this article: after a ton of argumentation that the people suggesting
that there is a key difference between /dev/random and /dev/urandom are
inherently being misleading, the author gets around to quoting a section of
the man page that sounds perfectly reasonable, and even is willing to cede
that there is no issue using /dev/random for those use cases. I think the
author is just mad because some service is slow. :/

Now, here's what I'll leave you with: an argument why this author's article is
not just wrong but promoting beliefs that are actively harmful, an argument
that I hope can be believed even if you aren't a cryptographer and does not
rely on me being a cryptographer: it's an argument based on practical concerns
only, with a demonstration of the problem not just in theory, but as it
happened to real people/developers.

[edit: Apparently, I misremembered this Android issue, so when I pulled
sources for it I ended up with something that is not actually an issue with
/dev/urandom. I have added details on a different situation where /dev/urandom
actually was the culprit, but the argument isn't _quite_ as strong. I was
hoping for an argument that anyone would believe without invoking even a
single hypothetical, but sadly I screwed up: I'm sorry for the confusion :/. I
think the new example still makes a strong, though not impervious, point.]

So, if the author got his way, then all source code everywhere would be
referencing /dev/urandom instead of /dev/random for all use cases (including
"long-lived keys"). However, the author also admits that /dev/urandom "isn't
perfect", and that on Linux it _never_ blocks, even when it really should. It
is then claimed that the correct solution to this problem is that you should
seed virtual machines with entropy before using them.

Look: that just isn't always possible. I say this not from theory, but from
the trenches of information security. On Android, some poor developer made
java.security.SecureRandom use /dev/urandom. This meant that all Java
libraries that rely on random numbers were getting ones from urandom, even
when they claimed that they wanted "cryptographically strong" ones. Put short:
the author's dream come true, codified into the library.

Sadly, users expect to be able to use Android devices soon after they boot,
the user is not in control of the entropy sources, and the people who build
the kernel drivers for these devices often didn't include enough entropy
sources to fill the pool faster. This meant that all of this code was now
broken: instead of blocking until "seriously-random numbers" could be
generated, it would get "reasonably random-looking ones".

This is not a theoretical issue: people built bitcoin wallet applications
sitting on top of these APIs, and some of the people using those wallets ended
up with bitcoin addresses secured by keys that were painfully predictable
across usages on different devices (the specific concern people bring up using
/dev/urandom!). This was looked into, traced back to this design flaw, and
Google even commented on the situation.

[edit: For clarity, this is the part that is wrong; as pointed out in the
author's rebuttal to my comment, SecureRandom was actually doing something
ludicrous here--the kind of thing that Thomas Ptacek was angry about ;P--and
thereby Google's own article, which I had pulled up as a source for something
I had apparently misremembered, claims that developers could consider using
/dev/random _or_ /dev/urandom to replace it.]

[http://android-developers.blogspot.com/2013/08/some-
securera...](http://android-developers.blogspot.com/2013/08/some-securerandom-
thoughts.html)

[edit: Thankfully, there is another situation I can cite: the attack on
routers from a year or two ago that involved low-entropy keys that were
generated at boot. It is the same issue, but what makes this a weaker argument
is that the user is now the developer, and the developer is actually in
control of the device and could have fixed the entropy pool as stated by both
this author and by Daniel Bernstein's full comment.]

[edit:]

So, in this new example, the key issue is that a bunch of embedded systems--
routers and firewalls from companies such as Cisco--were generating keys to
secure their configuration portals and administrator consoles using
/dev/urandom. This was done on first boot (when there would be no entropy
available), and the result was that many RSA keys ended up sharing the exact
same input prime numbers. If you are trying to factor a large semi-prime, and
you have another large semi-prime that shares a factor, you can break it apart
very easily with simple math.
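The "simple math" is just a GCD, which is cheap even for thousand-bit moduli. A toy demonstration, with small primes standing in for real RSA-sized ones:

```python
from math import gcd

# Two "RSA moduli" that accidentally share a prime factor, as happens
# when two devices draw the same first prime from an unseeded urandom.
# (Tiny primes for illustration; real moduli are 1024+ bits.)
p = 1000003                  # the shared prime
n1 = p * 1000033             # device A's public modulus
n2 = p * 1000037             # device B's public modulus

shared = gcd(n1, n2)         # Euclid's algorithm: near-instant
assert shared == p
q1, q2 = n1 // shared, n2 // shared    # both moduli now fully factored
```

This is exactly what the factorable.net researchers did at scale: pairwise GCDs across millions of harvested public keys, no factoring required.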

Some researchers went through to determine why this happened, and their
conclusion was that reliance by library developers on /dev/urandom, being used
on devices and at moments where /dev/urandom didn't end up working very well,
was the "root cause" of the issue, and made a recommendation to library
developers that usage of less secure mechanisms should be a non-default option
and that any tradeoffs in its selection should be clearly documented, lest
security be contingent on factors that do not immediately seem related to
security.

[https://factorable.net/weakkeys12.extended.pdf](https://factorable.net/weakkeys12.extended.pdf)

> In the final component of our study, we experimentally explore the root
> causes of these vulnerabilities by investigating several of the most common
> open-source software components from the population of vulnerable devices
> (Section 5). Based on the devices we identified, it is clear that no one
> implementation is solely responsible, but we are able to reproduce the
> vulnerabilities in plausible software configurations. Every software package
> we examined relies on /dev/urandom to generate cryptographic keys; however,
> we find that Linux’s random number generator (RNG) can exhibit a boot-time
> entropy hole that causes urandom to produce deterministic output under
> conditions likely to occur in headless and embedded devices. In experiments
> with OpenSSL and Dropbear SSH, we show how repeated output from the system
> RNG can lead not only to repeated long-term keys but also to factorable RSA
> keys and repeated DSA ephemeral keys due to the behavior of application-
> specific entropy pools.

> When we disabled entropy sources that might be unavailable on a headless or
> embedded device, the Linux RNG produced the same predictable stream on every
> boot. The only variation we observed over 1,000 boots was the position in
> the stream where sshd read from urandom.

> On stock Ubuntu systems, these risks are somewhat mitigated: TLS keys must
> be generated manually, and OpenSSH host keys are generated during package
> installation, which is likely to be late in the install process, giving the
> system time to collect sufficient entropy. However, on the Fedora, Red Hat
> Enterprise Linux (RHEL), and CentOS Linux distributions, OpenSSH is
> installed by default, and host keys are generated on first boot. We
> experimented further with RHEL 5 and 6 to determine whether host keys on
> these systems might be compromised, and observed that sufficient entropy had
> been collected at the time of key generation (due to greater disk activity
> than with Ubuntu Server) by a slim margin. We believe that most server
> systems running these distributions are safe, particularly since they likely
> have multiple cores and gather additional entropy from physical concurrency.
> However, it is possible that other distributions and customized
> installations do not collect sufficient entropy on startup and generate weak
> keys on first boot.

> For library developers: Default to the most secure configuration. Both
> OpenSSL and Dropbear default to using /dev/urandom instead of /dev/random,
> and Dropbear defaults to using a less secure DSA signature randomness
> technique even though a more secure technique is available as an option. In
> general, cryptographic libraries should default to using the most secure
> mechanisms available. If the library provides fallback options, the
> documentation should make the trade-offs clear.

[/edit]

Of course, this is covered by the idea that these devices were not properly
configured; but that's kind of the point, right? What everyone is saying about
/dev/urandom is that it isn't guaranteed to produce anything "seriously"
random; it happens to, most of the time, on most operating systems, produce
something _incredibly_ random, but some times, on some operating systems, it
might produce something that is barely random at all.

This means that if everyone switches to using /dev/urandom in their code, a
ton of libraries are going to work on some systems, and not on others; they
might work on a properly configured server, but not on an Android device, or
might work on one cloud provider, but not another... this is insane... if I
download "secure off-the-record messaging implementation library for Linux",
am I going to expect it to fail like this?

Certainly, the developers this article seems to believe are both common and
acting reasonably enough to be worked around by people writing code--the ones
who are supposedly going around patching random() out their libraries ;P--are
not going to realize their code isn't secure, or that they need to
configure their kernel or operating system distribution to make it secure; but
this is even going to burn the people reasonable enough to download code and
not try to hack it :/.

[edit: And, to close the earlier edit: clearly the people who were building
these embedded routers didn't realize these tradeoffs. The author of this
article is thereby asking developers to be cognizant of how these different
pseudo-random number generators operate and design their entire systems in
ways that will make specific sub-components, components they probably don't
understand well and "just want to be secure", continue to work correctly under
potentially-modified assumptions. While they technically can do this--unlike
the sadly-flawed Android argument I wanted to make, where users and developers
had no control--this seems like a much higher bar than keeping people from
taking random() and patching it out entirely due to a performance benefit, and
comes down to a simple heuristic.]

~~~
Tomte
Thanks for laying out your thoughts, but obviously I disagree.

Let me rebut two points that kind of irk me:

You claim that the Android problem that bit several Bitcoin wallets was due to
java.security.SecureRandom using /dev/urandom.

That is mistaken, as the blog posting you gave clearly shows:

"Developers who use JCA for key generation, signing or random number
generation should update their applications to explicitly initialize the PRNG
with entropy from /dev/urandom or /dev/random."

So, in their opinion, /dev/urandom is fine.

Furthermore you claim that I've taken quotes out of context.

I was aware that there is the danger of misunderstanding these people's point
of view, so I emailed all three of them, right when I put this web page up
some days ago, specifically asking them if they felt that I might be
misrepresenting them or if they are otherwise unhappy about me using their
names.

Of those three two replied (both very quickly).

Daniel Bernstein replied very briefly with "Seems reasonable.", but noted that
he disagrees with the "more entropy cannot hurt" point. That's why I added the
little sidebar with a link to his blog posting.

Thomas Pornin seems to have taken the time to really read the article; he
wrote "That's a fine page. I like it.", noted a mistake in the boot scripts
section (which I have fixed) and suggested some additional points to discuss
(which I did not put on the page).

Off-topic remark: All in all I was really fascinated by how easy it is to get
in touch with these highly-respected people, and how respectfully they treat
"us normals". :-)

~~~
saurik
Wow, you totally win on the Android thing; I apparently misremembered that
issue. I'm really sorry: I pulled a source for that based on memory of it
having happened, but failed to notice that I had misremembered the cause of
the problem. I've added a couple inline edits to my comment response admitting
the ways in which it is wrong. Thankfully, this isn't the only example of this
problem, it was just the one that was easiest for me to cite, so I'm going to
go add a couple paragraphs shoring up the argument [edit: done; I also left
the original example, but admit clearly how it is flawed].

> Furthermore you claim that I've taken quotes out of context.

As for this, I don't just claim it: I demonstrate it; the fact that these
people also separately said your article sounded reasonable doesn't change the
original reasons they made their comments. It hurts my position that they
approved of your article, but it still irks me that you are using these quotes
as evidence, when they aren't actually good arguments for your position. "I
e-mailed these people and they agreed with my article" would have been a much
stronger statement to me than "they said these things, out of context and in
ways that sort of undermine my position, that you should believe".

------
stcredzero
What if certificates and SSL weren't guaranteed to work properly "on cold
boot" and the industry expected OS implementers to know this and take
appropriate steps to "do it right?" This wouldn't make any sense at all. Yet
CSPRNG are pretty foundational to cryptographic tools across the board, and
the programming field seems to have this kind of hair-shirted attitude about
it.

We as a technical community and we as a society need to start using the same
stances and strategies that governments and agencies like the NTSB use to
prevent accidents. When planes crashed in the wooly pioneering days of
aviation, people probably stood around sadly tsk-tsking someone's failure to
recognize a stall. Today, the NTSB conducts root cause analyses and the
industry takes concrete steps to prevent future mishaps. (Like equipment that
detects stall and warns the pilot.)

In this case, why don't we simply have classes of devices that are expected to
implement crypto for end-users, and require them to be built with generous
entropy pools? (To head off the expected smart-asses: each one gets their own
unique one, of course.)

(EDIT: An analogy: What if you knew that pilots all had conflicting opinions
on how to fly and maintain a plane, with constant acrimonious debates, and
where some large swathe of all pilots simply were operating on misinformation?
Would you fly? Would you expect lots more crashes? Now ask yourself what the
situation is like with computer security, where knowledgeable people expect
security to always be broken.)

~~~
marcosdumay
> In this case, why don't we simply have classes of devices that are expected
> to implement crypto for end-users, and require them to be built with
> generous entropy pools?

Intel added a generous entropy source to their processors. The result is that
nobody trusts it as a single entropy source, and /dev/random blocks anyway.
Also, we are learning that using it only as an added source may also be
harmful, and maybe we should ignore it completely.

I don't think this'll have any simple solution, although the idea of a CSPRNG
that may block only at boot is interesting.

~~~
stouset
There's no problem with combining non-entropic (or even malicious!) sources of
entropy into an entropy pool. As long as _some_ (and enough) of the input is
unpredictable to an attacker, they can't predict any outputs of the system.
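A sketch of why this holds, using a hypothetical mix() helper that combines sources through a hash: predicting the output requires predicting every input the attacker doesn't already control.

```python
import hashlib
import os

def mix(*sources: bytes) -> bytes:
    """Combine entropy sources by hashing them with length prefixes
    (so adjacent sources can't be ambiguously re-split)."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()

malicious = b"\x00" * 32     # e.g. a backdoored hardware RNG's output
honest = os.urandom(32)      # one genuinely unpredictable source
seed = mix(malicious, honest)
# Predicting `seed` still requires predicting `honest`; a known or even
# adversarially chosen input can't cancel out the unknown one.
```

The caveat in the RDRAND debate is subtler: the worry there was malicious hardware observing or tampering with the pool *after* mixing, not the mixing math itself.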

------
binarycrusader
The author seems to be assuming that all operating systems use the same
implementation or have the same man page even for these random number
generator devices -- they do not.

For example, on Solaris from random(7D):

    
    
      The   /dev/random   and /dev/urandom  files  are suitable
      for applications requiring high quality random numbers
      for cryptographic purposes.
      ...
      While bytes produced by  the /dev/urandom  interface are
      of lower quality than bytes produced by /dev/random, they
      are nonetheless suitable for less demanding  and shorter
      term cryptographic uses such as short term session keys,
      paddings, and challenge strings.
    

In short, on Solaris, you can use either one although you are encouraged to
only use /dev/urandom for specific cases.

~~~
Dylan16807
How much do you want to bet that that man page isn't grossly misleading, just
like the linux one?

~~~
binarycrusader
I'm very much willing to bet the Solaris man page is NOT grossly misleading,
since I know the past and present authors / maintainers of the cryptography
framework on Solaris personally :-)

Also, one of the Solaris Security Engineers wrote about random number
generation on Solaris extensively just last year:

[https://blogs.oracle.com/darren/entry/solaris_random_number_...](https://blogs.oracle.com/darren/entry/solaris_random_number_generation)

Some blogs from other Solaris Security engineers:

Valerie Fenwick: [http://bubbva.blogspot.com/](http://bubbva.blogspot.com/)

Dan Anderson: [https://blogs.oracle.com/DanX/](https://blogs.oracle.com/DanX/)

Darren Moffat:
[https://blogs.oracle.com/darren](https://blogs.oracle.com/darren)

...and last, but not least, Enrico Perla (doesn't have a blog at the moment as
far as I know) did author this book and is a Solaris Engineer and someone that
works on Solaris Security-related things:

[http://www.amazon.com/Guide-Kernel-Exploitation-Attacking-
Co...](http://www.amazon.com/Guide-Kernel-Exploitation-Attacking-Core)

~~~
Dylan16807
Okay, you've convinced me, they definitely put in a somewhat weaker urandom.
Also that function for getting random numbers with no zero bytes 'for key
generation' terrifies me a bit.

~~~
binarycrusader
Yeah, Solaris has pretty strong crypto verifications thanks to a great
security team :-)

I'm glad I could provide useful info.

------
dspillett
_> Imagine an attacker knows everything about your random number generator's
internal state ... But over time, with more and more fresh entropy being mixed
into it, the internal state gets more and more random again._

Of course that probably only counts if an external or local unprivileged
entity manages to become informed of the random source's internal state. If
the attacker has direct access to the kernel's state then they probably have
access to influence (or at least monitor) the incoming entropy such that they
can _stay_ informed of the full internal state.

There is a point in some attacks beyond which your only halfway-guaranteed
solution is the metaphorical orbital nuke platform.

~~~
spoiler
If you always use the same seed, and the attacker knows the PRNG
implementation details, then he can derive the seed, and after that he can
predict the next value. So, reseeding makes sure this doesn't happen. I think.
I'm not a cryptography expert, more of an aspiring noob. :D

You don't need access to the machine.
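
The point about known seeds can be shown with a toy example (this uses
Python's non-cryptographic `random` module purely as an illustration; a real
attack target would be a weakly seeded CSPRNG, but the principle is the same):

```python
import random

# Two generators seeded identically produce identical output, so an
# attacker who learns or guesses the seed can predict every value
# the victim's generator will ever emit.
victim = random.Random(1234)
attacker = random.Random(1234)

victim_outputs = [victim.getrandbits(32) for _ in range(5)]
predicted = [attacker.getrandbits(32) for _ in range(5)]
assert victim_outputs == predicted  # perfect prediction
```

Reseeding with fresh entropy breaks this: once unknown bits are mixed into
the state, the attacker's copy of the generator diverges from the victim's.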

~~~
dspillett
Aye, but if the attacker is local and can see the pool directly, then they may
be able to circumvent your attempts to reseed so you can't escape the
situation that way. Once someone has that level of access all is lost: turn it
off and build a new one!

Of course if the attacker is not that far in and has derived the PRNG state by
other means (and you can stop them just relearning the state by the same
means) then reseeding would work.

------
zorlem
The only sensible long-term solution for Linux seems to be to adopt FreeBSD's
way of /dev/random operation - block once until enough entropy is gathered and
then never block.

This would make sure that distro vendors don't even get a chance to mess up
initial seeding at boot time. It will also force vendors of embedded or
"cloud" distributions (eg. Ubuntu's AWS images) to find a way to pre-seed the
images to reduce the initial boot times.

Unfortunately there is such a huge amount of software that depends on this
particular difference between /dev/random and /dev/urandom that I don't see
the change happening soon.

------
nzp
Always good to see people trying to fight superstition. :) Just one tiny
nitpick if you don't mind: it seems to me it's somewhat unclear what FreeBSD
actually does. In FreeBSD there is in fact no /dev/urandom; it's just a
symlink to /dev/random. If one doesn't already know this, relevant parts of
the last section can be a bit confusing, IMO, so maybe you should point out
that they are the same device (or that urandom doesn't really exist, or
something like that).

~~~
Tomte
Thanks!

------
gnur
I found this article very confusing, second paragraph:

    
    
      /dev/urandom is a pseudo random number generator, a PRNG,
       while /dev/random is a “true” random number generator.
    
      Fact: Both /dev/urandom and /dev/random are using the exact same CSPRNG (a cryptographically secure pseudorandom number generator). They only differ in very few ways that have nothing to do with “true” randomness.
    

What? So it is a myth, then why does the line start with "fact"?

~~~
milliams
He's saying:

    
    
      This a commonly believed myth
    
      Fact: This is the real fact

------
MichaelMoser123
Most libraries use some form of specialized crypto random number generator
(OpenSSL uses the ssleay random number generator; the JDK has the same scheme
for java.security.SecureRandom): /dev/random is used to seed it with some
random numbers, each iteration computes the SHA-1 checksum of the previous
state, and the algorithm returns part of the state as random number output.

The assumption here is that the initial state can't be guessed; I guess you
would rather use /dev/random for that, even if it blocks. Now be careful with
the initialization step of your favorite crypto library: most inconsistencies
and fishy tricks happen right here!
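
The hash-chain scheme described above can be sketched roughly like this (a
toy illustration only; `HashChainPRNG` is a made-up name, and this is not the
actual ssleay RNG or SecureRandom implementation):

```python
import hashlib
import os

class HashChainPRNG:
    # Toy sketch of a hash-chain PRNG: seed an internal state, then
    # repeatedly hash it, returning part of each digest as output.
    def __init__(self, seed=None):
        # Seed the state from the kernel pool (os.urandom is used
        # here for portability; the comment above suggests the
        # blocking /dev/random for this step).
        self.state = seed if seed is not None else os.urandom(20)

    def next_bytes(self):
        # Hash the previous state; half the digest becomes output,
        # the other half is folded back into the new state.
        digest = hashlib.sha1(self.state).digest()
        out, mix = digest[:10], digest[10:]
        self.state = hashlib.sha1(mix + self.state).digest()
        return out
```

This makes the stated assumption concrete: given the seed, the entire output
stream is determined, so everything rests on the seed being unguessable.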

Another issue is that such a random number generator introduces locks. I
have a small program that encrypts everything with RSA and uses
multiprocessing to speed up the process. RSA encryption is supposed to use a
padding scheme that needs to generate random numbers, so it locks; you don't
have this problem with decryption, which is where it counts, because RSA
decryption is much slower than encryption.

here is my project:
[http://mosermichael.github.io/cstuff/all/projects/2014/02/24...](http://mosermichael.github.io/cstuff/all/projects/2014/02/24/openssl.html)

------
zentrus
As people have pointed out in this thread and other articles, clearly there
can be issues with using /dev/urandom on boot where there is a lack of
entropy. What I would love to know more than anything on this topic though, is
whether it might ever make sense to use /dev/random after boot--after the
CSPRNG has been properly seeded with good entropy. Specifically, I would love
to hear of any real attacks. Anyone?

~~~
nzp
It makes no sense. The only difference between /dev/random and /dev/urandom
is that random uses an estimate of the seed entropy (which doesn't make much
sense to begin with) to block when it "estimates" that there wasn't enough
entropy in the seed, but apart from that it gives numbers _identical_ to the
ones urandom gives.

~~~
zentrus
Maybe I'm not following your response. What I'm saying is that there _are_
reasons to use /dev/random at least after boot (to block for seed entropy).
Saurik gives a real world example of the "cold boot" problem as other articles
have done in the past. Are you saying we should be able to simply stop there
and then just use /dev/urandom? If so, then why is the CSPRNG re-seeded every
60 seconds? It would seem that there are reasons for doing so.

~~~
nzp
> Are you saying we should be able to simply stop there and then just use
> /dev/urandom?

Yes.

> If so, then why is the CSPRNG re-seeded every 60 seconds? It would seem that
> there are reasons for doing so.

From what I understand about it (IANACryptographer), mostly to safeguard
against situations where an attacker can learn the internal state of the
CSPRNG at one moment. But I don't think this is too relevant for the problem
at hand.

The only difference in Linux between /dev/random and /dev/urandom is that
/dev/random always plays a guessing game about the seed entropy. The main
argument here is, I guess, that this is a silly game. How can you really
estimate entropy? It doesn't only depend on what physical sources in the
machine give you, but also on what the attacker knows or can know about these
sources and the state of the machine. How do you estimate that?

This was the main motivation behind the Fortuna CSPRNG by Schneier, Ferguson
and Kelsey. There is no guessing going on; rather, the algorithm is designed
not to need to guess. Interestingly, it seems to me that the way Fortuna
works would also somewhat mitigate the kind of attack DJB talks about in the
article referenced by OP[1].

[1]
[http://blog.cr.yp.to/20140205-entropy.html](http://blog.cr.yp.to/20140205-entropy.html)
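
Fortuna's central trick — accumulate entropy into multiple pools and reseed
on a fixed schedule instead of estimating entropy — can be sketched roughly
as follows. This is a heavily simplified illustration of the multi-pool idea
only (the published design also uses a block cipher in counter mode, a reseed
timer, and more), and `FortunaSketch` is a made-up name:

```python
import hashlib

class FortunaSketch:
    NUM_POOLS = 32

    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(self.NUM_POOLS)]
        self.key = b"\x00" * 32   # the generator's internal state
        self.reseed_count = 0
        self.next_pool = 0

    def add_event(self, data: bytes):
        # Entropy events are spread round-robin across the pools; no
        # attempt is made to estimate how much entropy they carry.
        self.pools[self.next_pool].update(data)
        self.next_pool = (self.next_pool + 1) % self.NUM_POOLS

    def reseed(self):
        # Pool i contributes only to every 2^i-th reseed, so even if
        # an attacker sees or influences some inputs, the rarely used
        # pools eventually accumulate enough unknown entropy to push
        # the state back out of the attacker's knowledge.
        self.reseed_count += 1
        material = self.key
        for i, pool in enumerate(self.pools):
            if self.reseed_count % (2 ** i) == 0:
                material += pool.digest()
                self.pools[i] = hashlib.sha256()
        self.key = hashlib.sha256(material).digest()
```

The staggered schedule is the point: it guarantees recovery from state
compromise without ever having to answer "how much entropy was that?".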

------
peterwwillis
What is the point behind an 18-point font? Even if you Ctrl-minus to reduce
the font you get this tiny squeezed column of text and _still_ have to scroll
forever. My scrolling finger and eyes are exhausted.

~~~
Tomte
It's hard to please everyone.

[http://www.reddit.com/r/linux/comments/1zt76a/myths_about_de...](http://www.reddit.com/r/linux/comments/1zt76a/myths_about_devurandom/cfww9gs)

------
blazespin
For the vast majority of cases, urandom is fine. I am glad random exists
though.

------
nodata
Nice try, NSA!

------
zobzu
tldr..

haveged, /dev/random

