
On Linux's Random Number Generation - raesene9
https://research.nccgroup.com/2019/12/19/on-linuxs-random-number-generation/
======
tytso
There are a number of things that the blog post gets wrong. First of all, it
was not Jason A. Donenfeld, the author of WireGuard, who added the
ChaCha20-based cryptographic random number generator to Linux. It was me, as the
maintainer of Linux's random number generator. The specific git commit in
question can be found at [1].

[1]
[https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e192be9d9a30555aae2ca1dc3aad37cba484cd4a)

Secondly, I think a lot of people have forgotten what things were like in the
early 90's. Back then encryption software was still export controlled, so we
couldn't put DES (AES wasn't released until 2001) into the kernel without
triggering all sorts of very onerous US government restrictions (and Europe
was guilty of this too, thanks to the Wassenaar Arrangement); you couldn't just
put things up on an FTP site. Also, back then, there was much less public
understanding of cryptography; the NSA definitely knew a lot more about
cryptanalysis than the public world did, and while it was known by 1996 that
MD5 had Problems, there wasn't huge trust that the NSA hadn't put a back door
into SHA-1, which was designed by them with zero explanation of its design
principles.

This is why the original PGP implementation, as well as the Linux Kernel
random number generator, was very much focused on entropy estimation. We knew
that it was potentially problematic, but then again, so was relying on
cryptographic algorithms that were potentially suspect. There was a good
reason why in the 90's it was generally considered a very good idea to be
algorithm agile; there simply wasn't a lot of trust in crypto design, and
people wanted to be able to swap out cryptographic algorithms if some
algorithm (e.g., MD4, and later MD5) was found to be insecure.
So the snide comments about people not trusting algorithms seem to miss the
point that even amongst the experts in the field --- for example, at the
Security Area Directorate at the IETF, of which I was a member during that
time --- there was a lot of thinking about how we could deploy upgrades if it
were found that some crypto algorithm had a fatal weakness, and we would need
to swap out crypto suites with minimal interoperability issues.

Unfortunately, being able to negotiate crypto suites leads to downgrade
attacks, such as we've seen with TLS --- but what people forget is that when
the original SSL/TLS algorithm suites were designed, people thought they were
good! It was only later that some crypto suites were found to be insecure,
leading to the downgrade attack issues. But it also shows that people were
_right_ to be skeptical about crypto algorithms in that era.

Since then, we've learned a lot more about cryptographic algorithm design, and
so people are a lot more confident that algorithms can be relied upon to be
secure --- or, at least, other issues are much more likely to be the weak link.
That's why Wireguard is designed without any ability to negotiate algorithms,
and as a result it is _much_ simpler than IPsec. And it's probably the
right choice for 2019. (At least, until Quantum Computing wipes out most of
our existing crypto algorithms; but that's a rant for another day.)

As far as monitoring entropy levels in Linux, in general, the primary reason
why we need it is because even if we are willing to invest a lot of faith into
the ChaCha20 CRNG, we still need to provide a secure random number seed from
_somewhere_. And that can be tricky. If you fully trust a hardware random
number generator, then sure, no worries. Or if you are using a cloud provider,
so you _have_ to trust the hypervisor anyway, then using virtio-rng to get
randomness from the cloud provider is fine. (If your cloud provider wants to
screw you, they can just reach into guest memory or intercept network or
disk traffic at boot time, so if you don't trust them not to backdoor
virtio-rng, you shouldn't be using that cloud provider at all.)

As far as whether or not to trust RDRAND, the blog post seems to assume that
it's absurd to think the NSA could possibly have backdoored the CPU
instruction. On the other hand, there are those who remember DUAL-EC-DRBG,
where most people _do_ now believe the NSA put in a backdoor. And the
Snowden revelations did show that NSA teams were putting backdoors into Cisco
routers by intercepting them between when they were shipped and when they were
delivered. So given that you can't audit the Intel CPU's RDRAND, and Intel
_is_ a US company, it's not _that_ insane to perhaps have some qualms about
RDRAND. After all, if you were using a chip provided by a Chinese company
(where the owner of said company might also have been a high ranking general
in the PLA), or a CPU provided by a Russian company controlled by a Russian
oligarch who is good friends with Putin and who also had a background in the
KGB --- is it _really_ insane to be worried about those CPU's? Let's not even
talk about concerns over China and 5G telephony equipment. Why is it then
completely absurd for some people to be concerned about the complete
inauditability of RDRAND, and the fact that no functional or statistical test
can determine whether or not there is a backdoor?

Of course, if you really don't trust a CPU, you should simply not use it. But
creating a CPU from scratch, using only 74XX TTL chips really isn't a
practical solution. (When I was an undergraduate at MIT, we did it as part of
an intro CS class; but MIT doesn't make its CS students do that any more.) So
the best we can try to do is to try to spread out the entropy sources; that
way, even if source 1 might be compromised, if it is being mixed with source 2
and source 3, hopefully at least one of them is secure. (Or maybe source 1 is
backdoored by the NSA, and the source 2 is backdoored by the Chinese MSS, but
if we hash it all together, hopefully the result will only be vulnerable if
the NSA and MSS work together, which hopefully is highly improbable.)
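The hash-everything-together idea can be sketched in a few lines of Python (the source strings here are hypothetical stand-ins; the kernel's actual pool mixing is more elaborate):

```python
import hashlib

def mix_sources(*sources: bytes) -> bytes:
    """Hash several entropy inputs together; the result is unpredictable
    as long as at least one input is unpredictable to the attacker."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix to avoid ambiguity
        h.update(s)
    return h.digest()

# Even if the first two sources are attacker-controlled, the output is
# still unpredictable if the third is a secret the attacker doesn't know.
seed = mix_sources(b"backdoored-rdrand", b"backdoored-hwrng", b"good-secret")
```

The length prefix matters: without it, `mix_sources(b"ab", b"c")` and `mix_sources(b"a", b"bc")` would collide.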

The bottom line is that it's complicated. Of course I agree that we should use
a CRNG for most purposes. But we still have to figure out good ways of seeding
a CRNG. And in case the kernel memory gets compromised and read by an
attacker, or if there is a theoretical vulnerability in the CRNG, it's good
practice to periodically reseed the CRNG. And so that means you still need to
have an entropy pool and some way of measuring how much entropy you think you
have accumulated, and how much has been possibly revealed ("used") for
reseeding purposes. In a perfect world, of course, assuming that we had
perfectly trustworthy hardware to get an initial seed, and in a world where we
are 100% sure that algorithms are bug free(tm), then a lot of this isn't
necessary. But in real world engineering, we have safety margins because
sometimes theory breaks down in the face of reality....

~~~
est31
What's your response to Pornin's criticism of 5.3 changes?

> Linux 5.3 will turn back getrandom() into /dev/urandom with its never-
> blocking behavior, because, quite frankly, Linus’s opinions on his own
> mastery of RNG theory exceed his actual abilities. The reasoning seems to be
> that if there is “not enough entropy”, then the application should make an
> interpretative dance of some kind to promote the appearance of new hardware
> events from which the entropy can be gathered. How the application does
> that, or why the kernel should not do it despite being much closer to the
> hardware, is not said

~~~
tytso
I wasn't a real fan of the 5.3 change, but it was better than some of the
alternatives that people were proposing. At the end of the day, the problem is
that userspace just shouldn't be trying to get randomness during early boot.
But if they do, and we make the kernel more efficient, things can break, and
Linus believes that if there is a user-visible regression we Have to Fix it,
even if the root cause is broken user space.

In this particular case, what triggered this was an optimization in ext4 in
how we did directory readahead, which reduced the number of I/O's done during
the boot sequence. Some user space program in early boot tried to call
getrandom(2), and it blocked. With the smaller number of I/O's, on some
hardware platforms, this stopped the boot sequence in its tracks; the stalled
boot generated no further interrupt events, so no further entropy was
gathered, leading to an indefinite hang.

So what do we do? We could revert the ext4 optimization, but what it did was
draw attention to the fact that in the absence of a hardware random number
generator which everyone trusted, what we had in terms of CRNG initialization
was fragile.

Now the blog posting is inaccurate here as well: it is _not_ the application
which needs to make an interpretative dance of some kind. We actually _are_
doing it in the kernel, and there is at least some hope that on an x86, making
some assumptions about how the caches and clocks work, it _probably_ is
secure. I am actually worried that it won't be secure enough on simpler RISC
cores, such as ARM, RISC-V, MIPS, etc.

The real right answer is that we should be pulling from as many hardware
random number generators (that are _designed_ for crypto purposes) as are
available, such as from UEFI or the TPM, and mix that in as well. We'll
probably continue to have config options so that the person building the
kernel can decide whether or not those will be trusted. That hasn't happened
yet, but I've been trying to recruit some volunteers to implement this in UEFI
boot, or using NERF or Coreboot, etc. Until we do, or for hardware that doesn't
have trusted hwrng's, starting in 5.3, we now have an in-kernel interpretative
dance which is the fallback, for better or worse.

I'm not super-fond of that, but it was better than the alternative, which was
_no_ interpretive dance, and simply having getrandom(2) return "randomness"
regardless of whether or not we thought it was random. On modern x86
processors, we will be mixing in RDRAND, so if you trust RDRAND, you'll
probably be OK, interpretative dance or not. But the big worry is going to be
on simpler CPU's such as RISC-V.

Ultimately, there are no easy solutions here. Arguably, just gathering timing
events during the boot was also an "interpretive dance", since how much
uncertainty there really is from SSD operations, and whether the SSD is using
an oscillator different from the one used by the CPU, etc., involves a certain
amount of hand-waving. So the only real solution is real, carefully designed,
hardware RNG's. But then the question is how can you be sure they are
trustworthy? This conundrum has always been there.

For myself, I use a ChaosKey[1] and make sure it is contributing to the
entropy pool before I generate long-term public keys. Of course, can I be sure
that the NSA hasn't intercepted my ChaosKey shipment and trojaned it, the way
they did with Cisco Routers? Nope. I can only hope that I'm not important
enough so that they wouldn't have bothered. :-)

[1] [https://keithp.com/blogs/chaoskey/](https://keithp.com/blogs/chaoskey/)

~~~
upofadown
>We'll probably continue to have config options so that the person building
the kernel can decide whether or not those will be trusted.

My understanding is that XORing a trusted seed source with any number of
untrusted seed sources results in a trusted seed source. What would be the
point of such kernel options?

~~~
tytso
The problem is "trusted" is as much a social construct as it is a technical
one.

For example, if you are an NSA employee, you might be utterly confident that
the NSA didn't twist Intel's arms to put in a backdoor into RDRAND --- e.g.,
that it isn't AES(NSA_KEY, SEQ++) --- and even if it were, you would be sure
that as a US citizen, you would be safe from intrusive attacks to spy on your
communications without a FISA warrant, which of _course_ would only be done
with super scrupulous attention to legal process, the recent IG report on
the Carter Page FISA warrant to the contrary.

In 2019, if you are a Republican member of the House of Representatives, such
as Devin Nunes, you might be sure that the FBI is playing fast and loose with
all FISA warrants, and so no one is safe from politically motivated
investigations, especially if you are working for the Trump campaign, like
Carter Page.

See? Two different people might have very different opinions about whether a
particular source should be trusted or not.

~~~
upofadown
My point is that you don't have to trust any particular source of randomness.
NSA can backdoor RDRAND all they want. As long as there is a single legit
source of randomness XORed in then the NSA is just wasting their time.

So having options to remove some sources is pointless and can only make things
worse, never better.
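The XOR argument can be shown in a few lines of Python (a sketch, not kernel code):

```python
import secrets

def xor_combine(*seeds: bytes) -> bytes:
    """XOR equal-length seeds; the output is uniformly random if any one
    input is uniform and independent of the others."""
    out = bytes(len(seeds[0]))
    for s in seeds:
        assert len(s) == len(out), "seeds must be equal length"
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

trusted = secrets.token_bytes(32)   # one genuinely random source
backdoored = b"\xff" * 32           # e.g. a stuck RDRAND on Ryzen 3000
combined = xor_combine(trusted, backdoored)
```

The caveat, and one reason real kernels hash rather than plain-XOR, is the independence requirement: a malicious source that can observe the other inputs before emitting its own contribution could cancel them out.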

~~~
tytso
Sure, and that's why we still have the entropy pool in Linux. It's where we do
the mixing.

My apologies if I didn't understand the point you were making.

And yes, I would love to see us adding _more_ entropy sources in the future.
The problem is I don't have a lot of time, and a lot of other projects to work
on, but I've been trying to recruit people to add support to pass entropy from
UEFI into the kernel (which requires changes to Grub, NERF, Coreboot, etc.),
being able to pass entropy from one kernel to the next when using kexec, etc.
I can't do it all myself; this is something that requires a lot of people to
make things better.

------
tptacek
This is Thomas Pornin, by the way, from whom I and many others have learned a
lot, and whose original Stack Exchange answer† about random vs. urandom set me
and presumably Thomas Huhn (there's something about urandom and Thomases)
tilting against this Linux windmill with our respective urandom advocacy
pages, both of which pop up on HN every couple of months.

This article is authoritative, covers some recent updates like Jason
Donenfeld's work on the Linux RNG, and is well worth a read; hopefully it will
show up on HN at least as often as mine and Thomas Huhn's.

[https://security.stackexchange.com/questions/3936/is-a-
rand-...](https://security.stackexchange.com/questions/3936/is-a-rand-from-
dev-urandom-secure-for-a-login-key)

~~~
ajross
That, and the article, seem like sort of a tangent, though. In practice,
/dev/urandom advocacy has won for almost all apps. That's what everything
uses, and it meets the requirements desired by you and Pornin AFAICT. People
who want crypto-backed non-blocking kernel RNGs have a very high quality one
in linux. Most of this is just an argument about defaults, whether
"/dev/random" should be non-blocking by default and whether it should be
renamed to something else.

The more interesting question, and one I think is very poorly served by this
article, is whether or not Linux's entropy pool engine is a useful feature or
not (and tangentially whether or not we should trust RDRAND). And... I mean,
in a world where we have known backdoors in both hardware and crypto
algorithms, it seems like having a system that defaults to not trusting them
might not be the worst thing...

Basically, this argument seems really fragile to me. All it takes is one paper
showing a likely backdoor in RDRAND and everyone is going to be laughing like
crazy at what-in-hindsight-would-seem the ridiculous naivete in this article.

~~~
clarry
> In practice, /dev/urandom advocacy has won for almost all apps.

Which is a shame, because the thing that should've won is arc4random_buf or
something similar that always works (empty chroot with no /dev/ under it,
isolated user with no read access to anything on the filesystem, and no free
file descriptors? still works!) and can't give an error.

~~~
ajross
For clarity: I'm using "/dev/urandom" as shorthand for "the non-blocking
interface to the kernel entropy pool" and not the literal device node
interface. As others have pointed out there is a syscall to get that for you
if you can't use the device.

~~~
tytso
To be completely correct: On modern kernels, /dev/urandom is not an interface
to the kernel entropy pool. It is rather a ChaCha20 Cryptographic Random
Number Generator which is periodically seeded from the kernel entropy pool.

getrandom(2) uses the same ChaCha20-based CRNG. The main advantage is that it
doesn't require that you be able to open a file descriptor. (There were some
obscure attacks on some userspace where the attacker would exhaust the number
of file descriptors, and the sloppy userspace code wasn't checking error
returns, and so it wasn't actually reading from /dev/urandom.) Getrandom(2) is
supposed to be foolproof --- which is hard, given how ingenious fools can be,
but at least it's harder to screw up using getrandom(2). It also works in the
completely empty chroot scenario referenced in the grandparent of this post.
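From Python, getrandom(2) is exposed directly as `os.getrandom` (Linux, Python 3.6+), so the file-descriptor failure mode disappears entirely:

```python
import os

# getrandom(2) via the os module: no file descriptor and no /dev
# entries needed, so it works even in an empty chroot.
if hasattr(os, "getrandom"):
    buf = os.getrandom(32)   # blocks only until the CRNG is initialized
else:
    buf = os.urandom(32)     # portable fallback (same CSPRNG on Linux)
```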

------
sleevip
A few problems with the reasoning in this article.

It assumes that either "cryptography works" (and you only need to seed the
CSRNG once) or "cryptography does not work" (and the entropy pool is
necessary). In actuality, "cryptography" covers a wide range of algorithms,
some of which seem to work, and some of which (like RC4) are known not to
work. It's possible that (say) ECDSA works but Linux's CSRNG isn't as good as
we think it is.

The article also makes fun of the kernel for not trusting RDRAND. Well, here's
a sequence of random numbers from RDRAND on the AMD Ryzen 3000: 0xffffffff,
0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff. Do you see a pattern? Was it
indeed a foolish choice not to trust RDRAND?

~~~
saagarjha
> Was it indeed a foolish choice not to trust RDRAND?

From that output? No, not at all. It's not backdoored; it's the same as any
other microcode bug. What would be more insidious is if it gave back
0xD82C07CD, 0x6BAA9455, 0x82E2E662, 0x7A024204: seemingly random but secretly
predictable. (Hint: Python, random.seed(0))

~~~
gus_massa
You can always check with Random Sanity
[https://www.randomsanity.org/](https://www.randomsanity.org/) In this case
[https://rest.randomsanity.org/v1/q/D82C07CD6BAA945582E2E6627...](https://rest.randomsanity.org/v1/q/D82C07CD6BAA945582E2E6627A024204)
It is not 100% sure: it was "true" until a few seconds ago.

~~~
Tomte
I strongly advise against trusting statistical entropy tests in the
application layer.

There are reasons to do statistical tests (for example, the USB key Ted
mentioned needs to make sure the circuit is still generally functioning), but
when you're in the application, there has already been so much de-biasing and
whitening going on that you will never really catch a problem. You will only
obsess over a random (ha!) fluke where a test sporadically fails.
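One way to see why no black-box statistical test can catch a backdoor: a keyed deterministic stream passes them perfectly. A Python sketch, with SHA-256 in counter mode standing in for a hypothetical backdoored generator:

```python
import hashlib

SECRET_KEY = b"known-only-to-the-backdoor-author"  # hypothetical

def backdoored_rng(n: int) -> bytes:
    """SHA-256 in counter mode under a fixed key: statistically clean
    output that is trivially reproducible by whoever holds the key."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(SECRET_KEY + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

sample = backdoored_rng(100_000)
ones = sum(bin(b).count("1") for b in sample)
fraction = ones / (len(sample) * 8)   # ~0.5: passes a simple monobit test
```

The stream sails through any bias test you run on it, yet anyone holding `SECRET_KEY` can regenerate every byte.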

~~~
gus_massa
It was a semi joke. It's an interesting project, but note that:

It is not so popular, so they didn't have this string before. (Perhaps they
could preload some common cases, but perhaps that's too much memory.)

Once you send your secret random stuff to a random site on the internet, it's
no longer secret. You can send some samples as a test and discard them, and
use another part of the random stream, but from a security point of view I'm
not sure it is a good idea.

I agree that flukes are a problem if you call something like this too often.
I don't know the implementation details well enough to say how strict it is.

------
ncmncm
I don't understand why nobody ever mentions CCD noise as a source of quality
randomness. Billions of devices, now, including all phones and most laptops,
have an extremely productive seed source with many millions of pixels, each
pixel supplying at least a quarter bit of entropy. It works equally well with
a sticker over the lens.

Microphones are also good sources, down in the low bits, because true silence
does not exist. Like CCDs, even if you don't think much of the entropy in any
one sample, there are one hell of a lot of samples to draw on.

If you have an accelerometer, that's another six sources. If the device is on
the table, jiggling the vibration motor provides enough activity.

The RDRAND behavior in that shipment of Zen2 chips suggests that the hardware
RNG can be turned off by the System Management backdoor device, and that a
code path to do it is active and (a little bit too) available. People messing
with CCD pixel readings seems less likely.

~~~
chrisseaton
> I don't understand why nobody ever mentions...

Most of these things aren't present in the context of a server, let alone a
virtualised one.

Nobody's there to literally physically jiggle your virtualised server while it
boots.

~~~
ncmncm
A virtualized server has no reliable source of randomness at all, besides
whatever the host provides. They are out of the picture entirely.

~~~
chrisseaton
> They are out of the picture entirely.

They’re a central topic of this article and where the problem we're discussing
is usually felt.

------
mikedilger
This was a good article. Bits of entropy are not lost as the CSPRNG spits out
bits 1:1. Nobody thinks that a 64 bit key is compromised after a CSPRNG
outputs 64 bits.

However RDRAND should not be trusted because we know for certain that
motherboard BIOSes have the capability to force the output of RDRAND to -1
every time it is called. Hardware vendors have demonstrated this to us as a
"bug" multiple times, but I consider it a "wink". They have fixed it with BIOS
patches. The fact that this is patchable at all indicates RDRAND is not
secure. The BIOS can control this behavior; therefore it is IMHO very highly
suspect and not worthy of trust. There is no legitimate reason why a ring
oscillator circuit with its whitening/etc. should ever output a predictable
sequence of any kind.

~~~
glandium
Don't BIOS patches contain microcode updates?

~~~
mikedilger
Yes, an earlier version of my comment made a distinction between BIOS vs
microcode, but I realized it wasn't relevant and edited it. Apparently my edit
and your comment experienced a race condition.

------
ikeboy
>In a sense, whether a given mechanism provides entropy is a matter of “this
or that expert said that it does”; impossibility of accurate simulation comes
from physics, specifically quantum mechanics, so the entropy pool estimator is
based on a fair amount of trust in physicists such as Feynman or Bohr (but not
Einstein, for that matter). But the entropy depletion is an assertion that
cryptographers such as Shamir cannot be equally trusted.

Well yeah, we have much more reason to believe our physical laws represent
something about reality than to think our cryptographic primitives are
unbreakable. Where's the analogue of Bell's theorem for cryptography?

------
arkadiyt
FYI Andrew Ayer said this change didn't go through:
[https://twitter.com/__agwa/status/1208066598407483393](https://twitter.com/__agwa/status/1208066598407483393)

It's not clear to me what the current state is

~~~
jlgaddis
That would explain why I wasn't able to find "CONFIG_RANDOM_BLOCK" in my
kernel configs.

On a related note, I've been running jitterentropy-rngd on most of my machines
for a long while now, including the ones that have internal and/or external
HWRNGs (even though it's probably not really doing much of anything on most of
them).

I'm far from an expert on any of this but jitterentropy-rngd seems much, much
better than haveged (although, FWIW, I think pretty much _anything_ is better
than haveged). I was kinda surprised that Debian decided to use haveged in the
installer, although (to their credit) they did say, in effect, "we need
something _now_ , this works for now, we can find something better later".

------
breser
I find it strange that the article says that Java uses /dev/urandom by
default, when I've had to configure Java to use /dev/./urandom for years to
deal with the fact that it defaults to /dev/random --- and setting it to
/dev/urandom doesn't work because the Java code treats the two as the same.

~~~
0xdeadb00f
Slightly tangential, but why doesn't Linux symlink /dev/random to /dev/urandom
like OpenBSD does?

Wouldn't that stop issues like these (what's the benefit in keeping
/dev/random the same)?

~~~
Tomte
Extreme backwards compatibility. An application might depend on the blocking
behaviour (be it sensible or not). And since it's a change visible from user
space, it won't be done (except in extreme circumstances).

------
wahern
FWIW, Virtio RNG is natively supported by Linux, NetBSD, and OpenBSD as
guests, and hypervisors like Linux KVM/QEMU and OpenBSD VMM/VMD. I use libvirt
and virt-manager on Linux which makes it easy to add a Virtio RNG device, but
it _should_ be the default, as it is with OpenBSD VMM/VMD.

~~~
tedunangst
Ah, I just found --rng in another read of the virt-install manual. This works
a lot better than aborting the install and editing the xml and restarting.

------
nullc
There is little more disappointing than seeing a position you embrace argued
with weak or misleading arguments.

The entropy estimation used in Linux is indeed essentially pointless, but a
number of the arguments used in this page are not great.

The main reasons I'd give are these: the applications we use randomness for
have only computational security at best, so the effort to provide information
theoretic security is overkill. The actual mechanisms that have been used in
Linux, both for estimating random inputs and maintaining the pool, are
entirely ad hoc, and there is no particular reason to believe they'd actually
provide information theoretic randomness even if you did have some fringe
usage where it could be a theoretical benefit. And finally, weird behavior
like blocking RNGs, added for purely conjectural benefits, results in real
vulnerabilities.

Not much more is needed to be said: It's a common story, security theater
resulting in insecurity.

> so the entropy pool estimator is based on a fair amount of trust in
> physicists such as Feynman or Bohr (but not Einstein, for that matter). But
> the entropy depletion is an assertion that cryptographers such as Shamir
> cannot be equally trusted

Our expectations about the security of cryptographic constructs are largely
unlike our expectations of physical systems.

Thomas is making a cheap and misleading argument here, and I very much expect
that Shamir would disagree with it... (heck, one of the famous 'Shamir' named
things is Shamir secret sharing, which comes straight from the realm of
information theoretic security.)

A case can be made for cryptography with information theoretic security... but
what the kernel provided in the past was at best a cargo-cult imitation of it,
taking many of its costs without providing its assurances.

> The Linux kernel uses rdrand. It does not trust rdrand, because NSA (I’m not
> exaggerating! The kernel source code explicitly calls out the NSA),

Imagine this in a world where the kernel always blindly trusted rdrand:
[https://arstechnica.com/gadgets/2019/10/how-a-months-old-
amd...](https://arstechnica.com/gadgets/2019/10/how-a-months-old-amd-
microcode-bug-destroyed-my-weekend/)

CPU makers can't manage to achieve really strong reliability against obvious
_accidental_ faults in rdrand. Considering this, some caution against backdoors
in a totally black box magic source of random numbers seems eminently prudent.
NSA is just an existence proof of intelligence operations with the
intellectual and financial capabilities that would make backdooring a CPU
possible, if not likely.

------
rini17
Relying on seed-based algorithms makes the seed very attractive for leaking
via some side channel; how does a CSRNG prevent that?

~~~
wahern
I'm not sure what the seed has to do with anything. Every CSPRNG has a state
which can be leaked, and while nominally larger than a typical seed, if a
side-channel leaks 32 bytes then it likely can leak 512 bytes, 1Kb, etc. (The
way the article distinguishes the "RNG" entropy pool from the "CSRNG" seems
rather confusing and maybe a reflection of Linux' convoluted framework. Better
to think of the whole framework as a giant, overwrought CSPRNG function.)

A proper CSPRNG offers forward security, which means if the state is leaked in
the future then past outputs are still secure. This is usually accomplished by
cycling the PRNG after output.
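A toy illustration of that cycling, in Python (a didactic ratchet, not any kernel's actual construction): after each output the state is advanced through a one-way function, so a state leaked later can't be wound back to recover earlier outputs.

```python
import hashlib

class ForwardSecurePRG:
    """Each call derives an output and irreversibly advances the state;
    learning the state afterwards reveals nothing about past outputs,
    since that would require inverting SHA-256."""

    def __init__(self, seed: bytes):
        self.state = seed

    def output(self) -> bytes:
        # Domain-separated derivations for the output and the next state.
        out = hashlib.sha256(b"out" + self.state).digest()
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out

prg = ForwardSecurePRG(b"initial-seed")
first, second = prg.output(), prg.output()
```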

There's also backward security (perhaps what you were getting at), which means
future outputs are protected from past leaks. Of course, you need new
randomness to achieve backward security. AFAIK, typical kernel CSPRNGs mix in
new entropy on a regular basis (notwithstanding the implication in the article
that on Linux this can be unnecessarily delayed). However, that's not
necessarily a better thing. Daniel Bernstein has argued that _maybe_ ,
depending on your threat model, you're better off with a purely deterministic
CSPRNG after initial seeding:
[https://blog.cr.yp.to/20140205-entropy.html](https://blog.cr.yp.to/20140205-entropy.html)

------
xmmrm
For comparison, here‘s a sane interface: [https://fuchsia.dev/fuchsia-
src/reference/syscalls/cprng_dra...](https://fuchsia.dev/fuchsia-
src/reference/syscalls/cprng_draw.md)

~~~
saagarjha
How is this different from read(open("/dev/random", O_RDONLY), buffer,
sizeof(buffer))?

~~~
dchest
\- It won't fail if there's no /dev/random device mounted (e.g. in chroot)

\- It won't fail if there are no file descriptors available

\- No error handling needed: the call always succeeds and random bytes are
always returned
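The contrast can be sketched in Python (the device-node path below is a hypothetical helper showing the failure modes; the syscall path has none of them):

```python
import os

def read_dev_urandom(n: int) -> bytes:
    """The traditional device-node pattern: every step here can fail
    (no /dev in a chroot, fd exhaustion, short reads), and sloppy
    callers historically ignored those failures."""
    fd = os.open("/dev/urandom", os.O_RDONLY)
    try:
        buf = b""
        while len(buf) < n:
            chunk = os.read(fd, n - len(buf))  # short reads are legal
            if not chunk:
                raise OSError("unexpected EOF on /dev/urandom")
            buf += chunk
        return buf
    finally:
        os.close(fd)

# The syscall-style interfaces avoid the fd and /dev dependencies:
buf = os.getrandom(16) if hasattr(os, "getrandom") else read_dev_urandom(16)
```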

~~~
ben_bai
So, like the standard arc4random_buf(3), which everybody has except for Linux.

Don't worry, the name is historic: it no longer uses the arc4 cipher, at
least on OpenBSD. They switched to ChaCha20 as the stream cipher.

~~~
dchest
The interface is the same, but arc4random_buf is a user-space CSPRNG.

The analog of zx_cprng_draw is getentropy(2): [http://man.openbsd.org/cgi-
bin/man.cgi/OpenBSD-current/man2/...](http://man.openbsd.org/cgi-
bin/man.cgi/OpenBSD-current/man2/getentropy.2) except zx_cprng_draw kills the
process, while getentropy returns an error.

------
jlgaddis
> _Or not. Linux 5.3 will turn back getrandom() into /dev/urandom with its
> never-blocking behavior, ..._

This can be disabled by passing the

    random.getrandom_block=on

parameter to the kernel.

\---

 _EDIT:_ It appears that this functionality may have not actually made it into
the kernel (see other comments) so this parameter would have no effect.

------
badrabbit
> The whole premise of entropy depletion is that cryptography does not work
> (the CSRNG does not prevent the leak),...

Nice post, but I had to stop there. I don't get why some cryptologists don't
grasp redundancy. In systems design this is a very sane assumption; in other
words, the premise is not that "cryptography does not work" but rather that
"one piece of a cryptographic system may have been broken or weakened".

You don't tell a systems engineer "the whole premise of HA is that servers
fail", right? Are elementary components of a crypto system expected to be
unbreakable or uncompromisable? This makes me want to ask the writer of this
post: isn't the security of a CSRNG estimated to be infeasible to crack based
on existing computational resources (brute force) and existing research into
breaking or weakening the RNG? If so, how is it improbable enough for a bug
or backdoor to exist in the CSRNG, or for some algorithmic breakthrough
(e.g., with PQC)?

In my opinion, it is entirely possible (as history has proven) for elementary
components of a crypto system to be weakened, and a general purpose OS should
provide some level of redundancy where possible. Most users need not be
concerned about such rare attacks (except maybe they should, given how
billions of devices depend on Linux's RNG and there are many extremely
well-resourced attackers that would be interested in dragnet attacks), but
you have to also keep in mind there are people of high enough value that they
will be targeted individually. And I can envision not just the NSA and other
spy agencies but plenty of private organizations and exploit brokers that
would capitalize big time on the discovery of CSRNG flaws that are not
reported for a bug fix or publicized.

I think urandom is fine for most cases, but in rare situations like PGP keys,
TLS certs, disk encryption keys, etc., the paranoia of /dev/random might come
in handy, at least for a handful of people somewhere.

------
RcouF1uZ4gsC
Just use rdrand. If your CPU is compromised at the level of individual
hardware instructions, you are likely toast anyway, and all your extra steps
are just false reassurance.

------
maxk42
This article seems to continually conflate "sources of randomness" with pRNGs.
The linux entropy pool is not merely some estimation of randomness by the
kernel -- it's a collection of bits accumulated from a variety of sources (
_including_ rdrand, btw) -- which are munged in such a way that any single bit
of input causes a cascade in the states of _all_ output bits -- _then_ fed
into a pRNG. Not trusting rdrand (or any other single source, including input
devices, internal sensors, IRQ timings, internal clock, etc.) is a _feature_
and does not somehow make the degree of entropy weaker. Having a broad
collection of entropy sources feeding into your entropy pool simultaneously
helps make the random device more secure by ensuring that if an attacker gains
access to any number of these inputs, they still cannot predict outputs as
long as they lack even a single one.
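That cascade is the avalanche property of the mixing function. It's easy to demonstrate in Python, with SHA-256 standing in for the pool's mix and a made-up input string:

```python
import hashlib

def hamming(a: bytes, b: bytes) -> int:
    """Count the bit positions at which two equal-length strings differ."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

pool_input = bytearray(b"interrupt-timings-plus-rdrand-output")
d1 = hashlib.sha256(pool_input).digest()
pool_input[0] ^= 0x01                 # flip a single input bit
d2 = hashlib.sha256(pool_input).digest()
flipped = hamming(d1, d2)             # on average ~128 of 256 bits change
```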

This is exactly the article I would have published if I were a state attacker
trying to make it easier for myself to create side-channel vulnerabilities in
applications relying on randomness.

If tpornin's primary concern is blocking IO (and not unpredictable,
cryptographically-secure RNG) then his or her advice makes sense. However, if
secure randomness is a concern, then the advice should simply be to encourage
application developers to use the _minimum_ amount of entropy needed from the
entropy pool. Many, if not most applications are guilty of consuming far more
randomness than they truly require.

