
Fixing Getrandom() - Tomte
https://lwn.net/SubscriberLink/800509/c46eba62a7bda958/
======
brynet
It's recommended that people check out Theo de Raadt's Hackfest 2014
presentation on OpenBSD's arc4random(3).

[https://www.openbsd.org/papers/hackfest2014-arc4random](https://www.openbsd.org/papers/hackfest2014-arc4random)

I also suggest reading from page 19 on, which covers OpenBSD's random(4)
subsystem and the getentropy(2) syscall, which suffer from none of the issues
that are plaguing Linux's getrandom() today.

[https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00...](https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00019.html)

[https://man.openbsd.org/man4/random.4](https://man.openbsd.org/man4/random.4)
[https://man.openbsd.org/arc4random.3](https://man.openbsd.org/arc4random.3)

~~~
beefhash
There may be some systems where storing entropy in a file across reboots isn't
an option, e.g. diskless situations or non-writable filesystems, though those
should be rare cases. Of course, this would actually require cooperation from
userspace, and given that Linux is just a kernel and userland is largely free
to do what it wants, it's not as easy as it sounds.

Incidentally, I really wish we could get the arc4random(3) family on GNU/Linux
already. It's the only notable *NIX platform that still doesn't provide it;
illumos has it, every BSD has it, macOS has it. Given the difficulties with
entropy as noted above, however, the whole "non-failing cryptographically
secure PRNG" model just won't work with Linux.

~~~
bonzini
systemd does store the entropy in a file, but Lennart says this in the
comments of the article: "we can only credit a random seed read from disk when
we can also update it on disk, so that it is never reused. This means /var
needs to be writable, which is really later during boot, long after we already
needed entropy".

How does OpenBSD deal with this issue?

~~~
throw0101a
From _rc(8)_:

    
    
      # Push the old seed into the kernel, create a future seed, and create a
      # seed file for the boot-loader.
      random_seed() {
            dd if=/var/db/host.random of=/dev/random bs=65536 count=1 status=none
            chmod 600 /var/db/host.random
            dd if=/dev/random of=/var/db/host.random bs=65536 count=1 status=none
            dd if=/dev/random of=/etc/random.seed bs=512 count=1 status=none
            chmod 600 /etc/random.seed
      }
    

* [https://cvsweb.openbsd.org/src/etc/rc?rev=1.537](https://cvsweb.openbsd.org/src/etc/rc?rev=1.537)

FreeBSD:

* [https://svnweb.freebsd.org/base/head/libexec/save-entropy/](https://svnweb.freebsd.org/base/head/libexec/save-entropy/)

* [https://www.freebsd.org/cgi/man.cgi?loader.conf(5)](https://www.freebsd.org/cgi/man.cgi?loader.conf\(5\)) (see "entropy_cache_load")

OpenBSD's boot loader also injects entropy into the kernel.

~~~
devnulloverflow
Which is to say, /var/ is writable when they do it.

This business of /var/ not being writable early enough probably has to do with
the wonderful complexities of modern Linux, and is just not a problem that
OpenBSD has.

~~~
bonzini
What if a filesystem error forces /var (or whichever mount point hosts the
entropy seed) to remain read-only, and the seed is never replaced? It seems
that, unlike systemd, the BSDs are fine with crediting entropy even if it has
already been used in a previous boot. That's wrong, though.

~~~
throw0101a
> _It seems that, unlike systemd, the BSDs are fine with crediting entropy
> even if it has already been used in a previous boot._

FreeBSD runs a regular cron job so that there are multiple files. If /var goes
R-O, you still have a bunch of files from when it wasn't and they are not the
same as on initial boot (assuming that (a) /var was mounted R-W at some point,
and (b) cron managed to run as well).

Also, on shutdown there is an attempt to write 4096B to both /entropy and
/boot/entropy:

* [https://svnweb.freebsd.org/base/head/libexec/rc/rc.d/random?...](https://svnweb.freebsd.org/base/head/libexec/rc/rc.d/random?revision=348122&view=markup)

Further, all of these various seed files are not the only sources of entropy
(timers, RDRAND, etc).

~~~
bonzini
> (assuming that (a) /var was mounted R-W at some point

Not a good assumption if the initial r/w mount is the one that fails...

~~~
throw0101a
If the initial /var (or /) is broken, then one has bigger problems and
probably has to go in manually to fix things.

But for >99% of cases where the system comes up cleanly, and runs cleanly for
a period of time, you'll have a bunch of seed files ready to go for the next
boot. This configuration optimizes for the common case.

------
cpeterso
Linus merged a jitter entropy fallback that calls the kernel's schedule()
function in a loop:

[https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3f2dc2798b81531fd93a3b9b7c39da47ec689e55)
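
The gist of the approach, as a rough userspace sketch (illustrative only; the
kernel version, in drivers/char/random.c, mixes its timing samples into the
input pool rather than returning them):

      #include <sched.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <time.h>

      /* Repeatedly sample a high-resolution clock and fold the hard-to-predict
       * low bits into an accumulator. Scheduling, cache, and interrupt effects
       * perturb the timing between samples. A real implementation would run
       * this through a cryptographic mixer and credit entropy conservatively. */
      static uint64_t gather_jitter(int rounds)
      {
          uint64_t acc = 0;
          struct timespec ts;

          for (int i = 0; i < rounds; i++) {
              clock_gettime(CLOCK_MONOTONIC, &ts);
              acc = (acc << 7) | (acc >> 57);  /* rotate the accumulator... */
              acc ^= (uint64_t)ts.tv_nsec;     /* ...and mix in the sample */
              sched_yield();                   /* stand-in for schedule() */
          }
          return acc;
      }

      int main(void)
      {
          printf("%016llx\n", (unsigned long long)gather_jitter(1024));
          return 0;
      }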

~~~
kzrdude
Classic maintainer stuff, love it. Getting adequate fixes in place when nobody
else has stepped up to submit them, so that the next release can have one less
bug.

------
cryptonector
What a mess. The problem here is systemd and/or things like GNOME wanting
high-quality entropy too soon, often without actually needing it. systemd
shouldn't need high-quality entropy for itself, and most services it starts
shouldn't need any either. Eventually some service is bound to need it (e.g.,
for TLS), but each service should make its own decisions about early-in-boot
entropy. Then, instead of a deadly embrace between the kernel and systemd that
stalls the boot, you'd have some services not quite operational right away --
still a big problem, but a lesser one.

Still, if you're booting a system that will be doing one thing, and that one
thing needs entropy, then you have a problem. And the only practical solution
then is to accept using things like RDRAND and a previous shutdown's saved
seed to seed the system's CSPRNG, then add better entropy slowly as you can
get it.

EDIT: I agree with tytso: if you trust the CPU enough to run on it, then you
should trust its RDRAND or equivalent (if it has it).

~~~
zaarn
RPis don't have RDRAND or an equivalent instruction; they especially suffer
from increasingly fast boots, where waiting for the entropy pool is an actual
concern.

Of course, not all applications need that kind of cryptographic entropy, but
during early boot it's pretty much the best randomness you can get. (Seeding
from the date and time isn't available on the RPi, which has no battery-backed
RTC, so until you can bring up the network and contact an NTP server, you're
stuck with whatever getrandom() returns.)

~~~
cryptonector
RPi 2s have a BCM (binary-only) driver for entropy generation.

~~~
zaarn
I have an RPi1 that suffers from this problem and will for a long while still.

------
emilfihlman
This is strictly bad. getrandom() was fine as it was; it's not a problem with
the kernel if people don't read its explicitly documented man page. This is
now changing behaviour, breaking the god damn userspace, which is exactly 100%
what the kernel shouldn't be doing.

~~~
webkike
Indeed. I don't see how the solution to this problem isn't just to patch gdm.

~~~
ac29
Because the kernel has a policy of not introducing changes that break
userspace, even if userspace is doing something weird.

~~~
loeg
Changing the semantics of getrandom(flags=0) breaks a lot of userspace ABI.

------
tzs
> systemd reads a random seed off disk just fine for you, no need to write any
> script for that. Problem with the approach is that it's waaaay too late: we
> can only credit a random seed read from disk when we can also update it on
> disk, so that it is never reused. This means /var needs to be writable,
> which is really later during boot, long after we already needed entropy, and
> long after the initrd

I understand in general why seed reuse is bad. If an attacker can get access
to two things that used the same seed, that attacker can often learn things
that the randomness was supposed to make unlearnable.

In this particular case, though, I wonder if that is actually a problem. If
the seed is used before you have writable storage, and then the system is
rebooted without writable storage ever becoming available (and so before the
seed could be updated), and then the system reuses that seed on the next boot
-- the only way an attacker can get access to two things that used the same
seed is if something that used it on the first boot has persisted somewhere.

Since there was no local writable storage the first time, this can only happen
if the results of using the seed were communicated off the system, via
networking or via a terminal, to someplace that did have working writable
storage.

Thus, it would seem that if you block networking with other machines and
terminal access until /var becomes writable and the seed is updated, there is
no problem with seed reuse.

~~~
stouset
Blocking doesn’t buy you anything because the seed could have already been
used to generate data in-memory, which can then later be sent over a network
or persisted to permanent storage.

------
boring_twenties
> we need to get over people's distrust of Intel and RDRAND.

Am I misreading this or is Ted Ts'o really suggesting that we should all just
stop worrying and love the secret and completely unauditable RNG offered by
the same company that has literally backdoored every CPU they've sold in the
past 12 years?

~~~
tytso
If you don't trust Intel, then don't use Intel CPUs at all. Using Intel CPUs
and simultaneously saying, "but we can't trust RDRAND because it could be
backdoored", is completely insane.

Intel hid an entire x86 core running MINIX --- with security holes --- and
told no one. Between the firmware code which runs in System Management Mode
and in UEFI, which persists after the OS is booted and can take over the
system and read and write arbitrary memory locations --- if you fear a
backdoor in the CPU, RDRAND is not the only place where an attacker can screw
you over. The bottom line is that the entire CPU cannot be audited. So trust
it. Or don't trust it, and use another CPU architecture. Or go back to using
pen and paper, and don't use any computers or cell phones at all.

We have to trust the bootloader to verify the digital signature on the kernel,
so we might as well trust the bootloader to get a secure random seed. But from
where? It could call UEFI or try to use the RNG from the TPM (if available). But
now you have to trust Intel, and/or the motherboard manufacturer, and/or the
TPM. The bootloader could read from a seed file from the HDD/SSD. But we know
that nation-state intelligence agencies (like the NSA) have the capability of
implanting malware into HDD firmware (which is also unauditable), and we also
know that we can't trust most consumer grade manufacturers or IOT devices to
correctly insert device-specific secure entropy into the seed file at
manufacturing time. Otherwise, all of the seed files will likely have the same
value, at which point when the device is generating its long-term private key,
immediately after it is first plugged in, the key will very likely be weak
(see the "Mining your p's and q's"[1] paper).

The bottom line is that unless you propose to personally wire up your CPU from
transistors, and then create your own compiler from assembly language (see the
"Reflections on Trusting Trust"[2] paper by Ken Thompson about what can be
hidden inside an untrustworthy compiler), and then personally audit all of the
open source code you propose to compile using that compiler and use on your
system, you have to trust _someone_.

What makes this political, and difficult to address, is everyone has different
opinions on what they are willing to trust and not trust. But some
combinations, such as "we can't trust Intel because their firmware can't be
audited", and "but we insist on using Intel CPU's", really don't make much
sense.

[1] [https://factorable.net](https://factorable.net)

[2]
[https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p7...](https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf)

~~~
boring_twenties
RDRAND might not be the only place where an attacker can screw me over, but
it's a much easier exploit than many others, and one that is by design
impossible to detect.

It would probably not be possible to get away with an XOR instruction that
doesn't behave as advertised. There is simply no way to determine if RDRAND is
behaving as advertised or not.

> you have to trust someone

Well, yes. Intel specifically has, to my mind, demonstrated beyond a shadow of
a doubt that they are not to be trusted (that "entire x86 core running MINIX
--- with security holes" thing you just mentioned, though I thought it was ARM). So
if the only difference is moving my root of trust from Intel to a random,
unknown party that is _not_ Intel, that is already something of an
improvement.

~~~
gpderetta
How would you measure the jitter from your disk? Certainly you can't trust
rdtsc either.

------
kelnos
This feels too obvious to work, but:

The kernel knows it's early in the boot process, and it knows when there isn't
enough entropy and it will have to block getrandom() calls. We also know that
what caused this issue was some FS efficiency improvements that result in
fewer disk accesses during early boot, which results in fewer interrupts,
which results in less entropy.

So... when we get into this situation, why doesn't the kernel just start
issuing some disk accesses (perhaps with weakly-random offsets and sizes)
until enough interrupts are generated to fill the entropy pool to a safe
level?

~~~
loeg
Not all Linux systems have disks. The jitter entropy approach described
elsewhere is more comprehensive and less dependent on hardware components
outside of the CPU.

~~~
kelnos
Fair; it does make sense to re-evaluate the whole thing and find the most
generic of solutions, but the issue that triggered this discussion was a sort
of regression on systems that _do_ have disks.

(Your point also raises the question: on systems that _don't_ have disks, how
are requests for early-boot entropy handled?)

~~~
loeg
> Fair; it does make sense to re-evaluate the whole thing and find the most
> generic of solutions, but the issue that triggered this discussion was a
> sort of regression on systems that do have disks.

Right. On those kinds of systems, persisting some saved state between boots is
your best first option. (As other commenters point out, you really don't need
very much! 256 bits or 32 bytes is basically adequate.) Any additional entropy
you get from device interrupts or machine-dependent random sources like RDRAND
is good to mix in.

(I don't think the adversarial RDRAND threat model is a reasonable one, and
that has been adequately addressed elsewhere in this thread by tytso, tptacek,
and tedunangst. Any adversarial CPU that can defeat cryptographic digest
whitening of entropy sources is so sophisticated/privileged it already fully
owns your operating system; game over.)

Linux systems already do this, kind of! But Linux has taken the stance that
entropy you can't delete from the boot media _at the time it is fed into the
CSPRNG as entropy_ doesn't count ("credit") as far as initial seeding goes, and
that decision is what moots its utility for early CSPRNG availability. It's
not an unreasonably paranoid take, but not an especially practical one (IMO).
I suspect the net effect on security, holistically, is negative.

> (Your point also raises the question: on systems that don't have disks, how
> are requests for early-boot entropy handled?)

Depends which subsystem you ask and how you ask it, right? And whether or not
a jitter entropy scheme is enabled in the kernel. (I might be misunderstanding
your question, sorry if that's the case.) If you ask /dev/random or
getrandom(~GRND_NONBLOCK), they'll block until the CSPRNG is satisfied that
enough entropy is available. If you ask /dev/urandom, it just spews unseeded and
likely predictable garbage at you. If you ask getrandom(GRND_NONBLOCK), it
returns -1/EAGAIN.
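
(A minimal sketch of that last case; run early enough in boot, this hits the
EAGAIN path, and once the pool is initialized it just returns bytes:)

      #include <errno.h>
      #include <stdio.h>
      #include <sys/random.h>
      #include <sys/types.h>

      int main(void)
      {
          unsigned char buf[16];
          ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);

          if (n < 0 && errno == EAGAIN)
              printf("CSPRNG not yet seeded; would block without GRND_NONBLOCK\n");
          else if (n < 0)
              perror("getrandom");
          else
              printf("got %zd random bytes\n", n);
          return 0;
      }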

~~~
kelnos
> _(I might be misunderstanding your question, sorry if that's the case.)_

Not entirely, but I think my question was just more narrow. Considering the
case that started this off (machines that used to work, now hanging on boot
because of lack of entropy, triggered by too few disk interrupts), I don't get
how this generalizes. If you take that same system, and remove the disk
(replacing it with netboot or whatever), how would it _ever_ boot, at least
not without making some changes to how early entropy is seeded? I guess in the
netboot case you end up with the timing of network interrupts that you can
feed into the entropy pool, but this whole thing just feels _off_ to me.

If a system was relying on a specific, inefficient set of disk accesses to
give it _just_ enough entropy to get through early boot, and making the FS
code more efficient caused it to fail to have enough entropy, I'd suggest that
(likely unbeknownst to the system administrators) this system was _already_
pretty broken. I get why Torvalds decided to bucket this under the "we don't
break userspace" umbrella, and appreciate the care he takes with that sort of
thing, but suggesting that "FS access patterns during early boot" are sort of
a part of the kernel's ABI is... a bit too incredible for me to take
seriously.

------
the8472
So the discussed sources of entropy are

    
    
       - interrupts
       - rdrand
       - timer/instruction jitter
    

Why don't they throw more sources at the problem? LSBs from analog sources
such as audio inputs, or stray network packets (put the NIC in promiscuous
mode). Even uninitialized memory might be worth a few bits (not all systems
zero it on boot).

~~~
bonzini
This is happening so early that audio and network (and /var, where the
persistent seed is stored) aren't up. The kernel doesn't want to be
susceptible to hangs just because userspace didn't bring up all of its entropy
sources early enough.

~~~
nwallin
This is about GDM, which is the last thing that gets started.

Hardware devices are initialized before init is started, so you should still
be able to gather entropy from the various input devices.

Lack of entropy can't cause the kernel to not boot; it's just userspace
(typically services which require encryption) which can have a problem. It
would be silly to start a web server or sshd before the FS is remounted rw.

~~~
megous
Not if you use modules.

~~~
loeg
Can you elaborate on what you mean?

~~~
megous
Init is executed before modules start being loaded.

------
zvrba
I'm surprised nobody in the thread mentioned using a TPM device when
available. My five-year-old motherboard comes with one built in, and probably
all new ones have one. The TPM also provides a limited amount of persistent
storage.

------
tudelo
I used to work on a Java application that used a bunch of libraries, and
specifically I remember one of the libraries generated some identifier using
urandom. I think it was axis2 or something like that. There were unexplainable
random hangs when trying to read from urandom. The Wikipedia article says that
urandom should not block. Can anyone smarter than me who understands this
article explain why urandom hangs?

~~~
dwaite
Java on Linux used to have this bad behavior where it would short-circuit a
configuration pointing at /dev/urandom and instead use its own internal RNG
seeded from /dev/random. It did not create a single global seed for the
internal random number generator, so some systems could overwhelm the entropy
pool.

I've solved this for some systems by changing the Java security properties
file to instead point to /dev/./urandom.
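
For reference, that's the securerandom.source property in the JRE's
java.security file (or the equivalent java.security.egd system property); the
/dev/./urandom spelling exists only to defeat the path special-casing:

      # in $JAVA_HOME/jre/lib/security/java.security
      securerandom.source=file:/dev/./urandom

      # or per invocation, as a JVM flag
      -Djava.security.egd=file:/dev/./urandom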

~~~
tudelo
I did that and even then it still happened :( One of life's great mysteries.

------
mehrdadn
When the issue came up with the OpenSSH hang, I was never convinced that this
should be a problem at all. Perhaps someone can enlighten me after reading my
thoughts in the thread there?
[https://news.ycombinator.com/item?id=20463586](https://news.ycombinator.com/item?id=20463586)

~~~
viraptor
> Sounds like the real problem is some part of the system can't be bothered to
> persist its state.

That's just a special case when you do a reboot. There are many cases where
the system starts and there's either no previous state, or the previous state
is explicitly killed to prevent sharing.

For example, a lot of AWS instances will boot only once in their lifetime, and
will do that from a shared AMI. You can't reuse a shared random seed in that
case, or getting random values from one VM would allow you to predict new
values on another.

So persisting the state will help desktop users, but that's addressing only a
specific version of this problem, not solving it for everyone.

~~~
mehrdadn
> You can't reuse a shared random seed in that case, or getting random values
> from one VM would allow you to predict new values on another.

Why can't AWS seed you with a random number before giving you control? In fact
does it not do that already? If it doesn't, either I'm surprised, or I don't
understand why it's not possible.

~~~
viraptor
AWS does not touch your drives itself (apart from what you configure); it
would be a drama on its own if they tried. And even if they did try, what
about encrypted partitions, filesystems which they don't support, systems
configured for a different seed location, etc.?

For some hypervisors this is solved in a different way:
[https://wiki.qemu.org/Features/VirtIORNG](https://wiki.qemu.org/Features/VirtIORNG)
but as far as I know AWS does not support it yet.
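
(Where it is supported, e.g. under plain QEMU/KVM, the wiring is roughly:)

      qemu-system-x86_64 [usual options] \
          -object rng-random,filename=/dev/urandom,id=rng0 \
          -device virtio-rng-pci,rng=rng0

The guest then sees a virtio RNG device it can use to seed its own pool.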

~~~
mehrdadn
What do you mean they don't touch your drive? Do they just give you a blank
disk with no bootable OS (in which case, doesn't that mean you yourself are
now responsible for seeding when you install the OS)? I know whenever I've
dealt with a cloud VM, it's come with an OS set up by the vendor, but if it's
not, then the OS installer needs to seed the image. So my point is, whoever
sets up the system seems responsible for seeding it. If AWS gives me a VM, I
expect it to come seeded with a unique number, and of course I don't expect
them to touch anything _after_ giving me control. Anything I set up beyond
that is my responsibility to seed properly. I don't understand why this would
be a drama.

~~~
viraptor
They give you a VM with an image you choose. It may be an image you prepare or
one they provide. But no modifications to the file contents since creation.

Drama would come from the same place as "we shouldn't trust rdrand". And that
still avoids the issue that in some configurations the seed can't be written
at all.

~~~
mehrdadn
Yeah, so while creating it, they seed it. If you don't trust that, you install
another OS yourself, with a kernel you trust, and seed it yourself when
installing, from whatever sources of randomness you want. I fail to see why
this wouldn't work.

~~~
viraptor
They can't seed it while creating the AMI. If they did that, everyone would
share the initial seed and I could boot up the published image, run the
generator with that seed, and be able to predict (for example) what ssh key
your server will have.

(Or if you meant creating the VM: as explained before, they can't do that for
encrypted volumes, for example.)

------
kzrdude
Why isn't it standard for hypervisors to provide a random seed to each of
their VMs?

~~~
loeg
Virtio-RNG.

------
loup-vaillant
So, the Linux kernel can't just persist 32 little random bytes? That shouldn't
be hard. Even giving a seed for the first boot (or when booting a VM image)
shouldn't be hard. Heck, we don't even need to collect entropy. Like, _ever_.
Just use ChaCha20 with fast key erasure and be done with it.

If your system is too small to be able to persist 32 bytes, it's probably too
small to perform any amount of meaningful cryptography anyway.

~~~
loeg
Yes, the availability problem hinges on being able to persist a handful of
bytes. The problem isn't the number, which, sure, is small. The problem is the
availability of _any_ persistent media in general.

There are absolutely devices with exclusively RO media with plenty of
processing power to perform meaningful crypto (e.g., embedded devices with
802.11 wifi). So:

> If your system is too small to be able to persist 32 bytes, it's probably
> too small to perform any amount of meaningful cryptography anyway.

This sentence is not based in reality.

~~~
magicalhippo
> There are absolutely devices with exclusively RO media with plenty of
> processing power to perform meaningful crypto (e.g., embedded devices with
> 802.11 wifi).

Don't most microcontrollers have some EEPROM that could be used for this? I'm
assuming even 4 bytes would help a lot. I am pretty sure STM32s with hardware
crypto have EEPROM good for a million write cycles.

~~~
loeg
4 bytes isn't enough, but yes, it's quite possible some SoCs could persist
256-512 bits of entropy in some hardware-specific way. It's hard to abstract
that into a general-purpose operating system, though it could be a good
solution for specific applications.

~~~
loup-vaillant
> _It's hard to abstract that into a general-purpose operating system_

Linux has lots and lots of hardware specific drivers. This shouldn't be any
different.

~~~
loeg
Feel free to start writing device drivers for 1000s of individual SoCs and
figure out a way to solicit testing. I agree hardware-specific drivers are a
solution for the SoCs with RW media of some kind, but it's not as trivial as
you paint it. And (AFAIK) there are still SoCs without RW media. I think
jitterentropy is a better and more easily tested use of developer hours than
implementing 1000s of SoC drivers, but it's open source; work on what you
like.

~~~
loup-vaillant
The onus is not on Linux to support devices. The onus is on the _devices_ to
support Linux. Linux support is a selling point that should be leveraged.

The current way of gathering initial entropy should not be the default; it
should be a _fallback_. If a device has neither persistent storage (32 bytes,
for crying out loud) nor a hardware random generator, it should be treated
like a second-class citizen, with only best-effort random numbers.

Right now, _everyone_ is a second-class citizen. Kind of insane, don't you
think?

> _it's not as trivial as you paint it_

It is, under one (extremely hard to fulfil) condition: _collaboration from
hardware vendors_. It's just 32 persistent bytes. Just bury them in a chip and
memory map the damn thing! The driver can be reduced to an offset.

> _I think jitterentropy is a better and more easily tested use of developer
> hours than implementing 1000s of SoC drivers_

That one? [https://fuchsia.dev/fuchsia-src/zircon/jitterentropy/config-...](https://fuchsia.dev/fuchsia-src/zircon/jitterentropy/config-basic)

It's one of the first links I found, and I hope it is obsolete: they say the
internal state of Jitterentropy is a puny _64-bit_ number. What were they
thinking? We need four times as much! That kind of thing is why people are
tempted to ditch the system's RNG and ask users to wiggle their mouse instead.

~~~
loeg
I'll pass on most of this rant; I just want to correct this misunderstanding:

> That one? [https://fuchsia.dev/fuchsia-src/zircon/jitterentropy/config-...](https://fuchsia.dev/fuchsia-src/zircon/jitterentropy/config-..).

> It's one of the first links I found, and I hope it is obsolete: they say the
> internal state of Jitterentropy is a puny 64-bit number

You skimmed too quickly and came to a judgement based on a misreading or
misunderstanding of the concept. Obviously 64 bits is not sufficient.

The idea of the jitter entropy mechanism is that most modern CPUs have super
high resolution clock or cycle counters available (even embedded boards), and
instruction execution speed has mild variance.

You run a series of trials, each of which produces one or more bits of output
(classically: 1). You run as many as you want, producing an infinite stream of
weak entropy, until you have collected a satisfactory number of output bits.
Trials can be run relatively quickly (many nanoseconds or a handful of
milliseconds per trial).

For each trial, you perform some minimal workload intended to exacerbate CPU
runtime variance (this might be where you saw "64 bits"), and extract some
number of output bits (maybe 1) from one or more of the low bits of the cycle
counter, the nanosecond clock, or something of that nature.

Caveat: jitter is an even weaker entropy source than most non-HWRNG sources
typically consumed by entropy gatherers. Your motto of "just 256 bits" assumes
_total independence and 8 bits per byte of entropy_. That isn't met by most
real-world entropy sources on server systems (expect 4-5 bits/byte), and
especially not by jitter entropy. Empirically, jitter seems to come closer to
1 bit per byte minimum in SP800-90B evaluations of the raw output -- it's a
pretty weak entropy source.

Anyway: feel free to read more about the concept at any of the sources. The
Fuchsia writeup you linked is a good one if you read it more closely; there's
also:

* From the original (2013) proponent, Stephan Mueller: description: [http://www.chronox.de/jent.html](http://www.chronox.de/jent.html) and sources: [https://github.com/smuellerDD/jitterentropy-library](https://github.com/smuellerDD/jitterentropy-library)

* LWN's 2015 writeup of the concept as a Linux entropy source: [https://lwn.net/Articles/642166/](https://lwn.net/Articles/642166/)

* And this is all suddenly topical due to the latest nonsense from Linus (TFA, 2019), who still does not understand CSPRNGs. Anyway, Linus wrote and merged a version of jitter entropy quite recently: [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50ee7529ec4500c88f8664560770a7a1b65db72b) This is a relatively happy outcome in that Linus didn't just break the ABI to be completely insecure by default.

~~~
loup-vaillant
> _Your motto of "just 256 bits" assumes total independence and 8 bits per
> byte of entropy._

Yes of course. This is easily obtained by hashing a much bigger input. The
problem is determining how big the input should be. That is, how much entropy
it _actually_ holds. You can also hash piecemeal (H is whatever you think is
secure):

    
    
      H0 = H(I0)
      H1 = H(I1 || H0)
      H2 = H(I2 || H1)
      H3 = H(I3 || H2)
    

You can stop as soon as you've gathered enough input to be happy about its
entropy. Then just switch to fast key erasure with ChaCha20 (a sketch below)
and stop wasting cycles on entropy gathering.
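
For concreteness, a minimal sketch of fast key erasure, assuming some
chacha20_block() primitive (a hypothetical name here) that expands a 256-bit
key into one 64-byte output block:

      #include <stdint.h>
      #include <string.h>

      /* Assumed primitive (not shown): writes one 64-byte ChaCha20 output
       * block for the given 256-bit key and block counter into out. */
      void chacha20_block(const uint8_t key[32], uint64_t counter,
                          uint8_t out[64]);

      static uint8_t rng_key[32];  /* seeded once from gathered entropy */

      /* Produce 32 bytes of output, then immediately replace (erase) the key
       * so past outputs can't be reconstructed if the state leaks later. */
      static void rng_next(uint8_t out[32])
      {
          uint8_t block[64];

          chacha20_block(rng_key, 0, block);
          memcpy(rng_key, block, 32);      /* first half becomes the next key */
          memcpy(out, block + 32, 32);     /* second half is the output */
          memset(block, 0, sizeof block);  /* wipe the local copy */
      }

Real designs generate a larger batch of output per rekey for speed, but the
principle is the same.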

> _Anyway, Linus wrote and merged a version of jitter entropy quite recently.
> […] This is a relatively happy outcome in that Linus didn't just break the
> ABI to be completely insecure by default._

I'm genuinely relieved. This would have been the _worst_ way to break
userspace. Still, tiny embedded systems might need to persist (properly
seeded) 32 bytes instead of relying on jitter entropy.

