OpenSSH taking minutes to become available, booting takes half an hour (2018) (daniel-lange.com)
135 points by zdw 33 days ago | 98 comments



It doesn't on OpenBSD. arc4random(3) simply cannot fail, and getentropy(2) does not block. Linux screwed up getrandom. High quality random numbers are available very early in the OpenBSD kernel, and there is no early boot problem for userland.

Linux has an opportunity to learn from OpenBSD.

https://www.openbsd.org/papers/hackfest2014-arc4random/index...


Seems like they store entropy in an on-disk file[1], and re-read that file on startup. This feature is available in Linux as well (see urandom(4) for an example of how to do it manually, though I think most distros bundle this in some manner); although apparently, according to the OP, it doesn't fully work?

Seems like the man page sort of notes the problem in the article:

> Writing to /dev/random or /dev/urandom will update the entropy pool with the data written, but this will not result in a higher entropy count.

But, why not, man page, why not? I feel like if I, as root, write to /dev/random, I am saying that this is acceptable input for seeding the generator, and that it is on me to not, say, seed it with the exact same stuff every boot, or seed it with straight nuls, no?
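
For concreteness, here is a minimal C sketch (with a made-up seed path) of the restore-at-boot step that urandom(4) describes. Per the man page quote above, the write() mixes the bytes into the pool but credits no entropy:

    /* Sketch: restore a saved seed into the kernel pool at boot.
     * Writing to /dev/urandom mixes the data in but does NOT
     * increase the kernel's entropy estimate. The seed path is
     * hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        unsigned char buf[512];
        int seed = open("/var/lib/random-seed", O_RDONLY);
        if (seed < 0) { perror("open seed"); return 1; }
        ssize_t n = read(seed, buf, sizeof buf);
        close(seed);
        if (n <= 0) return 1;

        int ur = open("/dev/urandom", O_WRONLY);
        if (ur < 0) { perror("open /dev/urandom"); return 1; }
        if (write(ur, buf, (size_t)n) != n) perror("write");
        close(ur);
        return 0;
    }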

How does OpenBSD guarantee that this file exists, though? Perhaps making it a system default is wise, but what about brand new VMs/installs? Does the install just take the time to do the first generation of that file?

The OpenBSD folks also appear to trust RDRAND, which would help avoid blocking. But I disagree with that trust decision.

[1]: start reading here: https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00...


> The OpenBSD folks also appear to trust RDRAND, which would help avoid blocking. But I disagree with that trust decision.

It feeds the entropy pool, mixed in with many other sources. OpenBSD also feeds in data from the AMD CCP. The point is that it doesn't matter.

https://man.openbsd.org/ccp

And for anyone running "echo badapples > /dev/random" in a loop, it still doesn't matter.

https://man.openbsd.org/random.4

https://man.openbsd.org/arc4random.9

> How does OpenBSD guarantee that this file exists, though?

If you keep reading, OpenBSD's random subsystem always mixes old with new data from many sources. If the random.seed file is not available (for example, boot media), it is not fatal. There is a warning displayed. But the system rc(8) scripts handle creating the seed file at both boot and shutdown.

https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00...


> If you keep reading, OpenBSD's random subsystem always mixes old with new data from many sources.

Sure. But say we don't trust RDRAND. It's the first boot, so you haven't got a seed file yet. Assuming you haven't got a HW generator at hand, what other sources are there that don't take significant time to collect from? (I.e., I know you can get entropy from network timings, keyboard/mouse if they exist, etc., but that takes time, and a call that refuses to return before there is sufficient data would have to block. If you're only mixing in unready or untrusted sources, you might get lucky and make it hard for an attacker, or, if your other sources are still busy collecting data, you might not; either way it hardly seems principled.)

I'd be willing to write off the first boot, except that I feel like a lot of things do potentially get initialized then; e.g., in a VM in the cloud, I think that's when host keys are generated, and certainly whatever application that VM might be hosting could have further requirements. Not blocking and issuing a warning would mean supplying data before the generator is sufficiently initialized, no?


> Sure. But say we don't trust RDRAND. It's the first boot, so you haven't got a seed file yet.

Why not? The install media should be collecting entropy as it does the install, xoring it with a pool of random data built into it, and writing that to disk as part of the install.

Take a look at https://github.com/openbsd/src/blob/master/distrib/miniroot/..., specifically, feed_random and store_random


>Why not? The install media should be collecting entropy as it does the install, xoring it with a pool of random data built into it, and writing that to disk as part of the install.

and during first reboot you image the disk and flash it into millions of routers, resulting in something like https://devcraft.io/posts/2017/07/21/tp-link-archer-c9-admin...


Thank god OpenBSD mixes in data from hardware RNGs, and the bootloader collects entropy as part of the boot process, effectively turning that exploit vector into a state level attack.


But if you're imaging a drive and also RDRAND fails, OpenBSD is just as vulnerable to attack. See also AMD for RDRAND failing entirely.


Yeah, that's why the bootloader also gathers entropy as it's coming up.

But, at some point nobody can save you from yourself entirely. Get a broken enough system, and every security feature will fail.


The bootloader collects some, but I doubt it reliably collects enough all by itself.


How much do you think is enough, and why do you think the amount collected during boot is insufficient?


Let's say 300 bits.

I'm assuming a bootloader can't collect more in three seconds than an OS can collect in multiple minutes. Is that assumption wrong?

If it's the design of the code that lets it collect entropy faster, then it's that code that needs to be ported, and it doesn't matter if it's in a bootloader or not except to save a second or two.


300 true random bits is a lot. Even something like 128 bits is unbreakable with available hardware.


Okay, that's fair for install media installing to a physical machine such as a desktop.

What about virtual machines / machines booting off an image? We can't put a seed in the image, or it'll get distributed to all downstream consumers.

Also, there's the case posed in response to my first comment, about IOT devices which would ship with some factory-installed image.

(I suppose there are some novel ways one could work around this such as somehow keeping a pool of one-shot images ready w/ just the seed added to them, but I don't feel like this is how real-world systems work. E.g., an AMI in AWS?)


> What about virtual machines / machines booting off an image? We can't put a seed in the image, or it'll get distributed to all downstream consumers.

Mitigations within mitigations.

Having one random seed per VM image that gets installed is better than having nothing at all -- now, an attacker needs to have your install image. Then the hardware RNG and the virtio random drivers help mix into that seed.

Given that presumably you've already booted the image for testing, you've probably already generated the SSH keys that you need to be concerned about -- so you'd probably need to take care to regenerate them securely in any case, from a running system that has started to gather entropy from all the sources it can get its hands on, including the boot loader.

And, ideally part of the imaging process would write some random data to the image. But I agree, most people wouldn't even think of doing that.



A HW RNG is essential then.


/dev/random and /dev/urandom are world-writeable by design. This allows anyone to contribute to randomness via "cat ~/my_secrets > /dev/random". Of course, my_secrets might not be very secret, or controlled by the attacker. So there is no credit given to that.

There is a way for a root process, like systemd, to contribute trusted entropy to the entropy pool, using an ioctl. Of course, then it is systemd that takes on the responsibility if /var/lib/random_seed turns out to be replicated onto a zillion static images on some IoT device and so wasn't actually all that random to begin with.
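
A rough sketch of that ioctl path (RNDADDENTROPY from <linux/random.h>; the seed bytes below are just placeholders, in real use they would come from somewhere genuinely unpredictable):

    /* Sketch: credit entropy via RNDADDENTROPY (needs CAP_SYS_ADMIN).
     * Unlike a plain write to /dev/urandom, this also increases the
     * kernel's entropy estimate, so the caller vouches for the data. */
    #include <fcntl.h>
    #include <linux/random.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        unsigned char seed[32] = {0};  /* placeholder; use real seed data */
        struct rand_pool_info *req = malloc(sizeof *req + sizeof seed);
        if (!req) return 1;
        req->entropy_count = 8 * sizeof seed;  /* claimed entropy, in bits */
        req->buf_size = sizeof seed;
        memcpy(req->buf, seed, sizeof seed);

        int fd = open("/dev/urandom", O_WRONLY);
        if (fd < 0) { perror("open"); free(req); return 1; }
        if (ioctl(fd, RNDADDENTROPY, req) < 0) perror("RNDADDENTROPY");
        close(fd);
        free(req);
        return 0;
    }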


> The OpenBSD folks also appear to trust RDRAND, which would help avoid blocking. But I disagree with that trust decision.

Mixing more data into a random source cannot make it worse. OpenBSD does not use rdrand alone.


This is unlikely, but wouldn't it be conceivable that a malicious hardware "RNG" that lives in the CPU and has access to the internal state of the OS RNG (and knows common OS algorithms) could always supply the worst possible value to counteract whatever is in the pool?


Yeah, but you don't need to use rdrand in that scenario -- the cpu could also just tamper with the pool at will, regardless of the RDRAND instruction.


djb noted back in 2014 that RDRAND is a particularly attractive means of exfiltration [1]. So it is not simply irrelevant.

[1] https://blog.cr.yp.to/20140205-entropy.html


I believe they xor the output of rdrand with that of other randomness sources; meaning, they don't have to trust that its output is actually random.
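
Roughly in that spirit, a toy C sketch (x86-64 only, compile with -mrdrnd; assumes a glibc new enough to provide getrandom()). If the two inputs are independent, the XOR is at least as hard to predict as the stronger one; a backdoored RDRAND that could observe the other input is exactly the scenario raised above:

    /* Toy sketch: XOR RDRAND output with bytes from the kernel pool.
     * With independent inputs, the result is no easier to predict
     * than the more unpredictable of the two. */
    #include <immintrin.h>
    #include <stdio.h>
    #include <sys/random.h>

    int main(void) {
        unsigned long long hw = 0, other = 0;
        if (!_rdrand64_step(&hw))
            fprintf(stderr, "RDRAND unavailable; mixing continues with hw=0\n");
        if (getrandom(&other, sizeof other, 0) != (ssize_t)sizeof other)
            return 1;
        printf("%016llx\n", hw ^ other);
        return 0;
    }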


On Linux there is an ioctl to write data with an ascribed entropy count.


> High quality random numbers are available very early in the OpenBSD kernel, and there is no early boot problem for userland.

OpenBSD also controls its own boot loader, which also feeds in entropy that the kernel then leverages. Perhaps GRUB(2) should do something similar?


you mean you haven't heard of /etc/grub.conf.d/entropy.d/randomseed.d/grub-entropy-randomseed.conf and its good buddy systemd-enable-grub-conf-d-entropy-d-randomseed-d.target ?


SystemD, not linux, had an opportunity to learn from OpenBSD and actively refused.


systemd didn't even learn from pseudo-systemv in this case - from tfa:

"Basically as of now the entropy file saved as /var/lib/systemd/random-seed will not - drumroll - add entropy to the random pool when played back during boot. Actually it will. It will just not be accounted for. So Linux doesn't know. And continues blocking getrandom(). This is obviously different from SysVinit times2 when /var/lib/urandom/random-seed (that you still have lying around on updated systems) made sure the system carried enough entropy over reboot to continue working right after enough of the system was booted."


Linux handles random in weird ways. Once I migrated from Manjaro to Arch just to learn a bit more about Linux in general. Boot was extremely slow on Arch, but the same setup was ok on Manjaro.

The thing was, you needed the haveged daemon to generate entropy, something that should be handled by the kernel itself.


arc4random was also submitted to POSIX[0] but they insisted on wrecking the design.

[0] http://austingroupbugs.net/view.php?id=859


Is it cryptographically possible to not require randomness for an SSH server past initial key creation? If so, it'd be worth doing; quality entropy is a hard ask for embedded/virtual systems.


You need randomness for ephemeral keys, both asymmetric and symmetric.

Quality entropy isn't a hard ask, at least not for anything typically running OpenSSH or other server software. Intel has RDRAND, and even where AMD's RDRAND is broken their PSP coprocessors provide an entropy function. Similarly, NICs and other controllers also often come with RNGs. RNGs abound on modern embedded systems, actually; it's just that nobody has the full-time job of plugging them into the kernel's PRNG pool. It's not a full-time job for any OpenBSD developer, either, but they do seem to do a better job of this than Linux does; often the only supported feature of a miscellaneous system component is its RNG.

It's the APIs that are broken. getrandom shouldn't block, period. A system only needs 16-32 bytes of pure hardware randomness for strong security. That's it! You either have it shortly upon boot, or you don't. If you don't, you're screwed anyhow, so why block? If you have 32 bytes of good entropy, all the entropy accounting mumbo-jumbo is pointless.

I do take issue with the author's complaint that systemd shouldn't have its own user-space PRNG. BSD systems have arc4random() as part of their libc, which is seeded from the kernel pool. This is very convenient; developers shouldn't have to think twice about calling into a PRNG for a 32-bit number, but if acquiring that requires a syscall they do think twice and often screw things up. Not to mention that Linux getrandom's default blocking semantics are broken by design.[1] Until glibc, musl libc, and other Linux runtimes wise up and add arc4random, it's hard to blame projects like systemd for including their own PRNG.

[1] I realize that getrandom has an option to not block, but its only function is to cast suspicion on itself when in fact the only things that deserve suspicion are the entropy guesstimators.
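
For reference, a small sketch of the semantics being argued about (assumes glibc 2.25+ for the wrapper): getrandom() blocks until the kernel pool has been initialized once, and GRND_NONBLOCK turns that wait into an EAGAIN error instead:

    /* Sketch: by default getrandom() blocks until the pool is
     * initialized; GRND_NONBLOCK returns -1/EAGAIN instead of waiting. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char key[32];
        ssize_t n = getrandom(key, sizeof key, GRND_NONBLOCK);
        if (n < 0 && errno == EAGAIN) {
            fprintf(stderr, "pool not initialized yet; a plain call would block\n");
            return 1;
        }
        if (n != (ssize_t)sizeof key) { perror("getrandom"); return 1; }
        printf("got %zd bytes\n", n);
        return 0;
    }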


> Quality entropy isn't a hard ask, at least not for anything typically running OpenSSH or other server software.

128 bits is enough to feed into a stream cipher that will generate a lot of random bits:

> The original version of this random number generator used the RC4 (also known as ARC4) algorithm. In OpenBSD 5.5 it was replaced with the ChaCha20 cipher, and it may be replaced again in the future as cryptographic techniques advance.

* https://man.openbsd.org/arc4random.3

* https://security.stackexchange.com/questions/85601/

Re-feed/-stir every so often so that past entropy state can't be used to compromise things in the future.
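
As a rough illustration of that construction (not OpenBSD's actual code; a sketch assuming OpenSSL 1.1.0+ and using an all-zero key/nonce purely as placeholders), the generator is essentially "run ChaCha20 over zeros and hand out the keystream", re-keying periodically:

    /* Sketch: expand a 256-bit seed into a pseudo-random keystream
     * with ChaCha20, the same basic idea arc4random(3) describes.
     * Link with -lcrypto. A real generator seeds the key from the
     * kernel and re-keys ("stirs") periodically. */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(void) {
        unsigned char key[32] = {0};              /* placeholder seed */
        unsigned char iv[16] = {0};               /* counter || nonce */
        unsigned char zeros[64] = {0}, out[64];
        int outlen = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_chacha20(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &outlen, zeros, (int)sizeof zeros);
        EVP_CIPHER_CTX_free(ctx);

        for (int i = 0; i < outlen; i++) printf("%02x", out[i]);
        printf("\n");
        return 0;
    }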


> It's not a full-time job for any OpenBSD developer, either, but they do seem to do a better job of this than on Linux;

don't blame the linux kernel, we're just waiting on the dbus interface, gnome3 applet, and systemd binary + unit files before we integrate /etc/grub.conf.d/entropy.d/randomseed.d/grub-entropy-randomseed.conf and its good buddy systemd-enable-grub-conf-d-entropy-d-randomseed-d.target into the Loobuntitis GNU/Linux Kubernetes CloudPaaSOS 2019.07.18 'doofy nerdguy' LTS Server release.

OpenBSD guys are clearly antiquated and using old modes of thinking with such a simplistic and neanderthal like design of having a system wide library installed in a single location without a dynamic json configurable runtime rest api endpoint.


> A system only needs 16-32 bytes of pure hardware randomness for strong security. That's it!

Correct.

> You either have it shortly upon boot, or you don't.

Not correct. On virtualized systems, with minimal interrupts, there is frequently not enough entropy available. This comment seems to reflect a poor understanding of the getrandom(2) system call and its history; part of the reason for implementing getrandom was to provide this new behavior of blocking only until enough entropy has been collected for a secure CSPRNG initialization. Linux needs this "entropy accounting mumbo jumbo" because it doesn't have a standard mechanism for persisting random seeds across boots; OpenBSD doesn't need it because the bootloader and kernel are tightly integrated, so they can easily implement this feature.


virtio-rng has been around since 2013. If a hypervisor doesn't support passing through entropy, then it's broken. Period.

Entropy accounting can't fix the underlying problems here; all it does is obscure and confuse. Hypothetically, and in a very technical sense, it can be useful and even necessary. But in practice it simply has no place outside of the actual hardware-based entropy generating devices. If you can't quickly seed yourself with 32[1] bytes of cryptographically strong entropy, then you're screwed, period. Blocking doesn't improve security; it just induces people and developers to implement awkward workarounds with the net effect of drastically reducing security.

When Linux added the getrandom syscall they should have dropped support for blocking. It was patterned after OpenBSD getentropy, which doesn't block; neither did Linux' long deprecated sysctl() random UUID mechanism that many programs once relied upon (like Tor). But they, and Ts'o in particular, seem unable to resist the siren call of entropy guesstimation.

[1] Even 16 bytes is enough to seed the system pool for an indefinite period, at least relative to a system without a strong hardware RNG. And that's the point. There are reasons for why a component might need ongoing sources of strong entropy, but if a system can't even provide 16-32 bytes at boot then those arguments are purely hypothetical because there clearly aren't sources of strong entropy available, anyhow. But if those sources are available, then it's ridiculous to think they can be "depleted" as a practical matter. If the CSPRNG pooling functions are broken, then all modern cryptography is broken, so you gain nothing with the convoluted semantics of Linux's traditional /dev/random machinations.


Is it possible to have some scheme where the client can provide all the initial randomness for the connection?

For example, the server could have an RSA private key, and the clients have the corresponding public key. A client could generate a full RSA block of random data, and encrypt that with the public key, and send the result to the server. The server could recover the random data and use that to initialize a CSPRNG.

This CSPRNG could be user mode code running in the ssh system, and only used for randomness needed for that particular connection, so that if a client supplies poor random data it only weakens that client's connection.


"Quality entropy isn't a hard ask, at least not for anything typically running OpenSSH or other server software. Intel has RDRAND, and even where AMD's RDRAND is broken their PSP coprocessors provide an entropy function."

I'm not a fan of trusting closed hardware for randomness.


And? The fact of the matter is, there's no workable alternative. You either have a good source of entropy, or you don't. If you don't, people aren't going to stop using things like OpenSSH. If a box is blocked on boot that's universally considered broken and everybody will try to work around it.

Fortunately in practice the situation isn't so dire. There are usually multiple hardware RNGs on a system. They may all be untrusted, but I'll trust them together before I'll ever trust the strength (and continued correctness) of guesstimators. Ironically, entropy guesstimators are predicated on the very notion that the hardware is benign. Malicious hardware could implement subtle timing patterns in interrupts, much like what it might do for an actual RNG (e.g. use an AES encryption function to "randomize" its visible behavior). And in any event, you can still mix various system timing events into your pool without pretending you can quantitatively and reliably know their entropic contribution.


For virtual systems it shouldn't be; Virtio has an RNG provider that's able to use host entropy:

https://fedoraproject.org/wiki/Features/Virtio_RNG
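
For example (exact flags vary by QEMU version and management layer, so treat this as a sketch), wiring the host's pool into a guest with plain QEMU looks roughly like:

    qemu-system-x86_64 ... \
        -object rng-random,id=rng0,filename=/dev/urandom \
        -device virtio-rng-pci,rng=rng0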

For embedded systems, many have inbuilt RNGs, have a look at your SoC docs.


Yeah, I don't understand what's so cryptographically special about booting. If the system hadn't been turned off and on, it wouldn't have had a problem, right? Sounds like the real problem is some part of the system can't be bothered to persist its state. Either that, or the problem should also come up during normal usage, in which case there's a security hole somewhere.


systemd isn't using the right interface for adding the entropy in the seed file, so the kernel uses it but counts it as zero bits of entropy

edit: but i guess that's on purpose to handle disk imaging


Yeah I was just reading that. Why was that not an obvious bug to be fixed immediately? I haven't read much of the GitHub issues but it seems so weird to recommend trusting the CPU RNG instead of just pushing out a quick fix...


because people are poorly educated and copy their system disks without changing the random seed. then, they generate similar long-term keys, e.g. RSA keys with common factors. digitalocean was well known to have identical SSH host keys for quite some time (one of many reasons I wouldn't trust them with any sort of remotely serious hosting job); I would be shocked if they knew to reset the random seed.


All this is literally just concern for the first ~minute or two of the very first boot after disk cloning? It should be irrelevant for subsequent boots once they reseed on the next boot... hardly seems like such a huge deal.


Except that SSH keys are often generated on first boot, so there's the risk that they're all the same--unless you put some code in your deployment recipes to regenerate them.


Isn't that the user's and the SSH devs' problem? SSH can just wait for new randomness if it sees that as a risk when generating keys. Key generation is already slow anyway. Why is systemd trying so hard to "fix" a problem that might by some chance come up in SSH during the first minute of the first boot after an improper disk clone? Especially with the currently proposed solutions, it seems like such a crazy way to attack somebody else's problem...


> Isn't that the user's and the SSH devs' problem?

Per some of the comments in the Debian bug(s): if you tell users and devs simply "you deal with it", you will get a multitude of ad hoc, half-baked solutions that may or may not be secure. You will have dozens of people trying to re-invent the wheel.

The whole point of things like /dev/urandom and getentropy() is that the problem is solved once properly, and then you allow everyone to benefit. By having the above solutions not-work, we are basically going back to the days of where they did not exist--in which case what was the point of creating them?

The BSDs seem to not have a problem with this, so I have no idea why the Linux folks can't seem to get their act together.

> Key generation is already slow anyway.

Define "slow". Key generation was slow in the early 1990s when I first started using PGP (nee GPG) back in the day. It is generally not-slow nowadays IMHO--as in, it takes less than ten minutes to find p and q for RSA 1024 (nevermind 2048+).



> because people are poorly educated and copy their system disks without changing the random seed.

To a first approximation:

* ( cat /var/lib/randomseed; ifconfig -a; date -u ) | sha256sum > /dev/urandom

will get any decent stream-cipher-based PRNG going.

Each machine will have a unique MAC address, so that's 48 bits right off the bat, even if randomseed is identical and every machine is booted at the exact same second.


MACs are very much not random. Especially on a series of machines for a given provider, you can likely predict one. Date of the bootup is also not random if it's got only seconds resolution and you can figure out when a machine comes online by constantly trying to connect. In practice you added much fewer than 48 bits.


I strongly suspect that this information is already included in the Linux kernel RNG, but it is not given much weight in the entropy calculation, on the basis that virtualized systems frequently have duplicated MAC addresses (as is allowed on a non-public system), and sometimes also have low initial timestamp resolution.


If a system will be communicating with other systems, it will need a unique Layer 2 address. This can be leveraged for some entropy (certainly you don't have to credit the full 48 bits, but are you really saying that not even 1-2 bits of credit can be applied?).

If the system is not communicating with other systems... what attack tree are you actually worried about if it is inaccessible?

I would also be curious to know which virtualization environments generate duplicate MAC addresses over (say) dozens of systems?


MAC addresses are not used in a purely routed/virtualized network.

qemu uses the same MAC address by default.


date -u just returns 1-second resolution? It's trivial to go through all the possibilities...


$(date -u | wc --bytes) says there are 29 bytes' worth of output, which is 232 bits.

Are you telling me there is not even, say, 8 bits' worth of entropy credit in there? (Especially when hashed with other data?)

Remember: the context of this discussion is VMs (and not embedded systems without a battery-backed RTC).


> Sounds like the real problem is some part of the system can't be bothered to persist its state.

Persistent state is hard. For example, some systems run with a read-only filesystem.


I thought about that, but then, if you setup that kind of a system, isn't it your responsibility to worry about how to generate and persist cryptographic state? It's such a niche and advanced use-case by someone who should really know better considering the setup... why should all the other >95% of users pay such a price for it?

Edit: Replaced "host keys" with "cryptographic state".


The issue is not about host keys, which are generated once. It is about randomness that sshd needs for nonces and session keys. It needs fresh randomness during each boot.


Okay, I substituted "cryptographic state" for "host keys". It doesn't change my argument.


Very hard ask for small embedded devices. Maybe you can get part of the way there mixing the low order bit(s) of a number of somewhat noisy floating ADC inputs? Against a half hearted nearby attacker I'm not sure I'd bank on it though.


No, sshd needs to generate nonces and session keys.


The real question is why can’t sshd do this after forking?

Read and parse the config, fork, and lazily build an entropy pool.

Let the ssh connections or key generations hang if there isn’t enough entropy yet, but don’t get in the way of booting.

This is two bugs: one in systemd, and one in OpenSSH.


This affects Debian 10 official cloud images on various clouds, as the image doesn't handle virtio-rng out of the box.


It's not just their cloud images; as far as I know Debian excludes all virtio drivers (net/disk/rng) from all install media.

It is very frustrating.


I regularly do net installs of debian with virtio net and disk. I think this used to be true but it isn't anymore.

Edit: looks like this is supported at least back to Debian Lenny, released in 2009.


Is this because there is 0 entropy available at boot and therefore even /dev/urandom is unavailable? Or is this because legacy tools are still relying on the concept of being able to "drain" the entropy pool? I'm not sure which the "getrandom()" call is related to.


The problem with /dev/urandom is that it's always available, even before any entropy has been added. Fixing /dev/urandom to block until enough entropy has been gathered once would cause breakage like in the article. That is one of the reasons why getrandom() was added: existing applications can keep using the old and broken behavior, but applications that care about the RNG being properly initialized can wait for it. So software that does crypto switched to it, and then things broke anyway.


> Fixing /dev/urandom to block until enough entropy is available once causes breakage like in the article.

... not to mention that it is the entire point of /dev/urandom. If you want something to block for entropy, you use /dev/random. If you are using /dev/urandom, you are saying "I want something 'randomly random' and I'm okay if I can't get something random enough."


getrandom blocks until 128 bits of entropy have been collected since boot. It doesn't have the concept of "draining" the pool after the pool has been initialised.


I'm so happy I've stayed with Gentoo and never had to bother with this systemd thing that everyone is talking about.


systemd didn't change much related to this. The seed file never got credited, at least not in Debian. So either you have the same problem on Gentoo, or Gentoo has other problems.


I’ve had docker take 5 minutes to start on a cloud server due to entropy starvation. Iirc it’s somewhat kernel version dependent between the later 4.x kernels and the older xenial default kernel.


TIL Linux launches a built-in denial-of-service attack on itself every time it reboots!


1. /etc/ssh/sshd_config: UseDNS no

2. If haveged isn't perfect enough, set up a hw TRNG (e.g., EntropyKey, BitBabbler, InfiniteNoise) and EGD or EntropyBroker.

https://everipedia.org/wiki/lang_en/Comparison_of_hardware_r...

http://egd.sourceforge.net

https://www.vanheusden.com/entropybroker

3. It may be something else.


> UseDNS no

Is not sufficient by itself to ensure that no part of the ssh connection and authentication uses DNS.

See https://unix.stackexchange.com/questions/56941/what-is-the-p... and https://serverfault.com/questions/576293/sshd-tries-reverse-...


And "UseDNS no" is the default anyway.


Most fun bug I've ever fixed is when we had builds time out while running unit tests, but it would never happen on our dev machines. Unless we let it run while grabbing a coffee. Or if we just sat there watching the screen without touching the mouse. But as soon as we would touch the mouse it would resume.

...tl;dr quick change to Java property so that SecureRandom uses /dev/urandom instead of /dev/random and problem solved.
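
(For anyone hitting the same thing: the usual knob is the java.security.egd system property, e.g. "-Djava.security.egd=file:/dev/./urandom" on the command line, or the securerandom.source entry in the JRE's java.security file; the odd /dev/./urandom spelling works around older JDKs special-casing the plain /dev/urandom path.)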


Have you tried wiggling your mouse? It worked wonders for me, back when I was on Windows 95!


Maybe Amazon can hook up a Mechanical Turk service to AWS so real live people can generate entropy by wiggling mice over the internet.


Hope the connection doesn't get MITM'd and have the entropy replaced with bias :)


.. just had a not so pleasant flashback to the thoughts occurring while watching a web server log for a porn site when I worked in web hosting..


Had similar years ago on a box and found out the disk(s) were 99.99% full - took forever to whittle it down.


No monitoring? :(


Needs at least a (2018) (the article date) in the headline. Although it reports an issue introduced to Linux in (2013) which was "discovered" in (2017).


Read it. There are 2019 updates in it.

And of course it's systemd, what else. (not sarcasm)


The systemd bug thread was started in 2016, closed as not a bug still in 2016, and has been referenced by a lot of other bugs claiming wontfix because the problem is this one "not a bug".

There are posts as recently as this year.


> closed as not a bug

I can't even count these any more: systemd closed it as "not a bug". I have angry words in my head.


Very disappointing to hear that Debian and some other distros enabled CONFIG_RANDOM_TRUST_CPU.

I'd rather not be relying on Intel hardware to do something so important correctly.


And yet you trust it to run all your other instructions? This is kind of a mad threat model. The processor can see everything, change anything, and do so at any time.


That's fair I suppose, the difference being that instruction backdoors have a lot more ways to go wrong and become extremely visible, whereas RDRAND's output is claimed to be unpredictable.

In the end it's a defense in depth approach - I simply think the original and default Linux kernel policy of demanding sufficient entropy from other means is preferable from a security perspective.

Theodore Ts'o has some good posts on this on the kernel mailing lists. Due to the way these sources get mixed, turning this option off can only improve security.


Why do you trust Intel less then any other hardware RNG you might have? Do you trust the one in the TPM? Or do you just not trust hardware to collect entropy?


You'd rather rely on... systemd?


An open source software stack vs a Frankenstein Minix running with mysterious powers over your computer, do you even have to ask?


The management engine doesn't have anything to do with RDRAND. What's your point here?


Can certainly be used for a backdoor.


Does distrusting the processor by disabling CONFIG_RANDOM_TRUST_CPU somehow reduce that risk?


The kernel does not provide the cpu’s random number generator data directly to downstream users. It is mixed into the entropy pool, same as curl’ing news.google.com into /dev/random.

Don’t worry about it.




