
The hash of known data is known data. So such an attacker could flood the system with zero-entropy data and predict, with higher-than-baseline probability, the output of the "random" generators.



You can't reduce the entropy of the pool just by adding low-entropy data to it. If I hand you 256 bits of entropy, you can hash "a" into it as many times as you like - it won't reduce the entropy.
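
A minimal sketch of that kind of mixing, using OpenSSL's SHA-256 for illustration (not the kernel's actual construction - recent kernels use BLAKE2s - but the principle is the same): the pool becomes H(old_pool || input), so attacker-known input transforms the state without shrinking what the attacker doesn't know.

    /* Sketch only: pool = H(pool || input), via OpenSSL SHA-256. */
    #include <openssl/sha.h>
    #include <stddef.h>

    #define POOL_LEN SHA256_DIGEST_LENGTH   /* 32 bytes = 256 bits */

    static unsigned char pool[POOL_LEN];    /* assume it already holds 256 bits of entropy */

    void mix_into_pool(const unsigned char *data, size_t len)
    {
        SHA256_CTX ctx;

        SHA256_Init(&ctx);
        SHA256_Update(&ctx, pool, POOL_LEN);  /* previous, unknown pool state */
        SHA256_Update(&ctx, data, len);       /* new input, possibly attacker-known */
        SHA256_Final(pool, &ctx);             /* pool = H(pool || input) */
    }

Hashing "a" in a million times just applies a million known transformations to an unknown 256-bit state; recovering it is still a 2^256 search.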

The only thing I can think of here is that the attacker would trick the system into thinking it had hit its 256-bit entropy minimum before it actually had.

At which point... I don't know what they gain, but sure, it's at least a dick move.


MENLO PARK, March 30 (Associated Press) - In what was widely characterized as "at least a dick move", malicious hackers of unknown origin were discovered yesterday to have been spamming the entropy pool of thousands of McDonald's self-service kiosks with the phrase "i can haz jalapeno cheezeburger" during system boot.

While the direct ramifications of this action remain unclear, informal discussions are already being held in various constellations in Congress to probe whether the FCC should be given an extended mandate to regulate randomness, according to several sources close to the food-sales tech-stack regulation sector who requested to remain anonymous.

In response to the swift call for regulatory action, several privately held businesses that supply PoS devices for tea shops have pledged to promptly deploy OTA updates that replace all sources of randomness in their systems with an infinite repetition of the second amendment.

Said Harry Parks, 37, of International Teaselling Machines Inc. - "In this country it is the given right of each man to select the randomness that he wants, so be damned, and I will happily sacrifice the security of my customers to make a stand for the unique freedom and liberty we have in this country."


No, that's not the problem at all. Nobody is predicting anything. The issue as I understand it is that it might have been possible in some pathologically bad but real configurations for malicious userland code to trick the kernel into believing the LRNG had been seeded when it hadn't yet been.


There are a few closely intertwined pitfalls, each subtly different:

1) "premature first" wrt non-local attacker: this is the problem you identified - the RNG initializes when there's actually only 1 bit of entropy, and then SSH generates keys that some researchers bruteforce years later.

2) "premature first" wrt local attacker: the RNG has no entropy. Something legit feeds it 32 bits of entropy, and the kernel mixes that entropy directly into the key that's generating the /dev/urandom stream. Local unpriv'd attacker reading /dev/urandom (or some remote attacker who has access to overly large nonces or something) then bruteforces those 32 bits of entropy, compromising it, since it's only 32 bits.

3) "premature next": the RNG has some entropy. That entropy gets compromised somehow. Then the "premature first" wrt local attacker scenario happens. Maybe you think this is no big deal, since a compromise of the RNG state probably indicates something worse. But compromised seed files do happen, and in general, a "nice" property to have is that the RNG eventually recovers after compromise -- "post compromise security".

Problem set A) A malicious entropy source can currently cause any of these, due to the lack of a Fortuna-like scheduler. Since we just count "bits" linearly, and any credit-worthy source bumps that same counter, a malicious source can bump it by 255 bits and a legit source by 1 bit, and then an attacker brute-forces that 1 bit. (A sketch of this crediting logic follows these two problem sets.)

Problem set B) Making writes into /dev/[u]random automatically credit entropy would cause the same issues: it's already common for people to write non-entropic stuff into there (e.g. the Android cmdline), and others manually credit afterwards. Mixing into the /dev/urandom key without crediting would also cause a "premature next" issue, since some things trickle in 32 bits at a time -- and other things that trickle in more bits at a time might still only have a few entropic bits among them. Yada yada yada, it would cause some combination of the problems outlined above.
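
Here's the counter sketch promised above -- illustrative names, not the kernel's code -- showing why linear crediting enables problem set A: every source, honest or malicious, bumps the same counter toward the same threshold.

    /* Sketch of linear entropy crediting. A malicious source can
     * credit 255 attacker-known "bits" and an honest source 1 real
     * bit; the counter hits 256 and the RNG reports ready, yet only
     * 1 bit actually needs brute-forcing. */
    #include <stdbool.h>

    #define ENTROPY_THRESHOLD_BITS 256

    static unsigned int entropy_count;  /* one shared counter for all sources */
    static bool crng_ready;

    void credit_entropy_bits(unsigned int bits)
    {
        entropy_count += bits;
        if (entropy_count >= ENTROPY_THRESHOLD_BITS)
            crng_ready = true;  /* considered seeded from here on */
    }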

In spite of problem set A, the kernel currently does do a few things to protect against these issues. First, it avoids problem set B by simply not implementing that behavior. More generally, /dev/urandom extracts from the entropy pool after every "256 bits" of credits and every 5 minutes. And, in order to guard against a "premature first", it relaxes that 5 minutes during early boot -- 5 seconds, then 10 seconds, then 20 seconds, then 40 seconds, doubling until it reaches 5 minutes -- so at least a potential "premature first" gets mitigated somewhat quickly.
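
As a sketch of that early-boot schedule (constants per the description above; code illustrative, not the kernel's):

    /* Reseed back-off: start at 5 seconds, double each time,
     * settle at the steady-state 5 minutes. */
    static unsigned int reseed_interval_sec = 5;

    unsigned int next_reseed_interval(void)
    {
        unsigned int interval = reseed_interval_sec;  /* 5, 10, 20, 40, ... */

        reseed_interval_sec *= 2;
        if (reseed_interval_sec > 300)
            reseed_interval_sec = 300;  /* cap at 5 minutes */
        return interval;
    }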

Problem set A still exists, however. Whether anybody cares, and whether the code-complexity cost is justified by the actual risk, remains to be seen -- it should make for some interesting research.


Yeah. I see (1) and (2) as instances of the same basic problem, and (3) as mostly a non-problem (like, you do the best you can to get compromise recovery from a CSPRNG, you don't do nothing, but you don't hold up progress on it).

But from my read of the backstory here, the problem is userland regressions on (1) and (2), and I buy that you simply can't have those.


> (3) as mostly a non-problem (like, you do the best you can to get compromise recovery from a CSPRNG, you don't do nothing, but you don't hold up progress on it).

Mitigating that attack is the main selling point of Fortuna, which makes it considerably harder. I think that's the primary thing we would get from a Fortuna-like scheduler that we don't -- and, given the present design, can't -- currently have.
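
For reference, a minimal sketch of the textbook Fortuna scheduling (Ferguson and Schneier's scheme, not anything in the kernel today), with SHA-256 standing in for the pool hash: incoming events round-robin across 32 pools, and reseed number k drains pool i only if 2^i divides k, so higher pools accumulate honest entropy for exponentially longer before being used -- which is exactly what starves out a malicious source playing the games above.

    /* Sketch of Fortuna's pool scheduler, using OpenSSL SHA-256. */
    #include <openssl/sha.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NUM_POOLS 32

    static SHA256_CTX pools[NUM_POOLS];
    static unsigned int next_pool;
    static uint64_t reseed_count;

    void fortuna_init(void)
    {
        for (int i = 0; i < NUM_POOLS; i++)
            SHA256_Init(&pools[i]);
    }

    /* Events from all sources are spread round-robin over the pools. */
    void fortuna_add_event(const unsigned char *data, size_t len)
    {
        SHA256_Update(&pools[next_pool], data, len);
        next_pool = (next_pool + 1) % NUM_POOLS;
    }

    /* Reseed k drains pool i iff 2^i divides k, so pool i is only
     * touched every 2^i reseeds and has that much longer to gather
     * more honest entropy than an attacker can brute-force. */
    void fortuna_reseed(unsigned char key_material[SHA256_DIGEST_LENGTH])
    {
        SHA256_CTX out;

        reseed_count++;
        SHA256_Init(&out);
        for (int i = 0; i < NUM_POOLS; i++) {
            unsigned char digest[SHA256_DIGEST_LENGTH];

            if (reseed_count % (1ULL << i))
                break;
            SHA256_Final(digest, &pools[i]);   /* drain pool i... */
            SHA256_Init(&pools[i]);            /* ...and reset it */
            SHA256_Update(&out, digest, sizeof(digest));
        }
        SHA256_Final(key_material, &out);      /* new key material */
    }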

> But from my read of the backstory here, the problem is userland regressions on (1) and (2), and I buy that you simply can't have those.

Yeah, so these interact with the current story in two totally opposite directions.

The original thing -- unifying /dev/urandom+/dev/random -- was desirable because it'd prevent (1)-like issues. People who use /dev/urandom at early boot instead of getrandom(0) wouldn't get into trouble.

Then, in investigating why we had to revert that, I noticed that the way non-systemd distros seed the RNG is buggy/vulnerable/useless, but fixing it in the kernel would lead to issue (2) by introducing problem set B. So instead I'm fixing userspaces by submitting https://git.zx2c4.com/seedrng/tree/seedrng.c to various distros and alternative init systems.
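
The essential pattern in that file boils down to something like the following heavily condensed sketch (the real seedrng.c handles hashing, creditability, locking, and errors far more carefully; the path here is hypothetical): load last boot's seed, feed it to the kernel via the RNDADDENTROPY ioctl, and immediately write a fresh seed for next boot so no seed is ever used twice.

    /* Condensed sketch of the seed-file dance; not the real seedrng.c. */
    #include <fcntl.h>
    #include <linux/random.h>   /* RNDADDENTROPY */
    #include <sys/ioctl.h>
    #include <sys/random.h>     /* getrandom() */
    #include <unistd.h>

    #define SEED_LEN 32
    #define SEED_FILE "/var/lib/seed"   /* hypothetical location */

    int seed_rng(void)
    {
        /* Mirrors struct rand_pool_info, with an inline buffer. */
        struct {
            int entropy_count;
            int buf_size;
            unsigned char buf[SEED_LEN];
        } req = { .entropy_count = SEED_LEN * 8, .buf_size = SEED_LEN };
        int seed_fd, random_fd;

        /* 1. Load last boot's seed. */
        seed_fd = open(SEED_FILE, O_RDWR);
        if (seed_fd < 0 || read(seed_fd, req.buf, SEED_LEN) != SEED_LEN)
            return -1;

        /* 2. Mix it in and credit it -- only sound because step 3
         *    guarantees this seed is never seen again. */
        random_fd = open("/dev/urandom", O_RDWR);
        if (random_fd < 0 || ioctl(random_fd, RNDADDENTROPY, &req) < 0)
            return -1;

        /* 3. Immediately replace the file with a fresh seed for the
         *    next boot, before anything can reuse or clone this one. */
        if (getrandom(req.buf, SEED_LEN, 0) != SEED_LEN ||
            pwrite(seed_fd, req.buf, SEED_LEN, 0) != SEED_LEN)
            return -1;
        fsync(seed_fd);
        close(seed_fd);
        close(random_fd);
        return 0;
    }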

By the way, running around to every distro and userspace and trying to cram that code in really is not a fun time. Some userspaces are easygoing, while others have "quirks", and, while there has been reasonably quick progress so far, it's quite tedious. Working on the Linux kernel has its quirks too, of course, but it's just one project, versus a handful of odd userspaces.



