
There are a number of things that the blog post gets wrong. First of all, it was not Jason A. Donenfeld, the author of WireGuard, who added the ChaCha20-based cryptographic random number generator to Linux. It was me, as the maintainer of Linux's random number generator. The specific git commit in question can be found at [1].

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

Secondly, I think a lot of people have forgotten what things were like in the early 90's. Back then encryption software was still export controlled, so we couldn't put DES (AES wasn't released until 2001) into the kernel without triggering all sorts of very onerous US government restrictions (and Europe was guilty of this too, thanks to the Wassenaar Arrangement); you couldn't just put things up on an FTP site. Also, back then, there was much less public understanding of cryptography; the NSA definitely knew a lot more about cryptanalysis than the public world, and while it was known by 1996 that MD5 had Problems, there wasn't huge trust that the NSA hadn't put a back door into SHA-1, which they designed with zero explanation of its design principles.

This is why the original PGP implementation, as well as the Linux kernel random number generator, was very much focused on entropy estimation. We knew that it was potentially problematic, but then again, so was relying on cryptographic algorithms that were potentially suspect. There was a good reason why, in the 90's, it was generally considered a very good idea to be algorithm agile; there simply wasn't a lot of trust in crypto design, and people wanted to be able to swap out cryptographic algorithms if some algorithm (e.g., MD4, and later MD5) was found to be insecure. So the snide comments about people not trusting algorithms seem to miss the point that even amongst the experts in the field --- for example, at the Security Area Directorate at the IETF, of which I was a member during that time --- there was a lot of thinking about how we could deploy upgrades if it were found that some crypto algorithm had a fatal weakness, and how we would swap out crypto suites with minimal interoperability issues.

Unfortunately, being able to negotiate crypto suites leads to downgrade attacks, such as we've seen with TLS --- but what people forget is that when the original SSL/TLS algorithm suites were designed, people thought they were good! It was only later that some crypto suites were found to be insecure, leading to the downgrade attack issues. But it also shows that people were right to be skeptical about crypto algorithms in that era.

Since then, we've learned a lot more about cryptographic algorithm design, and so people are a lot more confident that algorithms can be relied upon to be secure --- or, at least, other issues are much more likely to be the weak link. That's why WireGuard is designed without any ability to negotiate algorithms, and as a result, it is much simpler than IPsec. And it's probably the right choice for 2019. (At least, until Quantum Computing wipes out most of our existing crypto algorithms; but that's a rant for another day.)

As far as monitoring entropy levels in Linux: in general, the primary reason why we need it is because even if we are willing to invest a lot of faith into the ChaCha20 CRNG, we still need to provide a secure random number seed from somewhere. And that can be tricky. If you fully trust a hardware random number generator, then sure, no worries. Or if you are using a cloud provider, so you have to trust the hypervisor anyway, then using virtio-rng to get randomness from the cloud provider is fine. (If your cloud provider wants to screw you, they can just reach into the guest memory or intercept network or disk traffic at boot time, so if you don't trust them not to backdoor virtio-rng, you shouldn't be using that cloud provider at all.)

As far as whether or not to trust RDRAND: the blog post seems to assume that it's absurd to think the NSA could possibly have backdoored the CPU instruction. On the other hand, there are those who remember DUAL-EC-DRBG, where most people do now believe the NSA put in a backdoor. And the Snowden revelations did show that NSA teams were putting backdoors into Cisco routers by intercepting them between when they were shipped and when they were delivered. So given that you can't audit the Intel CPU's RDRAND, and Intel is a US company, it's not that insane to have some qualms about RDRAND. After all, if you were using a chip provided by a Chinese company (where the owner of said company might also have been a high-ranking general in the PLA), or a CPU provided by a Russian company controlled by a Russian oligarch who is good friends with Putin and who also had a background in the KGB --- is it really insane to be worried about those CPU's? Let's not even talk about concerns over China and 5G telephony equipment. Why is it then completely absurd for some people to be concerned about the complete inauditability of RDRAND, and the fact that no functional or statistical test can determine whether or not there is a backdoor?

Of course, if you really don't trust a CPU, you should simply not use it. But creating a CPU from scratch, using only 74XX TTL chips, really isn't a practical solution. (When I was an undergraduate at MIT, we did it as part of an intro CS class; but MIT doesn't make its CS students do that any more.) So the best we can do is try to spread out the entropy sources; that way, even if source 1 might be compromised, if it is being mixed with source 2 and source 3, hopefully at least one of them is secure. (Or maybe source 1 is backdoored by the NSA, and source 2 is backdoored by the Chinese MSS, but if we hash it all together, hopefully the result will only be vulnerable if the NSA and MSS work together, which hopefully is highly improbable.)
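
To make the mixing idea concrete, here's a minimal userspace sketch. (The kernel's actual mixing code in drivers/char/random.c uses its own construction; this uses OpenSSL's SHA-256 purely for illustration.)

  #include <openssl/sha.h>

  /* Hash several seed sources into one 32-byte seed. The output is
     unpredictable as long as at least ONE input was unpredictable
     and independent of the others. */
  void mix_sources(const unsigned char *src[], const size_t len[], int n,
                   unsigned char out[SHA256_DIGEST_LENGTH]) {
      SHA256_CTX ctx;
      SHA256_Init(&ctx);
      for (int i = 0; i < n; i++)
          SHA256_Update(&ctx, src[i], len[i]);
      SHA256_Final(out, &ctx);
  }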

The bottom line is that it's complicated. Of course I agree that we should use a CRNG for most purposes. But we still have to figure out good ways of seeding a CRNG. And in case the kernel memory gets compromised and read by an attacker, or if there is a theoretical vulnerability in the CRNG, it's good practice to periodically reseed the CRNG. And so that means you still need to have an entropy pool and some way of measuring how much entropy you think you have accumulated, and how much has been possibly revealed ("used") for reseeding purposes. In a perfect world, of course, assuming that we had perfectly trustworthy hardware to get an initial seed, and in a world where we are 100% sure that algorithms are bug free(tm), a lot of this isn't necessary. But in real-world engineering, we have safety margins, because sometimes theory breaks down in the face of reality....




> There are a number of things that the blog post gets wrong. First of all, it was not Jason A. Donenfeld, the author of WireGuard, who added the ChaCha20-based cryptographic random number generator to Linux. It was me[1], as the maintainer of Linux's random number generator.

For the avoidance of doubt: I can confirm that I didn't add chacha20 to the kernel or have anything to do with that. I did move get_random_int() and get_random_long(), which are used for things like aslr and wireguard handshake ids, from some weird md5 horror to using Ted's chacha20-based rng (when rdrand is unavailable) a few years ago, but that's different from /dev/urandom. https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin... Ted is the one who added chacha20 for /dev/urandom.


Question: why did most OS’s move to chacha when they could have moved to AES and benefited from hardware support? Asking due to a friend who seemingly has to use a userland PRNG because getrandom is too slow


Linux has to support multiple architectures, and it's a pain in the tuckus to add conditional support for different CPU architectures which might or might not have AES acceleration, using different CPU instructions.

Linux does have support for it, but you have to drag in the crypto subsystem, which is optional, and it's a super-heavyweight and complex interface. Jason tried to simplify it for WireGuard, but ran into a lot of resistance, and he's now adding WireGuard with an interface layer to the crypto subsystem. He's still going to work on trying to add a simpler crypto interface, but a core principle of Linux's RNG is that it must always be present; I didn't want to make it an optional component that could be enabled or disabled at compile time. That means I couldn't rely on the crypto subsystem, even if I was willing to put up with its rather horrific interface. (There are some reasons for its complexity, but it adds no value to the random driver or WireGuard.)

In any case, if you really need more speed than the current ChaCha20 CRNG can deliver, you're doing something wrong; it's almost certainly not for cryptographic purposes. So if you do want that kind of speed: number one, you probably don't really need the security, and a PRNG will always be faster. Or if you do need the security, grab a seed value using getrandom(2), and then implement a userspace CSPRNG. (But I bet you really don't.)
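
If you really do want that, a minimal sketch of the "seed once, expand in userspace" approach might look like this. (Hedged: it uses glibc's getrandom() wrapper and libsodium's crypto_stream_chacha20() purely for illustration; any vetted stream cipher library will do.)

  /* cc -o fastrand fastrand.c -lsodium */
  #include <sodium.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/random.h>

  int main(void) {
      if (sodium_init() < 0) return 1;

      /* One syscall at startup: 32 bytes of kernel randomness as the key. */
      unsigned char key[crypto_stream_chacha20_KEYBYTES];
      if (getrandom(key, sizeof key, 0) != sizeof key) return 1;

      /* Then expand locally at memory speed; bump the 8-byte nonce per
         call so keystream blocks never repeat under the same key. */
      unsigned char nonce[crypto_stream_chacha20_NONCEBYTES] = {0};
      unsigned char buf[1 << 16];
      for (uint64_t n = 0; n < 4; n++) {
          memcpy(nonce, &n, sizeof n);
          crypto_stream_chacha20(buf, sizeof buf, nonce, key);
      }
      printf("generated %zu bytes locally\n", 4 * sizeof buf);
      return 0;
  }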


Thanks for the answer! My friend’s response:

> Let’s take this comment at face value and say Linux rng throughput is 180MB/sec: https://www.reddit.com/r/crypto/comments/ednj0x/comment/fbjr...

> Let’s take one of the more moderate types of AWS instances and suppose our max network speed is 10gbps = 1250MB/sec.

> Supposing we have an application that does not need to do any slow operations like reading from a database and is only performing very simple operations in memory (like, let’s say, generating a symmetric key), we see that the Linux rng is roughly an order of magnitude slower than our network throughput.


In what situation do you need to saturate your network pipes with cryptographically secure random data?


I'm sure Pornin can speak for himself better than I can, but it seems like this response might engage not so much with the substance of his argument as with a simplified subset of it.

It's not, so far as I can tell, Pornin's claim that RNG design is mooted by RDRAND. I would be surprised to see Pornin argue for a system design that replaced a standard RNG design (with some kind of secret-unpredictable-event secure seed) with bare calls to RDRAND.

But your argument that it's sensible to be cautious about RDRAND because of Dual EC might itself be misleading. Dual EC is a complete design for a CSPRNG. RDRAND is a component of a CSPRNG. In every proposed system I've seen that uses RDRAND, RDRAND is one of several secret inputs to the RNG. I've never seen a design that chains Dual EC; it was surprising to see Dual EC even used, anywhere, because it is so comically expensive. Most of the BULLRUN revelations (and the subsequent Juniper fiasco) weren't new discoveries about Dual EC, but rather the discovery of systems not thought to be relying on Dual EC that were.

Even Bernstein doesn't really argue that RDRAND hurts security in the design context you're talking about; for the same reason that you say cloud users should trust virtio, a "snooping" RDRAND is simply out of the threat model.

But of course, RDRAND isn't really the point. Clearly Pornin doesn't think that haveged makes sense in systems without RDRAND. So what are you really saying here? That he's not charitable enough about the historical context you were working in?


> Even Bernstein doesn't really argue that RDRAND hurts security in the design context you're talking about

Would you mind explaining https://blog.cr.yp.to/20140205-entropy.html then?


What's your response to Pornin's criticism of the 5.3 changes?

> Linux 5.3 will turn back getrandom() into /dev/urandom with its never-blocking behavior, because, quite frankly, Linus’s opinions on his own mastery of RNG theory exceed his actual abilities. The reasoning seems to be that if there is “not enough entropy”, then the application should make an interpretative dance of some kind to promote the appearance of new hardware events from which the entropy can be gathered. How the application does that, or why the kernel should not do it despite being much closer to the hardware, is not said


I wasn't a real fan of the 5.3 change, but it was better than some of the alternatives that people were proposing. At the end of the day, the problem is that userspace just shouldn't be trying to get randomness during early boot. But if it does, and we make the kernel more efficient, things can break, and Linus believes that if there is a user-visible regression, we Have to Fix It, even if the root cause is broken user space.

In this particular case, what triggered this was an optimization in ext4 in how we did directory readahead, which reduced the number of I/O's done during the boot sequence. Some user space program in early boot tried to call getrandom(2), and it blocked; because of the smaller number of I/O's, on some hardware platforms this stopped the boot sequence in its tracks, which resulted in no further interrupt events, leading to no further entropy, leading to an indefinite hang.

So what do we do? We could have reverted the ext4 optimization, but all it really did was draw attention to the fact that, in the absence of a hardware random number generator which everyone trusted, what we had in terms of CRNG initialization was fragile.

Now, the blog posting is inaccurate here as well: it is not the application which needs to make an interpretative dance of some kind. We are actually doing it in the kernel, and there is at least some hope that on an x86, making some assumptions about how the caches and clocks work, it probably is secure. I am actually worried that it won't be secure enough on simpler RISC cores, such as ARM, RISC-V, MIPS, etc.
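
To give a flavor of what that in-kernel dance looks like (the real thing is try_to_generate_entropy() in drivers/char/random.c; this userspace sketch is just an illustration, not the kernel's code):

  #include <stdint.h>
  #include <stdio.h>
  #include <time.h>

  static uint64_t now_ns(void) {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
  }

  int main(void) {
      uint64_t pool = 0, last = now_ns();
      for (int i = 0; i < (1 << 20); i++) {
          uint64_t t = now_ns();
          /* Mix in the timing delta. On a big out-of-order x86 with
             caches and a fine-grained clock, the low bits jitter; on a
             simple in-order core with a coarse clock they may barely
             vary, which is exactly the worry for ARM/RISC-V/MIPS. */
          pool = (pool << 7 | pool >> 57) ^ (t - last);
          last = t;
      }
      printf("pool: %016llx\n", (unsigned long long)pool);
      return 0;
  }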

The real right answer is that we should be pulling from as many hardware random number generators (that are designed for crypto purposes) as are available, such as from UEFI, or the TPM, and mix that in as well. We'll probably continue to have config options so that the person building the kernel can decide whether or not those will be trusted. That hasn't happened yet, but I've been trying to recruit some volunteers to implement this in UEFI boot, or using NERF or Coreboot, etc. Until we do, or for hardware that doesn't have trusted hwrng's, starting in 5.3 we now have an in-kernel interpretative dance as the fallback, for better or worse.

I'm not super-fond of that, but it was better than the alternative, which was no interpretive dance, and simply having getrandom(2) return "randomness" regardless of whether or not we thought it was random. On modern x86 processors, we will be mixing in RDRAND, so if you trust RDRAND, you'll probably be OK, interpretative dance or not. But the big worry is going to be on simpler CPU's such as RISC-V.

Ultimately, there are no easy solutions here. Arguably, just gathering timing events during the boot was also an "interpretive dance", since how much uncertainty there really is from SSD operations, and whether the SSD is using an oscillator different from the one used by the CPU, etc., involves a certain amount of hand-waving. So the only real solution is real, carefully designed, hardware RNG's. But then the question is, how can you be sure they are trustworthy? This conundrum has always been there.

For myself, I use a ChaosKey[1] and make sure it is contributing to the entropy pool before I generate long-term public keys. Of course, can I be sure that the NSA hasn't intercepted my ChaosKey shipment and trojaned it, the way they did with Cisco routers? Nope. I can only hope that I'm not important enough for them to have bothered. :-)

[1] https://keithp.com/blogs/chaoskey/


> Now, the blog posting is inaccurate here as well: it is not the application which needs to make an interpretative dance of some kind. We are actually doing it in the kernel,

Yeah, I was thinking the article looked inaccurate there. Thanks for confirming.


>We'll probably continue to have config options so that the person building the kernel can decide whether or not those will be trusted.

My understanding is that XORing a trusted seed source with any number of untrusted seed sources results in a trusted seed source. What would be the point of such kernel options?


The problem is "trusted" is as much a social construct as it is a technical one.

For example, if you are an NSA employee, you might be utterly confident that the NSA didn't twist Intel's arm to put a backdoor into RDRAND --- e.g., that it isn't AES(NSA_KEY, SEQ++) --- and even if it were, you would be sure that as a US citizen, you would be safe from intrusive attacks to spy on your communications without a FISA warrant, which of course would only be issued with super-scrupulous attention to legal process, the recent IG report on the Carter Page FISA warrant to the contrary.

In 2019, if you are a Republican member of the House of Representatives, such as Devin Nunes, you might be sure that the FBI is playing fast and loose with all FISA warrants, and so no one is safe from politically motivated investigations, especially if you were working for the Trump campaign, such as Carter Page.

See? Two different people might have very different opinions about whether a particular source should be trusted or not.


My point is that you don't have to trust any particular source of randomness. The NSA can backdoor RDRAND all they want. As long as there is a single legit source of randomness XORed in, the NSA is just wasting their time.

So having options to remove some sources is pointless and can only make things worse, never better.
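
In code, the point is just this (a hypothetical combine_seeds() helper, nothing from the kernel):

  #include <stddef.h>

  /* XOR n seed buffers of equal length together. If any single input
     is uniformly random and independent of the rest, so is the output;
     a backdoored source that can't see the others can't undo them. */
  void combine_seeds(unsigned char *out, size_t len,
                     const unsigned char *seeds[], size_t nseeds) {
      for (size_t i = 0; i < len; i++) {
          out[i] = 0;
          for (size_t j = 0; j < nseeds; j++)
              out[i] ^= seeds[j][i];
      }
  }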


Sure, and that's why we still have the entropy pool in Linux. It's where we do the mixing.

My apologies if I didn't understand the point you were making.

And yes, I would love to see us adding more entropy sources in the future. The problem is that I don't have a lot of time, and I have a lot of other projects to work on, but I've been trying to recruit people to add support to pass entropy from UEFI into the kernel (which requires changes to Grub, NERF, Coreboot, etc.), to pass entropy from one kernel to the next when using kexec, etc. I can't do it all myself; this is something that requires a lot of people to make things better.


Check out https://blog.cr.yp.to/20140205-entropy.html. Yes, XORing RDRAND with your existing secret can hurt security in certain cases.


Thanks for the detailed answer!


I think that's a pretty bad misrepresentation of the situation. The root problem is that security-first people have a hard time trusting anything for real entropy to seed the RNG at first boot.

RDRAND is used, but not trusted by default (and there was a fault in AMD's RDRAND with old un-patched BIOSes). systemd applies a seed file generated during the last boot, but again disables the option to credit the entropy pool, because it does not trust that this wasn't an improperly prepared and distributed VM image with the same seed file used over and over. There are lots of things that very likely contribute usable entropy, but the kernel can't know with absolute certainty that any particular one does not have some flaw. The final straw was ext4 optimizations and modern SSDs resulting in very few storage controller interrupts, plus some programs using getrandom() during early boot, which left boots locking up indefinitely for some users running the latest Linux kernel, ultimately due to paranoia about the RNG entropy sources.

So if you have to choose between failing to boot up, or just giving the best random numbers you can despite the lack of certainty/guarantees about sources of entropy, Linus prefers to not fail to boot up. I know that OpenBSD trusts that seed file, which Linux+systemd uses but does not trust, and I'm not sure what macOS and Windows do, but they probably trust RDRAND, which Linux also uses but does not trust. (My personal fix is to trust RDRAND; there's a kernel cmdline option for it.)
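
If memory serves, on kernels built with CONFIG_RANDOM_TRUST_CPU available (4.19 and later, I believe), that option is:

  random.trust_cpu=on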


Part of the problem is that Linux supports a much larger set of architectures and boot loaders than OpenBSD does. So trying to use a seed file which is read by the bootloader at boot time is hard.

Could systemd choose to read a seed file after it mounts the root file system? Sure, but then it's on systemd to believe that the seed file is always secure, even on IOT devices where the seed file might be imaged onto millions of devices with an identical value out of the box. Using a seed file means you also have to trust how the OS is installed; security is a holistic property involving the entire system's design and implementation.

Ultimately, it's not going to be up to kernel or systemd to tell users and system administrators what they should or shouldn't trust. If you trust RDRAND, and you're on x86, you can enable the config option or provide the boot command line flag, and you're all set. But I'm not going to tell you one way or another whether or not you should trust RDRAND. And even if I did, you could just reject my advice, either way. As I said in another reply, "trust" is as much a social construct as it is a technical one.


Specifically on

> it's good practice to periodically reseed the CRNG. And so that means you still need to have an entropy pool and some way of measuring how much entropy you think you have accumulated, and how much has been possibly revealed ("used") for reseeding purposes

Periodically re-seeding the PRNG can be argued to make sense, but doesn't require entropy estimation: consider https://en.wikipedia.org/wiki/Fortuna_(PRNG) (and the paper mentioned under "Analysis".) Not needing to estimate entropy does come at the cost of a longer time-to-reseed, but that still seems a good trade-off to me.

(Just in case you weren't already aware of that design.)
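
For the concrete scheduling rule: Fortuna keeps 32 pools and round-robins incoming events into them; reseed number r draws from pool i whenever 2^i divides r, so pool i is consumed only every 2^i reseeds and has had exponentially longer to accumulate entropy. No estimation needed. A toy illustration:

  #include <stdio.h>

  /* Fortuna's reseed schedule: reseed r uses pool i iff 2^i divides r.
     Higher pools are drained exponentially less often, so each has
     accumulated more entropy by the time it's used. */
  int main(void) {
      for (unsigned r = 1; r <= 8; r++) {
          printf("reseed %u uses pools:", r);
          for (unsigned i = 0; i < 32 && r % (1u << i) == 0; i++)
              printf(" %u", i);
          printf("\n");
      }
      return 0;
  }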


> I think a lot of people have forgotten what things were like in the early 90's.

Thank you for posting this.

“Those who do not know history's mistakes are doomed to repeat them.”


OT but the chacha20 code IIRC spills badly with either gcc or clang.


This is kind of hard to avoid without hand-coded assembly or careful use of vector intrinsics. The core of chacha20 is a 4x4 matrix, which would need 16 registers to hold without spilling (plus a few more for temporary use during the calculations). Both 64-bit x86 and 32-bit ARM have only 15 or 16 general-purpose registers.
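
For reference, here's the hot loop in question; this is the standard ChaCha quarter-round from the public spec (a sketch, not the kernel's implementation), showing why all 16 state words want to stay register-resident:

  #include <stdint.h>

  #define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

  /* One ChaCha quarter-round: 4 of the 16 state words, all live at once. */
  #define QR(a, b, c, d)                        \
      do {                                      \
          a += b; d ^= a; d = ROTL32(d, 16);    \
          c += d; b ^= c; b = ROTL32(b, 12);    \
          a += b; d ^= a; d = ROTL32(d, 8);     \
          c += d; b ^= c; b = ROTL32(b, 7);     \
      } while (0)

  /* A full double round keeps all 16 words of s[] hot, which is exactly
     what pressures the 15-16 GPRs on x86-64 and 32-bit ARM. */
  static void chacha_double_round(uint32_t s[16]) {
      QR(s[0], s[4], s[ 8], s[12]);  /* column rounds */
      QR(s[1], s[5], s[ 9], s[13]);
      QR(s[2], s[6], s[10], s[14]);
      QR(s[3], s[7], s[11], s[15]);
      QR(s[0], s[5], s[10], s[15]);  /* diagonal rounds */
      QR(s[1], s[6], s[11], s[12]);
      QR(s[2], s[7], s[ 8], s[13]);
      QR(s[3], s[4], s[ 9], s[14]);
  }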


FWIW x64 effectively has 32 GPRs worth of 32-bit registers, which is what chacha20 needs. You'd need to access them as something like:

  add eax, r8d
  rol r8, 32
  add edx, r8d
though, so several are useless due to clobbering, and you'd need to schedule things rather weirdly. Also, where did you get the RNG state from, if you don't trust memory to be read-what-where-proof?


You could also use the SSE/AVX registers.


Probably true (I think all x64 hardware includes SSE at least), but they did imply they were trying to use just the scalar instructions. (They also said no hand-assembly, but something like:

  register uint64_t ra, rd, ...;
  ra += r8 & 0xFFFFFFFF;
  rd += r8 >> 32;
should accomplish much the same thing given a reasonably well-developed compiler.)


Off topic or no, this is very disturbing. Are we talking about power noise, or timing noise? The former is generally hard to counter, but we should be able to get our timing right. And doesn't DJB understand all this well enough to get the code right?


Your parent is talking about register spills to memory. Using memory instead of registers may be somewhat inefficient, but it's still constant-time (i.e., no timing dependence on the key, etc.).


If the attacker has Meltdown/Spectre-style access to your stack, register spills (especially but not exclusively if not overwritten when you're done) can compromise your keys, but you need a rather weird threat model for them not to just read the actual keys out of whatever storage you got them from in the first place.



