Linus on /dev/random: "We actually know what we are doing. You don't." (sophos.com)
139 points by marcuspovey on Sept 12, 2013 | 74 comments



We allow the wild web to have access to our closed source GPU driver blobs but we elaborate tinfoil hat theories about rdrand. This is insane.

Regarding "(I'm not sure I agree with Linus that mixing in a known-tainted RDRAND stream would nevertheless invariably improve randomness, but on the surface, it shouldn't reduce it.)": I think it's fair to say it would, in practice.

Even if the NSA knows how to predict the output of rdrand (because it's really some stream cipher with a known key or something), most people don't. Therefore, it wouldn't improve the randomness of the final stream from the NSA point of view, but it would from the point of view of any other attacker not in the secret. So I think it's fair to say it can't do harm and it can actually do some good.
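
To make that concrete, here is a minimal sketch (Python; the names and the 16-byte sizes are mine, purely illustrative) of why XOR-mixing an attacker-known stream with an independent good stream costs nothing, assuming the tainted source cannot see the other input:

    import os
    import random

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # Stand-in for a "tainted" RDRAND: random-looking, but the attacker
    # knows the seed and can reproduce the stream.
    known_to_nsa = random.Random(1234).randbytes(16)
    good_entropy = os.urandom(16)   # stand-in for the rest of the pool

    mixed = xor_bytes(known_to_nsa, good_entropy)

    # From the attacker's point of view the mix is exactly as strong as
    # the pool alone (they can strip their own stream back out); from
    # anyone else's it is at least as strong as either input.
    assert xor_bytes(mixed, known_to_nsa) == good_entropy

The crucial assumption is independence; the cancellation attack discussed further down is precisely what happens when that assumption fails.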

See also the previous discussion:

https://news.ycombinator.com/item?id=6359892

The consensus seems to be that if the NSA can backdoor rdrand so deeply that it can keep track of the CPU state and the contents of RAM, then you might as well throw away the whole CPU: why would you choose to trust all instructions but rdrand? They could have compromised the interrupt vector, the syscall vector or anything else.

This feels like "rumor based cryptography" or more precisely "FUD based cryptography". We're just running in circles.


This is true. I think it is unlikely that rdrand is this deeply backdoored. However, I do think that for something this critical the comments should match the code better, and working on fixing the problems identified in the article is probably prudent.

It is worth noting that police used to be able to exploit FireWire DMA to bypass disk encryption and copy encryption keys out of memory on any system with a FireWire port. That has been fixed, and a CPU-level exploit for crypto seems unlikely to me, because making something that not only works consistently but doesn't slow down general-purpose programming would be an engineering marvel.

This being said, having clear, unimpeachable code in these areas is a good start because it helps ensure that other problems are not lurking under the surface.


I agree that the comment on get_random_bytes should be fixed; it's misleading. However I don't think modifying code that has no known bug or weakness because of a rumor and some handwaving is good policy. It's more likely than not to introduce a regression and create real trouble.

If it ain't broke...

Or in this case:

If you can't prove it's broke...

EDIT: I would also remind everybody that if they really don't trust rdrand for any reason, they can just add the "nordrand" kernel boot parameter and disable this code. It's a non-issue.


> We allow the wild web to have access to our closed source GPU driver blobs but we elaborate tinfoil hat theories about rdrand. This is insane.

This isn't cryptography, it's politics. The 'taint' is ideological.


> why would you choose to trust all instructions but rdrand

Presumably, there are limits to the amount of silicon that Intel/NSA can devote to backdooring everyone. Since other instructions are supposed to behave deterministically, it could be expensive to backdoor them in a way that would not easily be discovered.

On the other hand, RDRAND could be a straightforward Dual_EC_DRBG implementation, which would be a very cheap and effective backdoor that would also have the nice benefit of keeping people's communications secure against everyone except the NSA.

Of course, there's also the possibility that there's no backdoor, but that the implementation is still buggy.

There's no reason why our trust in the hardware has to be all or nothing.


Yep.

The main concern about random number generation is that if you use the output of RdRand exclusively, it's fairly easy to backdoor in such a way that it looks completely random to all outside observers, but the NSA has a key that could allow them to predict the output based on past output. However, mixing it with the state of an already good random number generator (which the kernel needs to have anyhow for platforms without an RdRand instruction) pretty much negates that attack.

No one (reasonable) is actually particularly concerned that Intel themselves has backdoored RdRand. But they do want to ensure that they are protected in the case that at some point in the future, some other architecture adds a random number generation instruction and that is backdoored. And since you need to continue to do software random number generation anyhow, the best way to use RdRand is to use both, mix them, and get the best of both worlds (fast entropy available in environments with limited entropy sources, and an auditable software random number generator).

All of the other attacks that people are suggesting require a whole hell of a lot more silicon, require changing the behavior of unrelated instructions, and so on. They're just too complex and too fragile to be feasible.

Instead, why don't we spend our time naming and shaming the companies that actually do use backdoored random number generation, like RSA security:

> Apparently, RSA Security BSAFE Share for Java 1.1 has DUAL_EC_DRBG as a default:

> "The default Pseudo Random Number Generator (PRNG) is the Dual EC-DRBG using a P256 curve with prediction resistance off."

> I didn't find an obvious link for the equivalent C/C++ library documentation, but the RSA BSAFE CNG Cryptographic Primitives Library 1.0 FIPS 140-1 Security Policy document from RSA Security at the NIST site says (p.14):

> "The Module provides a default RNG, which is the Dual EC DRBG, using a P256 curve and SHA-256."

> Additionally, the RSA BSAFE Crypto-C Micro-Edition 3.0.0.2 FIPS 140-1 Security Policy says (p.17):

> "For R_FIPS140_MODE_FIPS140_ECC and R_FIPS140_MODE_FIPS140_SSL_ECC, Crypto-C ME implements a Dual ECDRBG (Cert. #137) internally"

> I'd be more than a bit wary of any product using RSA Security libraries.

From: https://lwn.net/Articles/566329/

For a while, people were confused about why the NSA would have gotten the Dual EC DRBG random number generator introduced into a standard, as it's so much slower than most of the other available random number generators, and it was only a year after the standard was released that the potential backdoor was pointed out. Well, apparently RSA has decided that it's the best RNG available, perhaps with some influence from the NSA.


This whole discussion about rdrand reminds me of people arguing about what strength the secondary lock on their upstairs back window should be when the downstairs floor has single pane glass windows all around.

Even if rdrand is backdoored, it would have to be a significant supplier of entropy in the resulting random number for this to be a meaningful attack vector. As soon as you mix it with other (good enough, large enough) sources of entropy, you get a situation where some other attack is likely to be far more feasible than using the knowledge of the bits that rdrand contributes to the entropy pool.

Such as:

  - good old b&e and placing a keylogger or hardware bug
    (very easy to hide in a keyboard)

  - a compromised bit of the OS

  - compromising the application that you use to encrypt your messages or finding a significant weakness in the application.

  - doing any of the above with the recipient


Did you actually look at how the Linux kernel is mixing RDRAND output with other randomness, or read the comments by the author of the original change.org petition? Because of the way Linux mixes RDRAND output with other entropy using XOR, a malicious RDRAND implementation can easily make the output of /dev/random totally deterministic whilst being completely indistinguishable from a correctly-functioning implementation except to the attacker.

All it has to do is detect the code sequence in question and XOR the output of RDRAND with the randomness from the other entropy sources before returning it. The two XORs cancel out, and this is completely undetectable because there's no way to distinguish between a true random bitstream, a good PRNG, and a good PRNG XORed with data you provided based on the bits themselves.
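
A toy model of that cancellation (Python; pool and nsa_stream are hypothetical stand-ins, not kernel names) makes the point, assuming the CPU really can observe the value its output will be XORed with:

    import os
    import random

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    pool = os.urandom(16)  # the kernel's other entropy, visible on-die

    # What the attacker wants /dev/random to emit: random-looking to
    # everyone else, but reproducible by whoever knows the seed/key.
    nsa_stream = random.Random(0xdeadbeef).randbytes(16)

    def malicious_rdrand():
        # A backdoored CPU that can see `pool` pre-cancels it...
        return xor_bytes(pool, nsa_stream)

    # ...so the kernel's final XOR removes the pool entirely:
    output = xor_bytes(pool, malicious_rdrand())
    assert output == nsa_stream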


I keep hearing this argument, but I don't feel like it's relevant to RDRAND. Let's say the numbers are generated by XORing RDRAND as "a" and the other parts as "b", such that for any given call:

/dev/random = a XOR b

If the NSA only knows "a", that's fine, "b" is still pretty random. They can't compromise the randomness of "b" unless they know "b".

Now if they know "b", then we're screwed whether we use RDRAND or not, and safe encryption using Intel chips is just impossible. However I don't think anybody is suggesting that.


There's a difference between the NSA being able to add a malicious circuit into a CPU that has access to "b" and being able to leak the value of "b" to systems they control. Thankfully, in the case of RDRAND they don't have to do the latter - they can just neutralize the effect of "b" on the result on the CPU itself.


> All it has to do is detect the code sequence in question and XOR the output of RDRAND with the randomness from the other entropy sources before returning it.

How is that going to work? i.e. how is RDRAND going to 'detect the code sequence'?


RDRAND wouldn't, the control unit would. Whenever it sees the XOR macroinstruction it checks the second operand to see if it's RDRAND. If so, it doesn't order an XOR; rather it just copies the RDRAND value to the first operand address.

That's the straightforward way of doing it. The 'finesse' would be to leave RDRAND as a secure random source, but in the case of it being used as an operand of XOR, simply to ignore RDRAND entirely, substituting an insecure stream. The advantage, other than reduced risk of detection, would be that asynchronous access to RDRAND wouldn't scramble the otherwise breakable output.


Only Intel engineers know exactly how to do this, and I doubt they're allowed to reveal hardware internals, but by the point RDRAND actually executes, the next few instructions should already have been decoded and the data flow between them analyzed. In theory it's not terribly hard to use that information to change the behaviour of RDRAND.


Honestly, and for lack of a more suitable expression, put up or shut up. If you think rdrand actually reads back the output of the RNG from RAM in order to nullify it, then show it.

It's actually possible: you can verify that the timing of the instruction conforms to what it's supposed to be doing, and you can check for RAM accesses. RAM accesses are slow and easy to detect (I'm sure there are even hardware counters for that kind of thing on modern CPUs).

So unless you can produce any kind of hard evidence that would cast even the shadow of a doubt on what rdrand is doing: this is pure FUD.

Finding out how rdrand is truly implemented is hard, but if it's truly the evil instruction of doom that sends images from your webcam to the NSA, then it should be trivial to show it isn't behaving as it should.


Instead of saying put up or shut up, let's think if this is within the capabilities of Intel or an impossible feat.

First off, the RNG output doesn't have to reside in RAM, as it could already be in cache. So you're not going to detect it by looking at RAM accesses. Also, it's not 1992. Modern architectures and modern operating systems are going to throw out the instruction timings from Intel manuals. A cache miss and you're toast.

Now if you have a dedicated pipeline for executing an RNG within a code cache, all you would have to do is work out its inverse. Very plausible.

Unless the above sounds magical, it does seem like this is a possibility. And as it's been shown that the NSA is using its enormous budget to pay US companies to help do its bidding, this does seem like it's within reach.


Aren't the next instructions going to be in the code cache? So "detecting the code sequence" would seem trivial.


Remember, this is a hardware implementation you're talking about. Nothing is ever "trivial". It would take a significant amount of extra silicon to add this kind of detection logic.

The reason for the basic paranoia about not trusting RdRand directly is that it's pretty easy and cheap to make it generate a random number stream that looks random, but is predictable (the RdRand function already is documented to use AES; all you would need to do is make it do AES of an incrementing integer sequence, rather than actual random noise, which is a pretty small change). And heck, if RdRand isn't backdoored (no one has presented evidence that it is; it's just a standard level of paranoia because subverting the random number generator is a favorite technique of the NSA), it might be in a future version, or AMD's or ARM's implementation of a similar instruction in the future may be.

Detecting a code sequence and subverting it would be far more difficult. For one, there's the extra silicon. There's the extra chance of that change introducing other noticeable behaviors. There's the extra chance of discovery. It's just not worth the costs. And furthermore, if you really are worried about that, then there's no reason to limit your paranoia to the RdRand function; you may as well say you can't trust the chip to run any crypto code at all.
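
For contrast, here is roughly how cheap the "AES of a counter" backdoor described above would be, sketched in Python with SHA-256 standing in for AES so the example needs no crypto library (the key and output widths are hypothetical):

    import hashlib

    SECRET_KEY = b"known only to the manufacturer"  # hypothetical
    counter = 0

    def backdoored_rdrand():
        global counter
        counter += 1
        # The output passes statistical tests, but anyone holding
        # SECRET_KEY can regenerate the whole stream by searching the
        # (small) counter space -- no extra detection logic required.
        return hashlib.sha256(
            SECRET_KEY + counter.to_bytes(8, "big")).digest()[:8]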


We can already rule out the extra silicon costs. Don't forget that a program like this one would be subsidised.

If you can't trust a chip with one instruction, why trust it with the others? I'm in no disagreement with you here. I was just responding to jgrahamc asking how it was going to work.


> All it has to do is detect the code sequence in question

Extraordinary claims...


> detect the code sequence in question and XOR the output of RDRAND with the randomness from the other entropy sources before returning it.

How is that easy? No, predicting or detecting that the returned value of your assembly instruction will later be XORed with some other value, in all the machine-code variants that different versions of gcc will produce, is not easy.

It is theoretically possible if you have access to the CPU design and can modify it, but even then it is very non-trivial, if even doable in the general case.

There are several free CPUs around that you can instantiate on an FPGA and boot Linux on - if someone makes a proof-of-concept rdrand on one of these that can detect the future bit operations on the value (even when it's moved to another register or to/from main memory) and cancel out that bit operation, then I'll believe it's possible.

Until then, I'm more (more compared to not at all is still very little) worried that:

* the chip (whose part number google knows nothing about) in my dsl modem has a backdoor and is able to mirror all its traffic

* that the baseband chip in my HTC has the same ability - in addition to the known ability of being able to report its gps location without informing me

* that the NSA probably still can read my gmail mail

* that my raspberry pi SoC can contain an unknown component that dumps its memory out the ethernet card

* that the latest iPhone perhaps complies nicely with the 3GPP TS 33.108 spec.


Did you read the article? They explained why an implementation of RDRAND as "XOR together the contents of all registers and return it" would result in removing nearly all of the entropy in the state vector. And it proposed a simple solution: modify the code so that the hardware entropy is mixed in earlier in the process (in which case it WOULD require the prodigious feats you are talking about).


get_random_bytes() documentation:

    This function is the exported kernel interface.  It returns some
    number of good random numbers, suitable for key generation, seeding
    TCP sequence numbers, etc.
Here is the accepted commit that makes get_random_bytes() use RDRAND directly:

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...

Note that this was version 3 of that patch. Versions 1 and 2 also took control of /dev/urandom. Here is v2:

http://thread.gmane.org/gmane.linux.kernel/1173350/focus=117...

A year later, Ted Ts'o made get_random_bytes() go through the usual entropy pool and added get_random_bytes_arch() for a consumer that doesn't want to go through the entropy pool. (The core kernel does not currently use get_random_bytes_arch() anywhere.)

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.g...

Note that the heated discussion came before v3 of Anvin's patch, and thus /dev/urandom was included. Matt's objections were perhaps not expressed very clearly, but Linus was pretty cavalier in overruling Matt Mackall (the /dev/random maintainer at that time) and I think his retort to George Spelvin's very rational objection was unreasonable:

http://thread.gmane.org/gmane.linux.kernel/1173350/focus=117...

I find it scary that these commits made it as far as they did. Note also that on the day of the leaks (ironic timing), Ts'o had to shoot down a RedHat engineer proposing to once again make get_random_bytes() bypass the kernel entropy pool.

https://lkml.org/lkml/2013/9/5/212

As for other locations, the point is to be undetectable while deployed at massive scale. Keyloggers and active backdoors are much higher risk. Great for a targeted investigation, but terrible for untargeted passive surveillance.

https://plus.google.com/u/0/117091380454742934025/posts/XeAp...


Instead of discussing on LKML or other forums, he decides to create a petition on change.org. Got what it deserved.

Somebody should be apologizing here, and it isn't Linus.


Please do not submit professional troll articles.


Agreed. This article is very one-sided, and after I posted a calmly worded comment regarding Linus's standpoint[1] on his attitude, it was deleted. The article is simply link bait and is not professional journalism.

That being said, I have more confidence in Linus's knowledge regarding /dev/random. Mostly because XOR in this context is secure:

1. XOR is an incredibly powerful encryption algorithm (not primitive); one of the best we have. The problem with XOR is that you MUST use a UNIQUE one time pad (that is the length of the data) for every message AND you need to be able to securely transmit that one time pad. AES CTR is effectively using AES to create a one time pad for XOR encryption, as an example.

2. The prior steps are effectively creating an irrecoverable OTP, meaning that any malicious intent in RDRAND is effectively encrypted away.

[1]: http://marc.info/?l=linux-kernel&m=137391223711946&w=2


The argument is that RDRAND may have access to the previously generated OTP. If that is true, a malicious RDRAND can cancel out any randomness from that OTP. In that case the "incredibly powerful encryption algorithm" XOR can be tricked into generating a stream of zeros, Shakespeare's complete works, or whatever you like.


Agreed. However any form of "tricking" would be ultimately pointless, because a small adjustment could be made to random.c. Existing CPUs can't be "changed" (barring microcode, but avoiding that is simple - don't install new microcode), so they would no longer work against the system.

In addition, the number of transistors required to actively circumvent random.c is prohibitive: CPUs would need to be significantly larger to pull off attacks like this.


Who the christ is feeding the output of /dev/random into a cryptographic function without checking that what they read is in fact NOT just a stream of zeroes? Because that's an outcome which can happen with any truly random number generator just by chance - it's unlikely, but not unreasonable.

Hence debiasing and the like.


If they can make it look like a stream of zeros, they can make it look like a random stream which is actually a pseudo-random stream as well.

Also, they might leave some randomness in, but it can be a small enough amount of entropy that it would still render crypto keys vulnerable.


Doi. This is an obvious danger and I feel stupid for not putting two and two together.


If you think RDRAND is examining the L1 and registers in order to derandomise it, why wouldn't the evil chip just skip bothering with RDRAND and instead just attack that random buffer it knows how to find...?


Mucking with RDRAND can easily be done in such a way that it's provably impossible to detect any differences in behaviour. It's much harder to guarantee that if you start mucking with buffers in memory, since there are so many clever ways a developer could check their work and catch you out.


How does the provably-impossible-to-detect bit work again? Surely it'd fail a chi-square test on reruns with the same input? The attack can't use external input, e.g. the clock, because the NSA can't correlate generating the random number with seeing it in flight.


"The attack can't use external input e.g. clock because the NSA can't correlate generating the random number with seeing it in flight."

It could potentially use the number of milliseconds since the last hour, or maybe the state of the branch predictor, or any number of other things that have exploitable biases (with NSA resources, 1/1000000000 is pretty good odds).


Why shouldn't the CPU have an internal counter that is backed up in flash memory between reboots? 128 bits would be enough, with the highest bits set to the processor serial number. Using this counter in AES-CTR mode – i.e. encrypting the counter with the secret key to generate the pseudo-random data – the NSA could reconstruct the internal CPU state from a single block (16 bytes) of random data. As much random data is published verbatim, for example as nonces, getting such a block should not be a problem.



Yeah, or if the onboard flash memory is too difficult to implement, they could even just initialise the lower bits of the counter from the real hardware RNG every boot. Either of these options would be statistically indistinguishable from true randomness unless you knew the key.


When I first learned about RDRAND I was thrilled, because I naively assumed this would be just a hardware RNG with a direct link to the CPU register vector, capable of delivering randomness at the speed of cache hits or better. This would be an end to all struggles with non-crypto PRNGs (which have zillions of uses in science, mostly in Monte Carlo methods and machine learning, but also some in "consumer computing" like raytracing).

But no, Intel made a sluggish hardware PRNG that occasionally eats some thermal bits just to make crypto guys happy -- and bang, now everybody thinks it is an NSA backdoor.


You think 500MB/s [1] is slow? Or are you concerned with the latency of each call?

Also I don't understand why you rant against PRNGs. Do you know that this stretching of actual random data makes RDRAND considerably faster than using actual randomness?

[1]Source: http://stackoverflow.com/questions/10484164/what-is-the-late...


That 500MB/s is a merged stream when using all cores; I got about 140MB/s on one (from my answer in the SO thread you linked).

And I'm not ranting, I'm just crying over a lost opportunity. Nowadays you must spend time thinking about which PRNG to use and how to implement it to satisfy some quality/speed trade-off; an RNG (infinite cycle by design) directly connected to the CPU (no transfer bottlenecks) that passes Diehard (read: random enough for science) would be a silver bullet.

Yes, the PRNG makes RDRAND faster than its entropy source in its current design; but it is not hitting any wall. Intel engineers could have made it way faster if they had focused on the maximum throughput possible, not just big enough for crypto.


> And I'm not ranting

Sorry if I offended you.

I'm still surprised that this poses a challenge for your applications. I thought fast non-cryptographic RNGs were a solved problem. How much random data do you generate? If you use a significant amount of CPU just for that, I doubt it would be feasible for Intel to build a cryptographically secure RNG with the same throughput without significant extra costs (think one extra core). (I'm no expert on that subject though.)


I'd want to hear Linus respond to both the OP and Taylor. But a quick thought: do people like Bruce Schneier ever read this file? I think in the next year or two we will see a huge amount of research going into finding "backdoors" and suggesting implementation weaknesses. I am not going to speculate too much about who is an NSA mole or why certain code got into the codebase. I'm more interested in researchers finding more weaknesses, like Barton Miller did by fuzzing unix programs back in the 90s! I wish I had enough knowledge to help out.


Bruce Schneier

I have all the respect in the world for the guy, really, but then I read his Guardian article "how to keep your data safe and secure", where he mentions keeping an "air gap" between computers with sensitive data - and then he himself admits that, for all his methods and safeguards for the leaks he is working with, he uses Windows and USB sticks to transfer (encrypted) files between them.

He uses Windows.

He uses Windows.

And he was claiming in the article that free and open source software is better for security. Oh really, so your encrypted files on your USB sticks for sure can't spread malware across your air gap through a file-system exploit? For sure it has never been done before that a file system could be used to take over an OS's internals, oh never.


It's still good advice. In fact, unless you audit the source of your OS, its compiler, hardware and any software you need to use, you can't really make any guarantees either.


I can make you the guarantee that Microsoft is actively cooperating with the NSA and has backdoors in Windows.

I can guarantee you that few people have access to the Windows source to check it.

I can also guarantee you that many more people have access and have audited the source code of a GNU/Linux system.

I can't guarantee you that GNU/Linux doesn't have backdoors. But for all intents and purposes it is just plain wrong to use Windows for any security-related activities.

Would you be more comfortable leaking documents using a GNU/Linux livecd or Windows?


Sorry, I think you are too paranoid. You are. Closed source can have backdoors and open source can have backdoors too. I like to use Linux and my Mac to do programming work, but it doesn't mean I don't care about Windows.

If you think Linux has less chance of getting a backdoor, well, look at all the speculation we have these days. If the NIST cryptographic standard has reduced security, as many people believe, then your communication is dead.

If you believe that all ISPs are cooperating with the US government here in the US, why the hell are you still using the Internet as we know it? You are guaranteeing that only an open source system will not have a backdoor, while a closed-source one must. Microsoft collaborates with the NSA in one way or another. Is that a secret? Most of their "collaboration" probably comes from business things like military-type projects. They might have a backdoor. But a guarantee is a big assumption. If you don't have solid proof then you are making a false accusation.

It's like saying that because your friend shakes hands and hangs out with a cold-blooded murderer, he must be a cold-blooded person too. Plain wrong, ignorant and simply stupid.

Security involves trust. If you don't trust your USB stick, your own product, then you will not get any security. There is nothing wrong with transferring things between a USB stick and a Windows computer. It's fine. You can still run SCP and SSH over your Windows socket. Is that now weaker because damn MS is working with the NSA, as you say?

Someone give this man a cookie, because we are obviously living in a fantasy.

If you think your iPod and your laptop don't have backdoors, given how MS and Intel are working with the NSA, you are contradicting yourself. Stop using the Internet and stop using anything. Then you are safe.


I think you are throwing the baby out with the bath water here.

Bruce Schneier can be presumed to have evaluated his risks; he's a very well known author on exactly that.


Using [only] Windows should automatically disqualify one from giving any security-related advice.

At least Linux and BSD source is viewable by people. Lots of people.


The same lots of people that immediately noticed when Debian shipped a broken openssl? Or the ones that didn't notice at all for years until somebody noticed identical certs showing up in the wild?

https://en.wikipedia.org/wiki/Random_number_generator_attack...


Is rdrand really the very last stage? As in, the output is "stream XOR rdrand"? If that is really the case, it puts full, 100% trust in Intel not to insert a backdoor. It wouldn't even be hard: all the CPU need do is check for the XOR operation with rdrand as an operand, and instead of performing the XOR, substitute the backdoored pseudo-random stream instead. No runtime monitoring of internal state would be necessary; the whole thing could be done at the assembly-to-microcode translation layer.


Where do I go to elect this guy "King"? All of his suggestions were reasonable and measured.


If I remember right, the original reasoning why this could be a problem was something along the lines:

You are using sources s1, s2, s3. The final result is the combination c(s1,s2,s3). Now somebody screws something up and the sources s1 and s2 start returning just constant values. If you were just using s1 and s2 you would immediately notice this. However, since you are combining all three, you are getting something that looks good but which might not really be secure if the source s3 is compromised.

(I think this came up in some HN discussion some weeks ago)

I'm not familiar with the Linux implementation so I don't know if this has any meaning there.


The problem with this argument is that it would also apply to any low-quality random source, and Linux (and other OSes) all use sources of dubious quality. The reason to use them is that when we mix them in, they can only improve matters.


I never followed this at all prior to reading this article so forgive me if this was covered outside the scope of this write-up, but...

If the CPU did give you a RDRAND value that was pre-baked to weaken the number it thinks you're going to XOR it against, it would be easy to detect this by feeding RDRAND the same input state repeatedly and seeing if there is a pattern to what is spit out, or if it is indeed statistically random... So why hasn't someone (who thinks RDRAND is a trap) done that instead of just claiming it could maybe be doing something fishy?


Ah but that is not quite as easy as you might think. Maybe RDRAND is actually a keyed PRNG, so it is computationally hard to distinguish from "true" randomness. Maybe there are only 1000000000 possible keys, so while you could theoretically detect the back door, it is impractical/unlikely for anyone with less than NSA resources to do so. Conveniently, the NSA has the resources to exploit such a bias.

To further throw you off, it could be the case that the back door is only exploitable after, say, 1kB of output, and it is "truly random" prior to that. That would be plenty useful for the NSA's purposes. It might even be that the back door is only exploitable for some part of the output, maybe the part where you are most likely to find a suitable prime number during some key generation process. Intel periodically releases new products, so the NSA would have plenty of chances to update the backdoor as software changes.
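
To illustrate why a deliberately small keyspace matters, here is a toy detector (Python; everything here is hypothetical and scaled down to 2^16 keys so it runs in seconds). The work to expose such a backdoor grows linearly with the keyspace the designer chose, so 2^30 keys already takes real resources and 2^128 is hopeless:

    import hashlib

    def keyed_stream(key, n):
        # A toy keyed PRNG standing in for the hypothetical backdoor.
        return hashlib.sha256(
            key.to_bytes(4, "big") + n.to_bytes(4, "big")).digest()[:8]

    # Pretend these 4 outputs were captured from the RNG under test.
    observed = [keyed_stream(0xBEEF, i) for i in range(4)]

    def find_key(keyspace_bits):
        # Brute-force the keyspace, checking whether any key reproduces
        # the observed outputs.
        for key in range(1 << keyspace_bits):
            if all(keyed_stream(key, i) == observed[i] for i in range(4)):
                return key
        return None

    print(hex(find_key(16)))  # 0xbeef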


One counter argument I can think of:

The rogue RDRAND would become active only in a CPU in which the back door has been exploited. It wouldn't be active off the shelf.


How would you distinguish actual randomness from the output of a CSPRNG with a known or weak seed? Patterns won't be apparent in either output.


The gist of the argument as I understand it is that some people think Intel's chip (at the chip level) is taking a look at data that the RDRAND result will be used as an XOR against and using that to mess with the result RDRAND returns in some way to weaken the overall random number.

If this were true, and you set up a repeatable test situation in which you force the other parts of the RNG to generate the same numbers prior to RDRAND, and then did the RDRAND and captured the results, then I don't see how one could argue RDRAND is compromised in this way if the results coming out of it over time appear to be statistically random.

...unless people think the chip is also detecting situations where you are actively trying to fool it by setting up repeated simulations of the same initial value to be XORed, which strains credibility way beyond what I'm willing to believe.


Actually that is quite simple. For simplicity, let us assume RDRAND will only attack the Linux RNG. RDRAND first generates its own weak random stream w_k. When it predicts that Linux will generate l_k, it outputs l_k xor w_k; thus the final output of the Linux RNG will be w_k. As w_k looks random to everyone who does not have the private key, you cannot check that there is anything wrong with w_k or w_k xor l_k.


"For simplicity let us assume RDRAND will only attack the Linux RNG."

Assuming the chip can detect that the Linux RNG is in play is already way outside the realm of simplicity, and frankly way beyond what a company like Intel is likely to be able to keep secret, given the number of engineers that would have to be aware of this complex functionality.

This whole conspiracy theory hinges on some wild claims that I haven't seen substantiated in the least.


As far as I have followed the discussion, there are no hard facts or claims at all, just general suspicion of a completely closed system from a US company.

I, too, don't believe that Intel adaptively generates its RNG output to spoil the Linux RNG. But be reminded that what would have been wild conspiracy theories just half a year ago is now common belief (the NSA deliberately introducing vulnerabilities into software and even into cryptographic standards, and routinely bypassing TLS).

Given all we know (and don't know) I think it would be prudent to mix Intel's RNG with the other randomness sources using a cryptographically strong primitive and not just XOR. Personally I'm enthusiastic about Keccak as a reseedable RNG, but these modes will probably be standardized no earlier than fall 2014.

> Assuming the chip is detecting that the Linux RNG is in play is already way out of the realm of simplicity

I meant the simplicity of my argument. As an answer to that paragraph, I argued that it is indeed possible to generate malicious output that appears completely random:

> If this were true, and you set up a repeatable test situation in which you force the other parts of the RNG to generate the same numbers prior to RDRAND, and then did the RDRAND and captured the results, then I don't see how one could argue RDRAND is compromised in this way if the results coming out of it over time appear to be statistically random.


It could be summed up like this: either you have a test that shows that rdrand has problematic behavior, or you shut up.

Backdooring rdrand is of little to no interest given how PRNGs are built.


I would agree with you if the Linux RNG were built according to common cryptographic wisdom. But as far as I can tell from reading the article, the random sources are compressed into a common randomness pool using a CRC. If RDRAND were added to this pool it might reduce the randomness of the pool. If these sources were added via a cryptographically secure mixing primitive this would not be possible. A faster alternative would be to maintain one pool for every randomness source, added to via CRC, and then mix all the pools together with a cryptographically secure primitive when randomness is requested, as in the sketch below.
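
A minimal sketch of that proposal (Python; the pool names and sizes are made up, and SHA-256 merely stands in for whatever secure primitive one would actually pick): keep one pool per source and only combine them through a cryptographic hash at extraction time, so a malicious source would have to invert the hash to cancel the others.

    import hashlib
    import os

    # Hypothetical per-source pools (stand-ins for real collectors).
    pools = {
        "interrupts": os.urandom(64),
        "disk":       os.urandom(64),
        "rdrand":     os.urandom(64),  # may be fully attacker-chosen
    }

    def extract(nbytes):
        # Combine all pools with a cryptographic hash at request time.
        h = hashlib.sha256()
        for name in sorted(pools):
            h.update(name.encode() + b"\x00" + pools[name])
        # Unlike XOR-last mixing, cancelling the other inputs here would
        # require inverting SHA-256.
        return h.digest()[:nbytes]

    key = extract(16)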


Good article if only for "bucket of digital slurry."


The way things are, people are going to post on HN if Linus farts, and others will upvote it.


Why is this _AGAIN_ here? This has been posted something like 50 times already within a couple of days. And what Linus says isn't actually even anything bad that should create this kind of stupid fuss.


If you consider the title of the article, "Rudest man in Linuxdom", you understand very quickly that the article's author is imposing their own morals on Linus, but that's OK because the author is a proven programming genius of greater skill than the person who invented Linux, git, etc.

...

Oh wait, he isn't. The author of the article is just another "tech blogger" that is interested in page views. So what's more interesting an article to write: one about RDRAND, or one about how mean, naughty and rude that ignorant "Linus Torvalds" person is? The second option will generate more page views so the choice is obvious.

Since the author of the article is quick to criticize Linus and his way of expressing himself, even going so far as to, in a bullet list, sentence Linus to community service for the crime of not being as kind, understanding and tolerant a person as the author, I'm going to do the same.

Hey, article author! If I were king, here's what I would want you to do:

* Write an operating system that is used by millions of people and powers a large part of the Internet (together with BSD and friends).

* Write a distributed VCS that is also used by millions of people.

* Stop writing bullet lists about people.

* Stop trying to attract hits to your tech blog and create something of use to the human species. How about rewriting the graphics support in the kernel or something?

* Criticize what people say, not how they say it.

I'm tempted to go on, but I know that the author has no interest in either learning or creating anything, only in criticizing other people, so anything I write is for the enjoyment of HN (in addition to mine).


I assume your post was intended to be ironic?

> "Write an operating system that is used by millions of people and powers a large part of the Internet (together with BSD and friends)."

Linus wrote a kernel, not the full OS userland. What's more, most of the code in Linux (the kernel) isn't actually written by him - these days his job is more that of a maintainer. Not that I'm trying to undermine his achievements - just pointing out that you're making the same exaggerated arguments as the blogger you're condemning.

> " Stop writing bullet lists about people."

You mean like the bullet list you've just written?

> "Stop trying to attract hits to your tech blog and create something of use to the human species."

Again, like you've just done. Let me explain with a little code:

    ($potkettleblack = $_) =~ s/tech blog/forums/;
> "Criticize what people say, not how they say it."

Which would be fine if the whole premise of your argument weren't complaining not about what the blog said but about how the blogger said it.

Don't get me wrong, I have a lot of respect for Torvalds, but your comments were - at best - hypocritical. Though I'd say they were much worse than that, as at least the blog in question added some content to complement its clickbait headline. You've not even touched on the real subject matter of this topic.


Regardless of how the author has presented his arguments, why don't _we_ focus on the actual arguments? This is getting rather too meta.

Also, what's the use of an open-source OS used by millions if it is shrouded in esoteric arguments? I would certainly like Linux developers to express themselves better, even if _I_ can't build such a system myself.


> The author of the article is just another "tech blogger"

So ridiculing him instead of attacking his arguments is A-OK, but he has to keep silent because Torvalds is a great coder?


He has the same right as you, as me, and as Linus Torvalds to criticize something. There are better programmers than Torvalds out there that are not as successful as he is, and the other way around.


Excellent trolling. Now let's all ignore this post.


The author makes an argument that is somehow both ad hominem and slippery slope. By pointing out something that everyone knows (Torvalds is rude, which is not necessarily a bad thing, depending on who you ask) he adds weight to his second argument about /dev/random being insecure (which is complete bollocks). I christen this logical fallacy the "Ducklinism".

And I have to agree with you about his obvious lack of expertise in the department. Where he fails to make a compelling article is not in his lack of expertise, but rather in his failure to interview people who do have that expertise.


> * Stop writing bullet lists about people.



