How secure is Linux's random number generator? (randombit.net)
200 points by hpaavola on July 13, 2013 | 92 comments

The annoying thing is that the Linux RNG is really limiting without something like RdRand.

It used to be that most drivers contributed to the randomness pool, so it seldom ran short. It used to be that you could configure the size of the pool, so if you were running short you could make it larger. But then it was discovered that the pool resizing had a locally exploitable vulnerability so it was removed, leaving it always at the smallest value; and it was realized that many driver sources weren't very random and/or were externally controllable so most were removed.

The end effect is that much server hardware only gathers about 50-100 bits of entropy per second into a pool of only 4096 bits, and /dev/random constantly runs dry, leading to weird performance problems (like ssh connections taking a long time to establish). This creates a desperate urge to replace /dev/random outright with something like RdRand, when RdRand could instead be just another untrusted contributor if the rest of the system around /dev/random were sane.
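For what it's worth, you can watch this starvation happen via procfs; a small illustrative sketch (the path is the standard Linux one; the parameter exists only so it can be exercised against an ordinary file):

```python
def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate in bits
    (out of the default 4096-bit pool)."""
    with open(path) as f:
        return int(f.read().strip())

# e.g. watch it drain while something reads /dev/random:
#   while True: print(entropy_avail()); time.sleep(1)
```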

Apparently, for some "security experts" it's damned if you do, damned if you don't.

If you don't use RdRand, then you have few sources of "true" randomness; hence your RNG is predictable and manipulable, you're an idiot, and a 5-year-old can break your crypto.

If you do use RdRand, then "blah blah blah this is opaque"; hence your RNG is predictable and manipulable, you're an idiot, and a 5-year-old can break your crypto.

Perfect solutions exist only in labs and my impression is that most of these "experts" make things less secure.

Meh. Simply making the default pool larger would go a long way towards moving systems out of a desperate situation. With that done, there would be a lot less reason to short-circuit it and go RdRand-only.

No one is concerned about RdRand as a contributing source; with other genuine sources of randomness in the pool, RdRand isn't likely to be a back door once mixed in.

Making the pool larger isn't sufficient for embedded systems that don't have a lot of sources of entropy in the first place. Especially since very often the most critical secrets (such as the RSA keys for the certificates used by network printers, for example) are generated when the embedded system is first installed, where even if you have a larger pool, there isn't any opportunity to fill with the extremely limited amount of entropy available to said device.

Yes, this is very bad in embedded systems.

As in your example, the only source of entropy a network printer has is network data, which is easy to manipulate and may simply be absent. So there's no good way to generate keys, for example.

In some cases hardware sources are a must. Yes, in the end you'll need to trust them.

What happens if I tune my DVB card in, sample /dev/video0 or whatever it is now and use that to add entropy? Is that even possible?

Sure, you can mix more data into the entropy pool just by writing to /dev/urandom. This won't update the entropy estimate, though. To do that, use the RNDADDENTROPY ioctl to add data along with the number of bits of entropy you want credited for it (this requires root).
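Sketched in Python, the two paths look roughly like this (the ioctl constant and struct layout come from <linux/random.h>; the actual calls touch /dev and need privileges, so they're shown as comments):

```python
import struct

# _IOW('R', 0x03, int[2]) from <linux/random.h>
RNDADDENTROPY = 0x40085203

def rand_pool_info(entropy_bits, data):
    """Build the struct rand_pool_info payload:
    int entropy_count; int buf_size; __u32 buf[];"""
    return struct.pack("ii", entropy_bits, len(data)) + data

# Stir the pool without entropy credit (no root needed):
#   with open("/dev/urandom", "wb") as f:
#       f.write(sampled_video_frames)
#
# Stir the pool *and* credit the entropy estimate (root only):
#   import fcntl
#   with open("/dev/random", "wb") as f:
#       fcntl.ioctl(f, RNDADDENTROPY, rand_pool_info(64, sampled_video_frames))
```

How many bits you claim in entropy_count is on you; for attacker-observable sources like tuner noise, credit far fewer bits than you feed in.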

Just because something is closed source doesn't mean it's insecure. RdRand meets various standards for RNGs and the dieharder tests don't show anything of concern. While you can't be 100 percent sure of the reliability of RdRand because you can't audit it, I feel safe trusting it for all but the most critical of applications. Here's a blog post describing testing RdRand with dieharder: http://smackerelofopinion.blogspot.com/2012/10/intel-rdrand-...

You are right that closed source doesn't mean it's insecure; on the other hand, open source could prove that it is indeed secure. With new scandals about hidden backdoors in security software coming up every week these days, I trust open source more than ever before.

Ironically, it's particularly vis-à-vis cryptographic random number generation that we can most easily show open source cryptography failing its users: Debian broke the OpenSSL CSPRNG so badly that attackers could remotely brute-force SSH keys.

Whereas with closed source you would almost never know. Crypto is very hard to do properly, but at least with open source you have the possibility of independent third party analysis.

Wasn't the Debian vulnerability discovered because someone noticed that two different servers had the same key? That would have gone down exactly the same with closed source.

Not defending the Debian change, but OpenSSL's code structure / readability is far from great; the only packages I would rank below OpenSSL are libxml2, glib and glibc.

Weird. I find glib quite readable: https://git.gnome.org/browse/glib/tree/

Yes. And didn't that bug stay in Debian for two years?

Open source code still needs people looking at it.

Maybe no one would have noticed if it was closed source. I bet if Microsoft released everything as open source, there would be billions of bugs discovered.

The Debian RNG bug was noticed by folks who found identical certificates in the wild, not by code inspection. Similar RNG weaknesses are commonly found in closed systems as well, so it doesn't seem to be a particularly open/closed source thing.

It's true that merely the ability for widespread code inspection doesn't mean all the code really gets widespread inspection [although I'm surprised by the number of messages I see on mailing lists like Q: "Hi, I'm a Chinese grad student and have been reading the gcc source... I don't understand how XXX can work, given that YYY... can you explain? thanks" A: "oh, hmm, actually, that seems to be a bug..."]

Still, I think a common pattern is (1) notice funny symptom, (2) go look at code, puzzle through it for a while, and then "oh!" You're now in a much stronger position to fix the problem or petition for a fix.

With closed-source code, step (2) is a lot harder unless you're in a privileged position...

Open source would not prove it is secure. At best, you could look for obvious attacks. Cryptography is hard.

No, at best you could have a large, diverse group of experts look for potential flaws and fix them. But, I agree being open source is not enough, you need to be open source, and the one implementation that everyone contributes to.

AES of a counter and a key set from a table based on your CPU serial number also "meets various standards for RNGs and the dieharder tests don't show anything of concern".

And, of course, if you make it the kernel /dev/random you're making it the source of randomness recommended for long term keys and other important things... while you don't know what the users will use it for, you can safely assume it will include some of "the most critical of applications".

As you said, "you can't be 100 percent sure of the reliability of RdRand because you can't audit it". So, if you are willing to disregard the importance of auditing a (critical!) piece of code, and to take and use a nice black box given to you by a big company, then it raises the question: why on earth are you using open source? It just makes no sense. To me, at least.

But how can you be sure it's not just a very very good self-synchronising PRNG?

f(<Some number of previous outputs>, <Secret key known to Intel and the NSA>, <Your cpu identifier>) = <next output>
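As an illustration of why this is hard to rule out, such a generator is trivial to build from any keyed PRF. A hypothetical sketch (all names invented; SHA-256 standing in here for the AES whitener):

```python
import hashlib

def evil_rdrand(nsa_key, cpu_serial):
    """Hypothetical backdoored generator: statistically
    indistinguishable from random output, yet fully deterministic
    given the secret key and the per-chip serial."""
    counter = 0
    while True:
        yield hashlib.sha256(
            nsa_key + cpu_serial + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
```

Anyone holding nsa_key and cpu_serial can regenerate the exact stream; to everyone else it passes every statistical test.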

Calculate how many bits Intel could fit on a chip. Apply statistical tests until you're sure the output contains more than that much entropy?

"Just because something is closed source doesn't mean it's insecure"

Yes it does. Closed source, to the extent it impairs audits, does mean something is insecure.

It does not prove that the software is backdoored, otherwise compromised, or defective in any way - if that is what you meant, you're correct. But security means not only absence of these conditions, but also that you can verify that these conditions are absent.

Security is relative. "Secure" is a short form of "high enough confidence". There can be a rational basis for high or low confidence, based on various factors, including testing and likely motives of the parties. Closed source itself is a bad factor. Collaboration with the USG is a very bad factor.

Meeting various standards for RNGs doesn't help, though, if the algorithm is properly backdoored, as Dual_EC_DRBG, for instance, could be.

Also, correct me if I am wrong, but wasn't this about using RdRand as an entropy source for the Linux /dev/random, which AFAIK is not injection-proof?

And I would consider /dev/random among the most critical of applications.

I think that transparency and opacity provide different kinds of security, but both are related to security. It would be wrong to say they are independent of it simply because an opaque system can still be extremely secure.

For those that aren't aware, the security of a random number generator is very important: http://en.wikipedia.org/wiki/Random_number_generator_attack

That was a wonderful read. Thank you.

Am I the only guy who can't figure out how to navigate mailing list archives?

These things are internet hell.

I have a conjecture that I believe explains the high concentration of suck in almost all mailing list archive software.

Let R = { people who read mailing list archives }

Let D = { people who design and implement mailing list archives }

My conjecture: R ∩ D = ∅.

The 'read' view linked to here is pretty cryptic and weird, I agree entirely.

Try the "Messages sorted by: ... [ thread ] ..." link, it gives you a hierarchical view that's pretty understandable / navigable IMO.

I agree. Try searching for the title on gmane.org

Here's the list on gmane: http://news.gmane.org/gmane.comp.security.cryptography.rando...

Not sure how to link a particular article in that view. The 'direct link' sends you to an article-only page. But the message by the OP appears as the third top-level thread in that view.

The link is http://thread.gmane.org/gmane.comp.security.cryptography.ran...

You get to that by clicking on the subject in the bottom frame.

And here is the mailing list thread that the author refers to:


There was a lot more follow-up later, see e.g. https://lkml.org/lkml/2012/7/5/422

The important commit here is:



Change get_random_bytes() to not use the HW RNG, even if it is available.

The reason for this is that the hw random number generator is fast (if it is present), but it requires that we trust the hardware manufacturer to have not put in a back door. (For example, an increasing counter encrypted by an AES key known to the NSA.)

It's unlikely that Intel (for example) was paid off by the US Government to do this, but it's impossible for them to prove otherwise --- especially since Bull Mountain is documented to use AES as a whitener. Hence, the output of an evil, trojan-horse version of RDRAND is statistically indistinguishable from an RDRAND implemented to the specifications claimed by Intel. Short of using a tunneling electron microscope to reverse engineer an Ivy Bridge chip and disassembling and analyzing the CPU microcode, there's no way for us to tell for sure.


my understanding (and i'm not an expert - just trying to help with the discussion) is that getting sufficient entropy is quite hard. i vaguely remember at least one issue, perhaps on startup, where there was insufficient entropy to do something, and so people switched to some other less random source and screwed everything.

so if this reports unlimited (or at least, larger than anything else) entropy then there are likely situations where it's the only source available (people don't typically have lava lamps wired up). and then an attack seems possible.

[edit: while startup is the case of the bug i remember, you might "consume" entropy faster than it is generated at other times too (i imagine a server running https has a fairly high demand for entropy, for example).]


i'm sorry, this is just incoherent to me. i have no idea what you're trying to say, or how it fits with my comment.

i was trying to explain why an untrusted hardware rng cannot be improved in some cases (because of limited entropy from elsewhere). and i don't see how what you are saying addresses that.

That's why Linux distros save the entropy pool across reboots: /var/lib/urandom/random-seed on Debian.

A few points:

(1) Chaining RNGs is fine as long as they are not correlated somehow. If there is any correlation, the output may be weaker. (Consider chaining two copies of the exact same generator as a corner case.)

(2) In this case we don't chain generators; you would lose the speed of the integrated one otherwise.

even after reading all that it's not clear to me whether the output from rdrand (the hardware instruction from intel that's opaque, if i'm understanding right) is mixed with other sources of entropy or not.

at https://lkml.org/lkml/2011/7/30/116 linus says We still do our own hashing on top of whatever entropy we get out of rdrand, and we would still have all our other stuff. but then goes on to say I'd be even more willing to just take something that just introduces a per-arch interface to get a "unsigned long [ptr]" that is random, and returning the number of bits of expected entropy in that thing. And for x86 CPU's with the RDRAND capability bit, I'd give Intel the benefit of the doubt and just make it do a single "rdrand" and return the full 64 bit [...] which sounds like it would not be mixed.

so what was the final outcome?

[also, perhaps worth mentioning explicitly - the argument that you shouldn't care too much about this is that if you don't trust intel then you're fucked anyway. which doesn't fill me with warmth and joy, but what can you do?]

And even Linus talks about the NSA in that same thread a little lower: https://lkml.org/lkml/2011/7/30/116

And the next email where Ted says that the NSA are one of the good guys... Yeah, not so much.

You just took Ts'o's quote totally out of context in order to make a fatuous point. In reality, Torvalds and Ts'o have a better understanding of software security topics than virtually anyone who comments on HN.

For anyone wondering, he literally does not say that.

He says if you're in the government they're one of the good guys, if you're anyone else you want to mix the results of their RNG with some other source.

Which is apparently exactly what this patch does.

Yeah, how dare the NSA develop SELinux and SHA-1.

I suppose I would just be feeding the conspiracy theories if I mentioned that NSA also pushed security work for X.org forward?

The best approach to have is IMHO here:


Gutterman, Pinkas, & Reinman in March 2006 published a detailed cryptographic analysis of the Linux random number generator[5] in which they describe several weaknesses. Perhaps the most severe issue they report is with embedded or Live CD systems such as routers and diskless clients, for which the bootup state is predictable and the available supply of entropy from the environment may be limited. For a system with non-volatile memory, they recommend saving some state from the RNG at shutdown so that it can be included in the RNG state on the next reboot. In the case of a router for which network traffic represents the primary available source of entropy, they note that saving state across reboots "would require potential attackers to either eavesdrop on all network traffic" from when the router is first put into service, or obtain direct access to the router's internal state. This issue, they note, is particularly critical in the case of a wireless router whose network traffic can be captured from a distance, and which may be using the RNG to generate keys for data encryption.

It shouldn't be a religious problem but an engineering one. If you manage to keep some state between reboots and use it after the next reboot, you're making it hard enough for anybody who doesn't have physical access to that state. Then you can also use RdRand, mixing it into the output of a stream based on your state, along with other sources of entropy if you have them. If RdRand turns out to be suspicious, you're at least much better off than using only hard-coded states.

Does anybody know if some such state is used now?

Yes - for example this is done by /etc/init.d/urandom on Debian and Ubuntu systems.

My question was about /dev/random. The main problem RdRand solves is quantity: obtaining a lot of random bits per second. Even if its output were produced in a way whose weaknesses somebody knows, by mixing it with something cryptographically strong where we control the seed we'd preserve quite a high throughput. I know that there is /dev/urandom, which can often be good enough, but I also know that too many applications in fact prefer to use /dev/random, so making /dev/random robust makes sense.

I see Ted Ts'o commented too, and as I understand it, having RdRand is still much better than having platforms without it. There's a lot more to care about than whether RdRand is "perfect", and once you have something like RdRand you can use it safely enough, compared to not having anything.

It applies to /dev/random too - the same write() implementation is used kernel-side for both devices so it doesn't matter which one you write to.

The seed that is saved at shutdown and reloaded at startup will alter the internal state of the /dev/random pool, but it won't add to the entropy estimate (which makes sense). This means that the output will be more robust, but it could still block waiting for "real" entropy.
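The save/restore dance itself is tiny; here's a sketch of what such an init script does (paths are Debian's conventional ones; the file arguments exist only so the logic can be exercised without touching /dev):

```python
import os

SEED_FILE = "/var/lib/urandom/random-seed"   # Debian's location
POOL_BYTES = 512                             # 4096-bit pool

def save_seed(seed_file=SEED_FILE, rng="/dev/urandom"):
    """At shutdown: stash pool output for the next boot."""
    with open(rng, "rb") as src:
        data = src.read(POOL_BYTES)
    with open(seed_file, "wb") as dst:
        dst.write(data)
    os.chmod(seed_file, 0o600)  # the seed must stay secret

def restore_seed(seed_file=SEED_FILE, rng="/dev/urandom"):
    """At boot: mix the saved seed back in. This stirs the pool
    state but does not credit the kernel's entropy estimate, so
    /dev/random can still block afterwards."""
    if os.path.exists(seed_file):
        with open(seed_file, "rb") as src, open(rng, "wb") as dst:
            dst.write(src.read(POOL_BYTES))
```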

Prior to Edward Snowden's whistleblowing, I think you could have perceived the maintainer as paranoid for leaving the project (see linked thread); now, however, I think you can't discount what cooperation, if any, technology companies have been providing to the NSA.

I am not sure I agree that before Snowden this could have been perceived as paranoid. As part of the discussion on the crypto list Ben Laurie brings up an important point:

"But what's the argument for _not_ mixing their probably-not-backdoored RNG with other entropy?"[1]

Does your answer to this really change that much "pre-Snowden"?

[1] http://lists.randombit.net/pipermail/cryptography/2013-July/...

So this is logic that more or less rules out all hardware encryption, including HSMs, right?

No. In fact it's a matter of trust.

You can trust Skype that calls are encrypted and cannot be eavesdropped, you can trust Verizon that your cellphone metadata are not passed to government automatically, and you can trust Intel that their rnd is not backdoored.

Or you don't.

Help me understand how someone who believes rdrand might be backdoored could trust any HSM?

You can't, if you're that serious/paranoid about it.

It's possible that the HSM maker wasn't approached by the NSA and is secure, but there are very few of them in the US so chances of the NSA having missed one is very low. Plus, without a STM to inspect the silicon and reverse-engineer it, how would you know?

So what if you buy one made outside the US? Say, China. Well, there's the obvious possibility that the Chinese authorities have backdoored the silicon. But my guess is that the Chinese maker just cloned one of the US vendors, including the portions inserted by the NSA...

I'd -consider- "trusting" a random number generator that collects entropy from a Chinese, a Russian, and a US based HSM manufacturer…

An "array of mutually untrustworthy opponents", if you like…

This is my favourite conspiracy theory: that the CPUs are backdoored. Just assign a bunch of registers some special values and execute a specific instruction, and the CPU will drop all memory protection. Take something like Google's NaCl or a JavaScript JIT, where you have enough control over the registers, and you have a permanent browser exploit.

The best backdoors are indistinguishable from dumb bugs when they're discovered.

They'd look something like Debian's OpenSSL bug. But I believe that was not an intentional backdoor.

There are several different ways in which randomness is used in the kernel. One general class is things like randomizing the sequence numbers and port numbers of new network connections. If you can predict the result of this randomness, it becomes easier to carry out attacks such as hijacking a TCP connection. (Note that if an active attacker controls the path between the source and the destination, they'll be able to do this regardless of the strength of the RNG; weak randomness just makes it easier for attackers who don't have 100% control of the routing.)

Another class of randomness is that which is used to randomize the layout of shared libraries, stacks, etc. --- address space layout randomization (ASLR). If someone is able to guess the randomness used by ASLR, then they will be able to more easily exploit stack overrun attacks, since they won't need to guess where the stack is, or where various bits of executable segments might end up in the address space's layout.

Another case of randomness is to create crypto keys; either long-term keys such as RSA/DSA keys, or symmetric session keys. If someone screws this up, that's when the "bad guy" (in this case, people are worried about the NSA being the bad guy) can get access to encrypted information.

It is only the first two use cases where we use RDRAND without doing any further post-processing. These are cases where the failure of the RNG is not catastrophic, and/or performance is extremely critical.

We do not use RDRAND without first mixing it with other bits of randomness gathered in the system for anything that is emitted via /dev/random or /dev/urandom, because we know that this is used for session keys and for long-term RSA/DSA keys.
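The intuition behind mixing is that hashing independent sources together can only help: an attacker must know all inputs to predict the output. A minimal sketch (not the kernel's actual mixing code, which uses a twisted-LFSR input mix plus SHA-1 extraction):

```python
import hashlib

def mix_into_pool(pool_state, *sources):
    """Combine several entropy sources into a new pool state.
    An attacker who controls some of the sources (say, a rigged
    RDRAND) still can't predict the output unless they know
    every input, including the prior pool state."""
    h = hashlib.sha256(pool_state)
    for s in sources:
        h.update(s)
    return h.digest()
```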

The bigger problem, and it's one that we worry a huge amount about, is the embedded ARM use cases which do not have RDRAND, and for which there is precious little randomness available when the system is first initialized --- and oh, did I mention that this is when long-term secrets such as SSH and X.509 keys tend to be generated in things like printers and embedded/mobile devices, when they are first unwrapped and plugged in, when the amount of entropy gathered by the entropy pool is usually close to zero? What we desperately need to do is to require that all such devices have a hardware random number generator --- but the problem is that there are product managers who are trying to shave fractions of a penny off of the BOM cost, and those folks are clueless about the difference between cost and value as far as high-quality random number generators are concerned.

What if the RNG has been compromised by the NSA? Well, that's where you need to mix other sources of randomness into the entropy pool. The password used by the user when he or she first logs into an Android device, for example. Screen digitizer input from the user while they are first going through the setup process. In the case of a consumer-grade wireless router, it could sniff the network for a while and use packet inter-arrival times, and mix that into the entropy pool. Yes, someone who is on the home network at that time will know those numbers, but hopefully someone who is in a position to spy on those numbers isn't also going to have access to the super-secret NSA key used to gimmick the RDRAND instruction (assuming it is gimmicked, which is effectively impossible for us to prove or disprove). But then again, your wireless router isn't going to have access to unencrypted plaintext, which is critical --- if you're sending anything out your wireless network without encrypting it first, I would hope that you would consider it completely bare and exposed!
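The packet-timing idea sketches out simply: hash the inter-arrival deltas, not the packets themselves. Illustrative only; a real implementation has to be very conservative about how much entropy it credits for such partially observable events:

```python
import hashlib

def timing_entropy(timestamps_ns, pool=b""):
    """Fold inter-arrival times of external events (packets,
    keystrokes, digitizer touches) into a pool digest."""
    h = hashlib.sha256(pool)
    for prev, cur in zip(timestamps_ns, timestamps_ns[1:]):
        h.update((cur - prev).to_bytes(8, "big", signed=True))
    return h.digest()
```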

If you are super paranoid, you'll need to find a hardware random generator which you've built yourself --- and hopefully you are competent enough to actually build a real HWRNG, and not something which is sampling 60 Hz hum (or 50 Hz hum if you are in Europe :-), and mix that into the entropy pool as well. In that case, even if the Intel RDRAND is compromised six ways from Sunday, the NSA won't have access to the output from the HWRNG --- and if it turns out you were incompetent and your HWRNG is bogus, at least RDRAND is also getting mixed into the entropy pool.

And if I were in China, I'd use a hardware chip built in China for the RNG, and combine that with an Intel chip. That way even if the HWRNG chip is compromised by the MSS, and even if RDRAND is compromised by the NSA, the combination is hopefully stronger since presumably (hopefully!) it's unlikely that the MSS and the NSA are collaborating with each other at that deep a level. Ultimately, of course, if you don't trust Intel, you don't trust the silicon fab, etc., then you'll have to build your own computer from scratch, write your own compiler from scratch, etc.

(MIT CS undergrads used to have all of that knowledge, starting with building a computer out of TTL chips and how to build a Scheme interpreter from machine code, etc. But not any more, alas. Now they learn Python and it's assumed that it's impossible to understand the entire software stack, let alone the entire hardware stack, so you don't even try to teach it. But that's another rant....)

If you are super paranoid, you'll need to find a hardware random generator which you've built yourself --- and hopefully you are competent enough to actually build a real HWRNG, and not something which is sampling 60 Hz hum (or 50 Hz hum if you are in Europe :-), and mix that into the entropy pool as well.

What do you propose as a low cost solution? I've seen some interesting suggestions, such as having a small fish bowl or tube or tank, having air pumped into the bottom, and sampling the patterns of bubbles with some CV solution. Sounds geeky, would make for a nice decoration on one's table, but building it seems like quite a job.

Sit a Geiger counter next to a jar of Brazil nuts, and time events? You would need a way of testing the Geiger counter, but that should be possible. Alternatively, buy one at surplus from a university laboratory. If they're any good, they would have already noticed it misbehaving.

I think a zener-diode-based solution will suffice in a pinch: http://en.wikipedia.org/wiki/Noise_generator

Hmm. Haven't thought of that. Comparatively simple, yet also compact at the same time. Thanks, nice!

Solid state FTW! :)

It's not really low-cost, but:


Too mechanical. I'm afraid I like my bubble idea more. Unless I get my hands on some suitable radioisotope, that is.

That's not hard. Just get an ionizing smoke detector:


One other thing about the quote from Matt Mackall referenced in the mail archive above. I was the original author of the Linux /dev/random code, and many years ago, I let him take over when I didn't have the time to keep up with the maintenance duties.

This is the first that I heard that he had resigned over the RDRAND getting used directly, and I track LKML discussions, especially on topics such as file systems and random number generators, pretty closely. What I suspect happened is someone pushed this patch to Linus without going through him, and he got annoyed and just stopped work.

What happened about 18 months ago was that some researchers approached us about the deficiencies they found, documented at http://factorable.net. It was then that I discovered that Matt Mackall had resigned (although the MAINTAINERS file still listed him as the maintainer --- normally when people resign in a huff they send a patch to remove themselves from the MAINTAINERS file). So at that point, I took over the maintenance duties.

It was at that point that I changed things so that RDRAND was only used directly for non-critical items, and started addressing how to gather as much entropy as possible for those platforms which do not have RDRAND --- which in fact was the bigger deal, and right now, probably the far greater set of problems.

That leads me to another important point --- don't get too focused on RDRAND. Yes, it's something we need to be concerned about, but there are many different ways people can screw up security, and not all of them require active collusion with the NSA.

For example, there are several years worth of HP printers where it's possible to push an unsigned firmware load to said printers over the network, at which point said printer could be sending copies of everything sent to it back to the NSA or the FBI. This would require no collusion between HP and the US government --- just the incompetence of HP firmware engineers. Could the NSA or the FBI be exploiting such a hole? I'd actually argue that given the FBI's mission, it would be malpractice for them not to develop such a trojan'ed firmware load. Hopefully they would only be using it after getting a search warrant, but if such a hole exists, and their mission is to get the bad guys, subject to US laws and the US constitution, I have no problem with them trying to create such exploits. Of course, I do blame HP for making it be possible to create such an exploit in the first place, and so we need to hold manufacturers accountable.

So we need to be vigilant, and worry about auditing all of the potential attack surfaces, and not just worry about one particular place where Intel might have colluded with the NSA. There are lots of other places where collusion is not required; just simple carelessness and incompetence by software engineers....

What's the suggested attack here?

That Intel is cooperating with the TLAs and providing a weak on-chip random number generator? Or a random number generator that can be made to be weak? Or what?

And how credible is the risk when that information is used to seed a pool of entropy, rather than being used raw?

A longer excerpt from the same email list:


https://lkml.org/lkml/2011/7/31/139 Since there was a minor amount of confusion I want to clarify: RDRAND architecturally has weaker security guarantees than the documented interface for /dev/random, so we can't just replace all users of extract_entropy() with RDRAND.

I still don't get it.

What don't you get? RDRAND is an interface to a non-blocking PRNG backed by a HWRNG. You can't directly get the output of the HWRNG. Intel uses the PRNG to condition the output of the HWRNG, but it will still give you numbers if the HWRNG is having trouble (HWRNG errors can be detected, but the RDRAND instruction itself doesn't trap on HWRNG failures). If you trust Intel's PRNG sufficiently, then you can use RDRAND directly for /dev/urandom, but it takes a lot more trust to use it for /dev/random.

It's documented to have the potential to not return random data, so in that sense it's blocking like Linux' /dev/random. Sample code shows some type of polling loop IIRC.

The RDRAND circuits perform continual health checks and will signal a error instead of outputting bad data. See section 3.3 of http://software.intel.com/en-us/articles/intel-digital-rando...
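Concretely, Intel's documented usage pattern is a bounded retry loop that checks the carry flag and treats persistent failure as a hardware error. Modeled here in Python with a stand-in for the _rdrand64_step intrinsic (the real instruction needs C or assembly):

```python
import os

def rdrand_step():
    """Stand-in for _rdrand64_step(): the real instruction reports
    success via CF instead of trapping on HWRNG failure. Here we
    simulate success with os.urandom."""
    return True, int.from_bytes(os.urandom(8), "little")

def rdrand64(max_retries=10):
    """Intel's recommended usage: retry a bounded number of times,
    then treat persistent failure as a hardware error."""
    for _ in range(max_retries):
        ok, value = rdrand_step()
        if ok:
            return value
    raise RuntimeError("RDRAND persistently failing health checks")
```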

In NIST parlance (SP800-90{A,B,C}), RDRAND is a DRBG, while /dev/random should be a NRBG.

For what it's worth, Intel announced the RDSEED instruction a while back (though it's not yet on current processors); it is meant to be an NRBG as well.

RDRAND does have non-determinism. The problem is that you can't directly measure or monitor how much non-determinism. The values you get from RDRAND come from a DRBG that is periodically and automatically re-seeded from a NRBG, but if you call RDRAND many times in rapid succession, then the DRBG may not have been re-seeded between each invocation. RDSEED differs only in that it guarantees that the DRBG is re-seeded between invocations. If you are using the instructions infrequently enough, then the two are functionally identical, but "infrequently enough" has a value known only to Intel.

This comment in linux/arch/x86/kernel/cpu/rdrand.c claims:

     * Force a reseed cycle; we are architecturally guaranteed a reseed
     * after no more than 512 128-bit chunks of random data.  This also
     * acts as a test of the CPU capability.

FTR, that limit is also mentioned in the Intel manual [1, 3.2.3], and in the Cryptography Research report [2, 2.4.2].

[1] http://software.intel.com/en-us/articles/intel-digital-rando...

[2] http://www.cryptography.com/public/pdf/Intel_TRNG_Report_201...

So... taking this line of reasoning to its logical conclusion, if you don't trust RDRAND, then you should also not trust _any_ of the hardware the OS runs on. I imagine there would be much easier ways for Intel to implement backdoors to the system than through the non-deterministic random number generator.

What does it take to reverse engineer the silicon? I thought I'd seen an project for automating it, but I can't find it.

Even reversing the silicon won't likely help --- and, uhh, reversing a state-of-the-art CPU is not do-at-home stuff.

The reason it won't help is that the design is _explicitly_ microcoded. E.g. RDRAND triggers running loadable microcode which is supposed to read the real RNG and AES it. Maybe there is an unrelated "bug" that allows that microcode to be corrupted after some particular instruction sequence happens. All your investigation would turn up is everything looking normal.

It looks like the microcode is also encrypted. But perhaps that encryption could be reverse engineered from silicon? The Silicon Zoo tutorial noted that Pentium I-era chips were "easily viewable" [1], probably with optical microscopes. So perhaps some parts of some newer Intel processors can be done at home. So, the "plan of attack" (ha!):

* decap an Intel CPU and scan it

* decode the microcode encryption

* figure out how the hardware RNG works with the microcode (it's AES? ok.)

* and then analyzing the system of microcode and hardware for robustness and security.

Yeah, this is hand-wavey and probably incredibly implausible. But it seems like an interesting and challenging project or three.

[1] http://siliconzoo.org/tutorial.html

Ah. Some relevant information on reverse engineering silicon:

* Degate, a somewhat automated "aid in reverse engineering of digital logic in integrated circuits" - http://www.degate.org/

* Silicon Zoo offers a tutorial / background info on this - http://siliconzoo.org/tutorial.html

* A blog about IC reverse engineering - http://uvicrec.blogspot.com/ (from the owner of http://siliconpr0n.org/ , which is currently down)

Lame answer, I know, but: recompile the kernel (or patch out this crappy Intel HW support) then? And IIRC the Linux pseudo-random generator was quite good. The only problem is exhausting the entropy pool.

I believe you can add 'nordrand' to your boot flags to turn off the kernel's usage of it.

Further down the thread:

> Not to mention, Intel have been in bed with the NSA for the longest time.

> Secret areas on the chip, pop instructions, microcode and all that ...

What does "pop instructions" refer to here?

AIUI the story goes like this: for a long time NSA required all CPU vendors to provide a "popcount" instruction (to count the number of one bits in a register) for any hardware contract. NSA was buying a lot of Intel processors, but Intel CPUs lacked a documented popcount instruction until very recently. So, there was speculation that an undocumented opcode would function as a popcount instruction in older Intel CPUs (perhaps after modifying the CPU microcode), and from there people speculate that there may be other undocumented instructions and CPU features. Or so the story goes.

Wow. So not very?

ugh had no idea intel was built into the core

i distrust
