The Strange Story of Dual_EC_DRBG – suspected NSA backdoor (2007) (schneier.com)
233 points by sfscs on July 6, 2013 | 75 comments



There was significant discussion and concern in the academic community[1][2][3] during the early '90s in response to NIST's draft standard for digital signatures (DSS). The academic community was concerned that field parameters could have been carefully selected such that they contained hidden properties (weak primes, etc.). This is why "nothing up my sleeve numbers"[4] must be used in cryptography. The same issue affects the selection of prime field parameters for use in ECDSA/ECDH (TLS, S/MIME, etc.). It is worth noting that the NIST P-256 and NIST P-384 elliptic curves were selected from "verifiable random numbers" generated in accordance with ANSI X9.62. That standard is not freely available, so I am not sure which PRNG was used to generate the curve parameters or why the PRNG seed is considered a "nothing up my sleeve number".

[1] Daniel M Gordon. Designing and detecting trapdoors for discrete log cryptosystems (1993). http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.3...

[2] Yvo Desmedt, Peter Landrock, Arjen K. Lenstra, Kevin S. McCurley, Andrew M. Odlyzko, Rainer A. Rueppel, Miles E. Smid: The Eurocrypt '92 Controversial Issue: Trapdoor Primes and Moduli (Panel). 194-199. http://link.springer.com/content/pdf/10.1007%2F3-540-47555-9...

[3] Miles E. Smid, Dennis K. Branstad. Response to Comments on the NIST Proposed Digital Signature Standard. http://link.springer.com/content/pdf/10.1007%2F3-540-48071-4...

[4] https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number


This is one of my favorite examples showing the need for 'Nothing up my sleeve' numbers. https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number

I believe NIST learned their lesson, and it was applied in the design of SHA-3: http://crypto.stackexchange.com/questions/6444/why-are-the-c...


Nothing up my sleeve numbers are very hard to create for asymmetric crypto. Asymmetric crypto is filled with mathematical relationships, and it's almost impossible to prove that a certain set of parameters has no "hidden" properties.


What do you mean? The (asymmetric) RSA algorithm has no numbers up its sleeve. All the numbers involved are created by the participants themselves.

These "numbers up ones sleeve" would only be an issue if they are part of the algorithm itself, like the S-Boxes in the (symmetric) AES algorithm.


OK, so why in the world would you use asymmetric crypto in a DRBG? It makes no sense, unless of course you wanted a system with a trapdoor hoping no one would notice.


In this case, it could have been done. Dual_EC_DRBG hinges on the discrete log of two points being unknown; P and Q could have been both generated verifiably at random.


Perhaps something like "The smallest prime number not less than the base-2 expansion of pi*2^(n - 2)".
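
For concreteness, a minimal Python sketch of that kind of construction (assuming the third-party mpmath and sympy packages; the function name and choice of n are mine, not from any standard):

    # Hypothetical nothing-up-my-sleeve constant: the smallest prime not
    # less than the integer formed from pi's binary expansion, i.e.
    # floor(pi * 2^(n - 2)).
    from mpmath import mp, pi
    from sympy import nextprime

    def pi_prime(n):
        mp.prec = n + 16                    # enough working precision for n bits
        candidate = int(pi * 2 ** (n - 2))  # an n-bit integer taken from pi
        return nextprime(candidate - 1)     # smallest prime >= candidate

    print(pi_prime(64))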


I'm no mathematician, so this is probably a dumb question that will serve as a great example for why random programmers shouldn't write cryptosystems. However, here goes anyway.

Could a cryptographer please explain why it's not feasible to use multiple such PRNG algorithms in both series and parallel, perhaps even shuffling their order dynamically, at, err, random?

Surely most attacks on PRNGs are based upon the assumption that their state can be modelled to expose weaknesses. By viewing an entire, shifting 'topology' of PRNGs as a single polymorphic source of pseudorandom data, surely this class of attack becomes much more difficult, and we avoid placing absolute trust in any single PRNG author, input entropy source, or mathematical approach.

I suppose this has already been done to the extent reasonable and that tradeoffs versus performance and available entropy input rates are probably responsible. But I'd still be keen to hear someone thrash this out in understandable prose.


Aggregating multiple sources of randomness has the potential to conceal bugs. In the Debian/OpenSSL bug from 2008 [1], randomness was sourced from multiple locations including the current process id. The idea was that more randomness, even the minimal amount from the pid, could only increase the total entropy. However, when the primary source of randomness was eliminated through an overzealous patch, the PRNG still emitted plausible looking numbers due to the remaining sources of low quality entropy. Had the PRNG only used one high quality source of randomness, people would've noticed something strange about their generated private keys much sooner.

[1] http://research.swtch.com/openssl
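
A toy illustration of why that failure was so severe (hypothetical code, not the actual OpenSSL logic): if the process id is the only surviving input, the entire output space can be enumerated.

    import hashlib

    # Stand-in for the crippled generator: output depends only on the pid.
    def weak_key(pid, nbytes=16):
        return hashlib.sha256(pid.to_bytes(4, "big")).digest()[:nbytes]

    # Pids on the affected systems ranged over at most 32768 values, so
    # there are at most 32768 possible "random" keys in total.
    all_keys = {weak_key(pid) for pid in range(32768)}
    print(len(all_keys))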


That's an interesting take and does seem logical for human testing. However, it seems to me the real culprit was a lack of decent automated testing.

More to the point, as well as the potential to conceal bugs it also offers the potential to mitigate the impact of bugs. This was the whole point of my question, which despite some interesting responses, basically remains unanswered.


It is pretty hard to automatically test cryptography. (I do agree however, that C is a rather bad language for implementing cryptography.)


It seems like there is an established test suite in the PRNG space, by NIST. http://csrc.nist.gov/groups/ST/toolkit/rng/documentation_sof...


If you pull data from most OS-level RNGs (e.g. /dev/random on a Linux box), this is basically what you're getting. A bunch of hardware timing data, some of which may be at least partially predictable or observable, is passed through a hashing algorithm.

If the hashing algorithm is good, and an attacker is missing some reasonable chunk of the data that went into it, you're getting secure random numbers.
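
A rough sketch of that idea (timing jitter standing in for the kernel's device-timing data; illustrative only, not a real entropy collector):

    import hashlib, time

    # Mix unpredictable timing samples into a pool...
    pool = b""
    for _ in range(1024):
        pool += time.perf_counter_ns().to_bytes(8, "little")

    # ...and hash the pool, so the output reveals nothing useful as long
    # as the attacker is missing enough of the individual samples.
    print(hashlib.sha256(pool).digest().hex())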


Thanks, that's almost exactly what I was looking for. However, it doesn't answer the question which is "can't we avoid that one big assumption" (ie. if the hashing algorithm is good) by combining them?


Ultimately the answer is no, because your question is predicated on a nonexistent distinction.

A combination of algorithms is simply a new, more complex algorithm. It may be an obvious and safe algorithm that enhances security, like Truecrypt's option of an AES/Twofish/Serpent cascade, or it may be a totally worthless algorithm -- like if you created a new "hashing" algorithm that consisted of CRC32 followed by SHA256.

As that second example demonstrates, the naïve way of combining hashing algorithms is obviously not safe. If you simply layer them, any one of them could destroy entropy by producing biased/predictable results. Your new algorithm is crippled.

If you try to "mix" them in some other way, all you've done is create a new hashing algorithm and fed it the output of other hashing algorithms. It's perhaps less likely that any one of those sub-algorithms will seriously cripple your system, but it's much more likely that you've made a mistake, and your new algorithm itself produces poor results.

Somewhere along the way, you end up trusting something, even if it's your own algorithm. But since trusting yourself to get crypto right is a bad idea, the next best thing is to trust a well-vetted hashing algorithm, of which there are several that are likely "good enough", and aren't even NSA designed.

If you're super-paranoid about backdoors, RIPEMD-160 might be your best bet -- it's a product of European academia. It gets some use, but not as much cryptanalysis as the SHA family has.


> your question is predicated on a nonexistent distinction. A combination of algorithms is simply a new, more complex algorithm.

That's true if you combine multiple algorithms conventionally, for example by feeding inputs from one to the other or by passing the original entropy to the two algorithms in parallel.

However, if you run one algorithm independently of another - with completely different entropy sources - and then flip between supplying the output of each in rapid succession, then on the face of it, for roughly 50% overhead, you have probably mitigated most exploitable vulnerabilities in either of the two individual PRNGs for most applications. (Optionally add another for more security, and take it a step further...)

You are right that this would need to be done carefully. Perhaps the flip in which PRNG is used for the output stream occurs at hard-to-predict points in time, with the range of possible periods chosen to take into account other cryptographically relevant factors. From a complete non-cryptographer like me, these might include things like the entropy draw size of typical applications (optimizing for multiple PRNGs over a typical application draw), the length of any feedback register or calibration cycle of individual cryptographic client algorithms, etc. Generally this would mean 'pretty frequently, but not so frequently that it's predictably useful' - something of a paradox, true.

In short, I still think this approach may have some merit.


> That's true if you combine multiple algorithms conventionally

What I said is true no matter how you combine them, including in the manner you propose.

> flip between supplying the output of each in rapid succession

You've just exposed yourself to a possible related-key attack. Also, the effective entropy of your key could be halved simply by one of the PRNGs being compromised, so even absent a related-key attack, you'd better be using an encryption algorithm with a big key size.

(BTW, it's 100% overhead, not 50% overhead.)

(Oh, and if both algorithms happen to be compromised, the results are or might as well be 0-entropy. Yet again, somewhere along the way you end up trusting something.)

> perhaps the flip in which PRNG is used for the output stream occurs at hard-to-predict points in time

"Hard-to-predict" requires a secure random number generator. You've gone on to describe yet another ad-hoc random number generator that you would use in the construction of your random number generator.

Cryptography is hard.


> You've just exposed yourself to a possible related-key attack

It's a tradeoff for mitigating other, perhaps more realistic attack vectors.

> the effective entropy of your key could be halved simply by one of the PRNGs being compromised

Halved is a whole lot better than wholly negated, which was the point of the suggestion.

> Cryptography is hard.

Yeah.

Architecturally, though, it seems that spreading (dis)trust across two supposedly secure PRNG sources is better than putting all your eggs in one basket with a single one.


Basically, cryptography requires a PRNG whose output is provably indistinguishable from a true RNG. Your algorithm cannot be proven to have this property, and might even be susceptible to some attacks.

Basically, it's a band-aid, but cryptography is rigorous and averse to band-aids, requiring everything to be mathematically proven before use.

It's not enough for an algorithm to "seem reasonable"; it must be mathematically proven to have certain properties. It's really an extremely interesting subject to study - I'd give it a go. Join the Coursera class on cryptography.


> Basically, it's a band-aid, but cryptography is rigorous and averse to band-aids, requiring everything to be mathematically proven before use.

To be honest, I am quite partial to the description of modern economics given by George Soros, namely: the entire thing is based on a false analogy with Newtonian physics.

This is the proverbial 'false sense of security', lent credence by its ... ubiquity ... in the field. Perhaps sometimes mathematicians in general take a not wholly dissimilar bent in their reasoning; i.e. they think that because something is proven in a mathematical sense using their current knowledge of viable routes of deduction, it remains 'true' and unassailable.

The threat of someone else having or coming up with a faster physical or logical method of deduction does threaten these assumptions.

The broader goal, then, is not to trust a single PRNG algorithm, or, if possible, branch of mathematics (I am not skilled in that area but heard Pythagorean vs. Elliptic Curve mentioned). I am positing that this level of paranoia is, whilst computationally and resource-wise somewhat more costly, probably a good idea, and that the example of the present article is a good one that demonstrates the efficacy of this line of thinking.

The secondary route of side-channel attacks is also hampered by this strategy, since arbitrary entropy sources (potential side channels) may be (re)assigned to arbitrary PRNG algorithms running in parallel at any time.

In summary: hedge thy bets.


> they think that because something is proven in a mathematical sense using their current knowledge of viable routes of deduction, it remains 'true' and unassailable.

That's because it does.

> The threat of someone else having or coming up with a faster physical or logical method of deduction does threaten these assumptions.

No, it doesn't, because there are no assumptions.

> The broader goal, then, is not to trust a single PRNG algorithm, or, if possible, branch of mathematics.

No, if you don't trust something, use something you do trust. The change you're proposing has more potential to do harm than good, e.g. render nine perfectly good PRNGs broken just because you mixed them with one broken one. If you had only used one PRNG, your adversary had to break that PRNG, but, now that you used ten, he only has to break the weakest one, so you've made things much easier for him.

> whilst computationally and resource-wise somewhat more costly, probably a good idea.

Nope, it probably isn't.

> In summary: hedge thy bets.

In summary, the entire crypto community is smarter than you (or me), and any improvement you (or I) think might work has probably been proven hopelessly broken a thousand times over. There's a reason people use a single PRNG, and the reason is that, when your PRNG's output is provably indistinguishable from true randomness, messing with it will certainly not improve it, and will probably ruin it.


> There's a reason people use a single PRNG

I just want to know what it is.

(Edit as can't reply to child: Sure, but this gets back to the mathematical analysis does not equal real world proof, since we do not know all possible computational (or side channel!) vectors. Assertion otherwise is reminiscent of Socrates: οὖτος μὲν οἴεταί τι εἰδέναι οὐκ εἰδώς, ἐγὼ δέ, ὥσπερ οὖν οὐκ οἶδα, οὐδὲ οἴμαι This man, on one hand, believes that he knows something, while not knowing [anything]. On the other hand, I – equally ignorant – do not believe [that I know anything]. https://en.wikipedia.org/wiki/I_know_that_I_know_nothing ... OK that's weak, but honestly that's how I tend to think (ie. with great cynicism about any assertion) and it serves me alright in general security / non-cryptographic discourse. Basically this approach is treating the PRNG algorithm as a black box, refusing to trust it, and attempting to design a system incorporating the black box along with others claiming the same functionality, based upon that assumption. If this works for any other algorithm - and it seems to - then why not PRNGs? It's not like cryptography has a monopoly as an arbiter of risk. Cryptography is not a silver bullet.)


It's in the next sentence:

> and the reason is that, when your PRNG's output is provably indistinguishable from true randomness, messing with it will certainly not improve it, and will probably ruin it.


Running different pseudo-random generators in parallel and then XORing their results should have the property that if at least one of the input algorithms is "good", then the output algorithm is "good".

My intuition is based on the fact that this statement is correct for true random sources: If you take a bunch of random sources that generate streams of bits, then XORing those streams together will result in a stream of uniform and independent random bits if at least one of the source streams is a stream of uniform and independent random bits (this is an elementary argument in probability).

I'm not certain at the moment whether this generalizes to pseudo-random generators, but it seems plausible enough to expect that the XORed bit stream is at least as "good" in terms of pseudo-randomness as the best of the input streams.

Edit: Okay, I just realized that this is obviously false when the input streams themselves are correlated. For example, suppose that one of the input streams is PRNG A, and the other input stream is simply the negation of the output of PRNG A with the same seed. Then the output stream will consist of all 1s, which is not good. Perhaps some result can be salvaged when the seeds of the different PRNGs come from independent, truly random sources, but this means things get very tricky very quickly.
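
A toy demonstration of both halves of that comment (hypothetical code): XORing an unpredictable stream with a weak but independent one stays unpredictable, while XORing two correlated streams can wipe out all the randomness.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    good = os.urandom(16)            # stand-in for a "good" independent stream
    weak = bytes(16)                 # a terrible stream: all zeros
    print(xor(good, weak).hex())     # still as unpredictable as `good`

    negated = bytes(b ^ 0xFF for b in good)   # perfectly correlated with `good`
    print(xor(good, negated).hex())           # always ff...ff: no entropy left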


Right, I came to the same realization. You'd need to make sure the input stream(s) (ie. entropy source(s)) to the PRNGs in question were not shared.

Even if that was impractical, and this is pure conjecture from someone who is not a mathematician or cryptographer, then at the very least they should be well obfuscated, e.g. unpredictably delayed across a large enough temporal keyspace with a not-insignificant degree of feedback, or seeded with large initial random values. But I'm not a cryptographer, so that's all baseless assertion. IMHO the notion stands, though.


XORing multiple sources throws away lots of useful entropy. There are better approaches available, and they are used.

Edit: nhaehnle, I can't reply to your comment (yet) because it's too deeply nested. I don't know the direct answer, but if you want to do some research yourself, you could do worse than reading this relevant Wikipedia article (https://en.wikipedia.org/wiki/Randomness_extractor) and following the links. They don't talk about provable extractors in the article itself, but you can probably identify at least the right terms you need to search for on arXiv or so.


Is there a provably better way, though? I think this was basically the premise of the original question: given that there is doubt about the safety of some PRNG scheme, is there a proven way to combine multiple PRNG schemes with the property that the resulting PRNG is at least as secure as the most secure "input" PRNG?

If there is such a better way, then please provide more information - links and such.


Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. - John von Neumann

The very definition of a pseudo-random number generator is that it's deterministic, i.e. an algorithm. If you know what the state of the algorithm is, you know what the next number will be. Attempting to add some ad hoc behavior on top of crypto has two common issues:

First, adding deterministic behavior to deterministic behavior does not in any way spontaneously add randomness (see quote). In fact, it can expose additional state on which an attack can be focused. For instance, perhaps when I learn that the system you are using is an interleaved set of numbers, I can use the starting set from each to guess the seed on which all the PRNGs started. Second, crypto PRNGs that are well vetted are already at the top of the game for being apparently random in output. There shouldn't be a need to do anything like what you're suggesting, ever, with a valid PRNG. What Schneier is pointing out is that good crypto doesn't have a whiff of smell. Dual_EC_DRBG does. And badly. Just use the others instead.

Importantly, learn the first rule of crypto: unless you're well versed in crypto, presume you will do a very bad job of trying to implement something yourself. Understand what the crypto library is meant to be used for, understand how to use it properly, and don't try to do something on your own. It is hard for even the experts to do it well. Some simple "why don't I just..." is practically guaranteed to have a flaw. Adding your own deterministic behavior is a sin.


How about using a stream of true random numbers and putting it through a set of deterministic functions that "multiply" the quantity of random numbers?

e.g. make thousands upon thousands of functions, where the function names are sequential. Use the data itself to determine which of the functions will be called. Allow functions to call each other based on the random number values each calculates from its true random input. To figure out how much random data to produce, simply add an extra bit that is passed along to all the functions that indicates depth (how much the true random data should be processed before returning it to the program that needs a random number). This approach would produce a lot more random data, which could only be determined if you knew the original true random input and the amount of entropy that was required at the time, since you'd need to know the depth used to know when to exit the functions.


Err, I think you'll find what you described is roughly the definition of a PRNG (though usually more mathematical in its verbiage). The problem is, we don't trust any particular PRNG. My question was, why not combine them so as to remove the need for absolute trust in any given PRNG.


When part of your system becomes flawed (part or whole), the whole can be attacked. Thus, adding more deterministic behavior doesn't add more randomness (i.e. more safety). Generally, when crypto falls down, it falls somewhat gracefully (i.e. there is a class of known attacks under certain particular situations). As crypto is an ongoing war between those creating the security and those trying to break it, expect to have to change your crypto over time.


> When part of your system becomes flawed (part or whole), the whole can be attacked.

This is the most concise and meaningful argument against the proposed approach thus far. Hats off.


I really need to get up to speed on cryptography. I've been a software developer for a while but haven't had a chance to get into it. Part of the reason is time, the other reason is that I haven't a clue where to start. I've heard about the books Security Engineering and Cryptography Engineering. Are they good books for someone new to all this?


I'd recommend the Coursera crypto course: https://www.coursera.org/course/crypto


> shouldn't be a need to do anything like what you're suggesting, ever, with a valid PRNG

OK, but the point was that if we approach PRNGs like any other element of a secure system, i.e. without trust, as the article was suggesting, then combining them in some way may assist with QA on the overall output. (Example: I mean, we don't trust the NSA - but do we trust Schneier? He could be paid handsomely by the NSA as a false stooge and EFF board plant; who are we to know? And if you're an overseas government without a native cryptographic/cryptanalytic tradition, would you trust him? He's a US citizen connected to BT - that's UKUSA - two of the 'five eyes'!)

Similar to how multiple hashing algorithms are often used to preserve overall functional integrity in the case of individual hash function vulnerabilities that enable an attacker to propose collisions.


The difference is between a PRNG and encryption. Encryption's design is to obscure the data stream; thus, if there is a failure in one encryption layer it is masked by another. Personally, I do not think this is a particularly useful activity. Publicly vetted encryption rarely spontaneously falls over. A modern cipher, like AES-256, is well beyond the far future in computing requirements to brute force. It's better to be aware of changes over time and switch to a current crypto if things change, like how 3DES died. A PRNG, on the other hand, is a string of numbers. It doesn't obscure anything. Either A) you don't try to obscure, and part of your numbers are weak anyway and need to be replaced, or B) you're attempting to write your own crypto and you're going to do it badly. Using one PRNG to obscure another is a sin; you're trying to write encryption. (Sin being "to miss the mark" or point.)


The similarity was drawn to hashing algorithms (which function to produce a message digest or checksum given a message as input), not to cipher algorithms (which perform encryption to ciphertext given one or more keys and a plaintext as input).

A lot of people have responded suggesting there is no way to combine PRNGs to offset the risk of a single PRNG's potential compromise; however, I have not seen any citations to this effect, and it does not really make logical sense to me. Arguments tend to fall back to table-thumping on mathematical proofs, which is a demonstrably naive way of building a secure system if, for example, your mathematical process, computational platform, or side-channel security assumptions are outmoded by an attacker.


After a beer last night, feeling unimpressed by the rest of the responses here, I worked up the courage to email the ninja himself - http://www.schneierfacts.com/

His answer, of course, is both simple and perceptive... somehow we all missed it. Cheers, Bruce!

Of course it's feasible.

The question is never about how to add a whole bunch of rounds, or multiple ciphers, to make something secure. The question is always security for a given unit of performance.

With no performance constraints, just use any old thing for a thousand rounds.


The most important thing to know about Dual_EC_DRBG is that practically nobody uses it (I'd say "nobody ever uses it", but who knows, maybe something did?). It's a CSPRNG that involves bignum elliptic curve point multiplication. It's not something that's maybe slower than an alternative; it's something noticeably, horrendously slower than its alternatives. Even systems that use number theoretic crypto use it sparingly; nobody uses it to generate random numbers.


Intercepting signals is only half of the NSA mandate, the other half is securing national communications against interception by foreign governments and organizations.

It is possible that the NSA want other nations to have the impression that NSA audited communications protocols are insecure or may contain backdoors. They have never expressly come out to deny any of these accusations.

The involvement of the NSA thus doesn't imply that they wish to weaken standards. Perhaps they want people like Schneier to question the security of these protocols and create a false aura of potential vulnerability so that their opponents don't make use of them.


_NSAKEY discovered two years before patriot act is passed? coincidence? i think not!!1

said another way, what relates the two events in the editorialized title? Just an end of the innocence type vibe? Trusting the sigint guys to design your crypto has always been a well acknowledged double edged sword.


The Patriot Act was passed in late 2001. It was renewed in 2009.



So either the NSA wanted a backdoor, in which case we learn that even the NSA can't build a backdoor that academic cryptographers can't detect.

Or, much more likely I think, it was just a project that some NSA employees had sitting around and they wanted to get something out of it. In that case we learn that the NSA isn't so far ahead of academic cryptographers that their designs will always be better.

Either way I don't find this as scary a story as Schneier does.


They might be able to build backdoors that academic cryptographers can't detect, but this backdoor was special - even after the academics figured it out, the NSA are still the only ones that could use it because it requires a secret key whose public counterpart was baked into the Dual_EC_DRBG specification. Backdoors with that special property are going to be much harder to create.


Out of curiosity, why can't we just use a series of sensors on the computer to generate random numbers? Between mouse movements, touch inputs, video camera input, microphone input, the behavior of applications in your system and how they use resources like RAM, CPU, and hard disk, listening to all the wifi + bluetooth signals around you and munging them, etc., I would imagine that there is enough entropy coming in through input and being generated by whatever the computer is doing to have lots of unpredictable random numbers.

I don't know much at all about cryptography, but why aren't all the natural sources of entropy an adequate source of random numbers?


Every time you do a Diffie-Hellman key exchange you need to generate a random number. Which includes any time you open an https connection.

A busy webserver does this much more often than it can easily generate entropy for. So you have to take shortcuts. That's where a pseudo-random number generator comes in.


I don't think that application behavior would be a good source of entropy, since it is, at least in theory, predictable. The others are probably good, although you need to be sure to only use the least significant bits, and make sure that the sensor doesn't have some weird behavior which makes the least significant bits predictable somehow.

If I had to hazard a guess, I'd say that this isn't often done simply because computers didn't typically have a lot of sensors until recently, and now you're likely to have a good-quality dedicated hardware random number generator built in, e.g. Intel's RDRAND instruction.


> I'd say that this isn't often done simply because computers didn't typically have a lot of sensors until recently

I thought it was done a lot, for example, in the Linux kernel: "The random number generator gathers environmental noise from device drivers and other sources into an entropy pool." [1]

[1] http://man7.org/linux/man-pages/man4/random.4.html


Indeed, although as the man page describes, /dev/random is too slow for most practical purposes. The entropy pool drains rather quickly and then the device will block while the pool refills.


This sort of thing is used, e.g. [0]. It collects entropy at a limited rate, though, so for demanding applications (SSL servers, say) you want dedicated hardware: [1], [2].

[0]: https://en.wikipedia.org/wiki/Hardware_random_number_generat...

[1]: https://en.wikipedia.org/wiki/RdRand

[2]: http://www.entropykey.co.uk/


How do those Entropy Keys work? What sort of environmental random data do they rely on for creating random bits?


Thermal noise of various kinds, usually.

(Edit: that is, the noise introduced in a circuit by the fact that it's not at absolute zero, e.g. Johnson noise. Not just gathering entropy from a temperature sensor reading or something like that.)


I'm not sure if it's still that way, but I know PGP used to do that on Windows 95/98. You would have to move your mouse around in a window until it generated enough entropy to generate your key. I'm pretty sure I've seen it in either ssh-keygen or openssl tooling also, but that might have been in a virtual machine/jail with a poor or missing /dev/random.

Also I believe that is where /dev/random might get some of its information from, but I'm not too sure.


I'm pretty much 100% sure that's how /dev/random works. Not only have I heard that multiple times, but once when I was installing arch linux or something, my computer ran out of random bits and told me I had to wait for it to get some more from the hardware. I don't think it actually asked me to play with the keyboard, but IIRC that's what it took. Letting it just sit there didn't seem to do the trick.


you're right, my friend. look up entropy. it's used in linux and windows kernels, and some processors have it implemented at a hardware level.


If you've got time for a network call, there's http://www.fourmilab.ch/hotbits/ (There's another lab somewhere measuring quantum fluctuations in a vacuum that I think also provides access to random numbers. Edit: Here it is: http://qrng.anu.edu.au/index.php)


I believe that, at least in Linux, ACPI thermal sensor readings are used for this purpose.


It has probably been done, but there is not much entropy in those low-precision temp readings. Linux support for ACPI is not great anyway.


Lava Lamp and camera?


OK I have a probably naive/nonsensical cryptography question. It seems that one good way of transmitting a secret would be to do so with everyone thinking it is encrypted one way (or possibly believing it has not been encrypted) when the truth is it has been encrypted some other way. I.e. it would be a difficult problem to know when the message has truly been decrypted. So basically there would be multiple decrypted versions which are plausible as the true decrypted message. Is there any parallel to that sort of thing in modern mathematical cryptography?


Encrypted traffic should be indistinguishable from random numbers. A quick test is trying to compress it, as true randomness has no patterns and hence nothing to compress.
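
For instance, a quick (and very crude) version of that test:

    import os, zlib

    random_blob = os.urandom(4096)
    text_blob = b"the quick brown fox jumps over the lazy dog " * 100

    # Random data barely compresses; structured plaintext shrinks a lot.
    print(len(zlib.compress(random_blob)), "/", len(random_blob))
    print(len(zlib.compress(text_blob)), "/", len(text_blob))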

In general you can tell exactly what encryption was used, because protocols start out in the clear and include some sort of negotiation. https://en.wikipedia.org/wiki/Transport_Layer_Security#Proto...

However both ends can separately agree that if they select one cipher suite, they will be secretly using a different one.


TrueCrypt does something like this by hiding encrypted data within random bytes that are supposed to be free space.

In cases where the key does not need to be memorized, a simple XOR cipher provides the property that you can create a "key" for any ciphertext and desired message.
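
A minimal sketch of that XOR property (hypothetical code): given the ciphertext, anyone can manufacture a "key" that decrypts it to whatever innocuous message they like.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    secret = b"attack at dawn"
    key = os.urandom(len(secret))
    ciphertext = xor(secret, key)

    decoy = b"buy more milk!"                  # same length as the real message
    fake_key = xor(ciphertext, decoy)          # forged after the fact
    assert xor(ciphertext, fake_key) == decoy  # "decrypts" to the decoy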


I'll always question any cryptographic standard adopted or advised by government.

I have an hard time even trusting AES.

http://crypto.stackexchange.com/questions/579/how-can-we-rea...

Honestly, I think those issues (like cryptanalysis) are totally beyond the public's ability to comprehend, so it's hard to really know how to make strong assumptions.


Can anyone confirm that 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0 is not the backdoor sequence?

Seriously though, have we the people figured out this sequence yet? If there is a base sequence, and a relationship to a second sequence, I feel like this can't be too difficult to figure out mathematically, or through brute force means. Can someone who actually knows what they're talking about properly rebut me?


The ANU Quantum Random Number generator http://qrng.anu.edu.au/ can be used as a source of additional randomness for your server's pseudo RNG, or directly. They monitor quantum fluctuation of a vacuum to generate the random numbers at a high rate.


While it's an interesting project (I had fun playing with it) you really shouldn't be relying on remote sources of entropy for security purposes.

Generally the entropy needs to be secret to be effective so sourcing remote entropy reduces your security to that of the transport security (if any). It's hard to imagine a situation where you have a secure transport mechanism but do not have enough entropy since most encryption schemes that you might use to secure the transmission require secure random number generation.

Disclaimer: I am not a cryptographer, cryptology frightens and confuses me.


> most encryption schemes that you might use to secure the transmission require secure random number generation.

If I understand this correctly, most channel securing schemes require RNG for key exchange (e.g "no prior knowledge" DH key exchange) or pair generation (RSA), then you can move on to a symmetric cipher (whose key is derived from the resulting DH shared secret, or exchanged encrypted via RSA), which requires no RNG (until you want to change the key for PFS).

So one can assume a scenario where only a quantum of entropy is needed to establish a secure connection, then refuel the entropy bucket with random data transmitted over the now-secure yet "non entropy consuming" channel.


Note that if you have the option, it's almost certainly better to get a new Intel processor that supports rdrand (not quantum, but still true hardware random) than to trust some university's server. Interesting, though.


Much of the discussion in previous comments has to do with the generation of true random numbers. Does this need to happen on a client or on a server? If it's on a client, then isn't a cell phone an ideal source of randomness, due to all of the sensors on board?


This random generator is found in some versions of Windows:

http://www.schneier.com/blog/archives/2007/12/dual_ec_drbg_a...

I don't know if it's in Windows 8 or not.


The link to msdn in your post is still working.

> The dual elliptic curve random-number generator algorithm. Standard: SP800-90

> Windows 8: Beginning with Windows 8, the EC RNG algorithm supports FIPS 186-3. Keys less than or equal to 1024 bits adhere to FIPS 186-2 and keys greater than 1024 to FIPS 186-3.


Great reading. Have you any idea how things evolved after that with those 4 proposed methods?


No mention of authors from the NSA or reviewers from NIST...


[deleted]


"Daytona" was built to collect all CDRs. It dates back to the '70s.




