You don't need a hardware RNG. If you're running a modern Linux distro, you can safely just pull as much randomness out of /dev/urandom as your heart desires.
http://www.2uo.de/myths-about-urandom/ explains the issue well, but I hesitate to recommend it because it uses that horrid "Myth: Fact" antipattern (with the Myths in bold!) that has the psychological effect of making the reader remember the false statements and come away with the wrong memories.
I recommend copy-pasting the document, deleting all the "myth" garbage, and reading the explanations.
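For the common case, that really is the whole story. A minimal sketch of what "just use /dev/urandom" means in practice (on Linux 3.17+ the getrandom(2) syscall does the same job without a file descriptor):

    /* Minimal sketch: pull 32 random bytes straight from /dev/urandom with
     * plain POSIX I/O. On Linux >= 3.17 the getrandom(2) syscall does the
     * same job without needing a file descriptor. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        unsigned char buf[32];                       /* e.g. a 256-bit key */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        if (read(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("read");                          /* small reads don't short-read in practice */
            return 1;
        }
        close(fd);

        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }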
Good point. True for just about any commercial system as they're commonly used. Host, app, and network are where the attack will be 99 times out of 100, so that's where 99+% of the effort should go.
No. There isn't even a 1% use case where this is useful. I'm not saying "your effort is better spent elsewhere". I'm saying "doing silly things to get better random numbers is just as likely to harm your security as to help it".
High security and low subversion systems aren't allowed to have complex crap like Linux in the TCB. They also prefer to offload RNG and crypto onto dedicated hardware. Examples would be NSA's Type 1 certified products and my own gear from the past.
The mere requirement of knowing every execution and failure state in the TCB eliminates Linux + /dev/urandom instantly. A few FSMs plus some analog circuits can be verified by eye, by hand, and with formal verification if desired.
I don't doubt that there are regulatory regimes with broken rules that require people to add harmful complexity to systems in order to satisfy rubber chicken security requirements, but I'm going to call those rules what they are.
They actually force simpler designs and catch more defects, according to every report ever done on one. The last survey I read had 96% reporting a great increase in QA metrics, with almost half saying the cost/time was negligible, especially if you use automation. The reason? Virtually no time spent debugging broken components or integrations. The Windows and Linux stuff I've seen you trust has a horrid track record, on the other hand: so many preventable errors, lack of POLA, covert channel analysis that's nearly impossible... the list goes on. Those processes you "call out" lead to none of that, because they force discipline on the developers and reviewers.
Now, the certification bodies, paperwork focus, etc. can be harmful. It's why I got private evaluations for my stuff. I saw Sentinel do the same for HYDRA using the NSA, and Secure64's SourceT (medium assurance) had a positive review from Matasano for its POLA/resilience. And that's COTS. Sirrix too, using the Nizza architecture with a tiny TCB. So the basic principles are there, the improvements are often dramatic, and several companies are doing it, so the rest have no technical excuse for using weaker methods.
Do you mean that there are special-case situations where entropy is extremely low? Just after reboot? Just after first install? Just after booting a VM from an image (where the "entropy" could be public knowledge)? In a container?
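For anyone actually in one of those situations, the kernel at least exposes its own estimate, so you can see how bad things are. A quick sketch reading /proc/sys/kernel/random/entropy_avail:

    /* Sketch: print the kernel's current entropy estimate. A VM freshly booted
     * from a cloned image, or a box right after first install, will often show
     * a conspicuously low number here. */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        if (!f) { perror("fopen"); return 1; }

        int bits = 0;
        if (fscanf(f, "%d", &bits) == 1)
            printf("kernel entropy estimate: %d bits\n", bits);
        fclose(f);
        return 0;
    }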
I was referring to assumptions that let one equate breaking urandom with breaking arbitrary cryptosystems in other ways. Examples:
1. Breaking urandom's construction is much less likely than breaking a less-trodden hardware device.
2. If hashes in general are broken, then any crypto of interest is also broken (counterexample: non-complexity-based crypto like a one-time pad (note: Do NOT construe this as any sort of practical endorsement of "one time pads". If you do not understand the concept of malleability, then forget this entire comment exists and go read up on that instead; there's a sketch of what it looks like below this list!)).
3. Backdooring RDRAND is equivalent to backdooring the entire CPU (counter: a model of auditability that constrains the complexity of a backdoor).
These assumptions are mostly reasonable, but they are assumptions nonetheless.
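Since I leaned on malleability above, here's a rough sketch of what it means for a plain XOR pad, with made-up pad bytes and message: the attacker rewrites the message without ever knowing the pad.

    /* Sketch: malleability of a plain XOR pad. The attacker never learns the
     * pad, but by flipping ciphertext bits they flip the same plaintext bits.
     * Pad bytes and message are made up purely for illustration. */
    #include <stdio.h>

    int main(void) {
        unsigned char pad[12] = { 0x3a,0x91,0x5c,0x07,0xe2,0x48,
                                  0xb3,0x6d,0x11,0xf0,0x29,0x84 };
        unsigned char msg[]   = "PAY $0100.00";      /* 12 bytes + NUL */
        unsigned char ct[12], tampered[13] = {0};

        for (int i = 0; i < 12; i++) ct[i] = msg[i] ^ pad[i];

        /* Attacker: knows the amount starts at offset 5, wants '0' -> '9'. */
        ct[5] ^= '0' ^ '9';

        for (int i = 0; i < 12; i++) tampered[i] = ct[i] ^ pad[i];
        printf("receiver sees: %s\n", tampered);     /* prints "PAY $9100.00" */
        return 0;
    }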
Embedded systems do suffer from a demonstrable lack of entropy. Studying techniques for hardware entropy collection is interesting in its own right, which is why it's disappointing that this page seems more focused on productizing their device as a black box USB stick rather than concentrating on open review.
Maybe for the rare cases you might need an extremely high rate of random numbers? I'm trying to think of applications where you need both the security and high rate but it's quite hard indeed (simulations and things like procedural texture/art gen are high rate but non-secure, while secure things are all low rate).
I know it doesn't matter security-wise, but the hardware still has to do the (crypto) work, whereas hardware TRNGs can get it for "free": the price per bit of entropy can be smaller for a well designed TRNG.
This device apparently generates a 300 Gbps random stream reliably. That's not easy or cheap to do with DRBGs. But as I said, I can't think of any application where this would be required.
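If you want to sanity-check the "not easy/cheap" part on your own machine, here's a throwaway benchmark of the kernel CSPRNG via getrandom(2); the 300 Gbps figure is the vendor's claim, not something I've measured.

    /* Throwaway benchmark: how many Gbit/s of output the kernel CSPRNG delivers
     * on one core via getrandom(2). Compare against the ~37.5 GB/s a 300 Gbps
     * device would have to sustain. Requires Linux >= 3.17 and glibc >= 2.25. */
    #include <stdio.h>
    #include <sys/random.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        static unsigned char buf[1 << 20];           /* 1 MiB per request */
        const int iters = 256;                       /* 256 MiB total */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++) {
            size_t got = 0;
            while (got < sizeof buf) {               /* requests > 256 B may short-read */
                ssize_t n = getrandom(buf + got, sizeof buf - got, 0);
                if (n < 0) { perror("getrandom"); return 1; }
                got += (size_t)n;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f Gbit/s from the kernel CSPRNG\n",
               iters * (double)sizeof buf * 8.0 / secs / 1e9);
        return 0;
    }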
Got some downvotes for the parent, and I'd like to be corrected. Don't IVs and nonces for AES-CBC and AES-CTR require 16-32 random bytes, among other things?
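What I had in mind is something like the sketch below (using OpenSSL, and assuming I've got the sizes right: the CBC IV is 16 bytes and must be unpredictable, while the 32 bytes would be the key; a CTR nonce only has to be unique, not secret).

    /* Sketch: a fresh random 16-byte IV for AES-256-CBC with OpenSSL. The IV
     * must be unpredictable per message but is sent in the clear alongside the
     * ciphertext; the 32-byte value here is the key, which in real code comes
     * from a KDF or key exchange rather than straight from the RNG like this. */
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>

    int main(void) {
        unsigned char key[32], iv[16];

        if (RAND_bytes(key, sizeof key) != 1 || RAND_bytes(iv, sizeof iv) != 1) {
            fprintf(stderr, "RNG failure\n");
            return 1;
        }

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (!ctx || EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1) {
            fprintf(stderr, "init failure\n");
            return 1;
        }
        /* ... EVP_EncryptUpdate / EVP_EncryptFinal_ex over the payload ... */
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }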
Seems like there's a lot of good information there, but I'm way out of my league so YMMV.
It's too bad they dismiss using radioactive decay for "practical reasons". There's a guy[1] who has been playing with that, off and on, for over 30 years. He's still at it, I just got some fresh random bits from him a few minutes ago. His hardware doesn't seem that expensive.
BTW we just had a discussion on weak RNGs on the web a few days ago: https://news.ycombinator.com/item?id=10030036
As usual with HN, the comments are at least as informative as the article itself.
There's a ubiquitous product, probably 100+ million instances currently operational just in the USA. This product costs about $10 to make. It has both a radioactive source and a way to (indirectly) detect alpha particles.[1]
Yes it's not a perfect analogy. But surely there's a way to build a much cheaper alpha detector?
They seem to be saying the right things, but there's no schematic? How is anyone supposed to evaluate, discuss, and independently test whether their design fulfills their criteria?
I like it, if the implementation matches the spec. The quote below covers about every desirable property you want in a TRNG for high-assurance security. Using simple circuits based on widely proven physics is much better than complex circuits leveraging cryptographic assumptions, and often more efficient too. This could also have applications in security-critical boards and ASICs (especially mixed-signal) aiming to re-use proven blocks. I've forwarded it to a high-security specialist who sometimes makes and breaks hardware TRNGs to get his take on it. Hopefully he'll reply while it's still getting HN attention.
"So from this, we have a theoretically strong basis for efficient extraction of good entropy. We have sources of noise which are naturally white and for which the best scientific understanding is that the mechanisms underlying them are subatomic scale events which are as independently random as any kind of physical phenomenon is known to be. We have multiple sources of noise that are not all influenced by the same set of physical or environmental conditions. We have a mechanism to extract it which ensures good mixing over time and is robust against interference from predictable signals, over a wide range for the noise floor mean. We can implement it in a way which makes it obvious that no programmed manipulation of the process output is taking place. It is self stable and doesn't require continuous tuning to operate correctly. And we can clock bits out of it at just about any rate we please up to the effective bandwidth of the circuitry responsible for doing that."
What I never get about analog RNGs is, couldn't you send a strong, cyclically repeating RF signal at the RNG, and reduce the entropy (sort of) remotely?
All new Intel CPUs now contain a built-in true hardware RNG (RDRAND), why not just use that?
Unless you think Intel might have put some kind of backdoor in it (hard to believe), in which case they could also be intercepting your USB communication with this device and substitute their own evil numbers.
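For reference, using it from C is a couple of lines with the compiler intrinsic; sketch only, and real code should probe CPUID for the feature first.

    /* Sketch: 64 random bits from RDRAND via the compiler intrinsic.
     * Build with: gcc -mrdrnd rdrand.c
     * The intrinsic returns 0 when the hardware transiently fails to supply a
     * value, so Intel's guidance is to retry a bounded number of times. */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        unsigned long long r = 0;
        int ok = 0;

        for (int tries = 0; tries < 10 && !ok; tries++)
            ok = _rdrand64_step(&r);

        if (!ok) {
            fprintf(stderr, "RDRAND returned no value\n");
            return 1;
        }
        printf("0x%016llx\n", r);
        return 0;
    }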
People that buy aftermarket hardware RNGs generally operate under the assumption --- which I agree is crazy --- that Intel backdoored RDRAND.
But this is all so silly. So far as computer science currently understands, the software CSPRNG on modern Linux distros is perfectly adequate for all crypto.
I'd probably use RDRAND if I could count on having it, and if I was designing my own virtualized cloud hosting environment that was going to run at scale I might spend a few hours figuring out how to provision cold-start seeds to new VMs, but other than that I never think about this stuff. Just use urandom!
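If I did go down that road, "provisioning a seed" would look roughly like the sketch below: inject host-supplied bytes into the guest's pool with the RNDADDENTROPY ioctl (root only), since a plain write to /dev/urandom mixes the bytes in but doesn't credit any entropy. The /run/vm-seed path is invented for the example.

    /* Sketch: inject a host-provided seed into a fresh guest's entropy pool and
     * credit it, via the RNDADDENTROPY ioctl (needs CAP_SYS_ADMIN). The path
     * /run/vm-seed is made up; a plain write(2) to /dev/urandom would mix the
     * bytes in without crediting any entropy. */
    #include <fcntl.h>
    #include <linux/random.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        unsigned char seed[64];
        union {
            struct rand_pool_info info;
            unsigned char raw[sizeof(struct rand_pool_info) + sizeof seed];
        } req;

        int sfd = open("/run/vm-seed", O_RDONLY);            /* hypothetical seed file */
        if (sfd < 0 || read(sfd, seed, sizeof seed) != (ssize_t)sizeof seed) {
            perror("seed file");
            return 1;
        }
        close(sfd);

        req.info.entropy_count = 8 * (int)sizeof seed;       /* claim 512 bits */
        req.info.buf_size = (int)sizeof seed;
        memcpy(req.info.buf, seed, sizeof seed);

        int rfd = open("/dev/urandom", O_WRONLY);
        if (rfd < 0 || ioctl(rfd, RNDADDENTROPY, &req) < 0) {
            perror("RNDADDENTROPY");
            return 1;
        }
        close(rfd);
        return 0;
    }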