Hacker News

"...if you have two low-quality random sources...you can combine them in a way to produce a high-quality random number..."

I tried to skim the paper, but it's really dense. Can someone who understands it explain how what they did is different than the obvious approach of running inputs from the two sources through a cryptographically strong hash function?




The most notable thing is the lack of assumptions: you don't have to assume anything beyond the min-entropy of the sources. With a hash function, you generally have to assume it is a good extractor rather than prove that it is. What we have here instead is an extractor built up from first principles.
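For contrast, the "obvious approach" from the question — hashing the two raw samples together — looks something like this. This is only a sketch, and it rests on the heuristic assumption that SHA-256 behaves like a good extractor, which is exactly the assumption the paper avoids:

```python
import hashlib
import os

def extract(sample_a: bytes, sample_b: bytes) -> bytes:
    # Heuristic extraction: hash the concatenation of two raw samples.
    # This assumes SHA-256 behaves like a good extractor -- the very
    # assumption the paper's construction does away with.
    return hashlib.sha256(sample_a + sample_b).digest()

out = extract(os.urandom(64), os.urandom(64))
print(len(out))  # 32
```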

Also, note that you need at least two independent sources, because no single function can be a good extractor for every imperfect source. For example, suppose you use a hash function H to extract one uniform bit from an n-bit input. If the input source is uniform over the set of all n-bit strings x such that H(x) = 0 — a source with nearly full min-entropy, about n - 1 bits — your 'randomness extractor' is just a constant function. This is highly contrived, but that is what you have to deal with when you say "any" in a theorem statement.
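The contrived counterexample can be demonstrated directly. A small Python sketch, using the first output bit of SHA-256 as the stand-in for H:

```python
import hashlib
import os

def h_bit(x: bytes) -> int:
    # First output bit of SHA-256, standing in for the hash H above.
    return hashlib.sha256(x).digest()[0] >> 7

# An adversarial source: uniform over {x : H(x) = 0}, built by rejection
# sampling 16-byte strings. About half of all strings qualify, so the
# source still has roughly n - 1 bits of min-entropy.
samples = []
while len(samples) < 100:
    x = os.urandom(16)
    if h_bit(x) == 0:
        samples.append(x)

# Yet the "extractor" H is a constant function on this source:
outputs = {h_bit(x) for x in samples}
print(outputs)  # {0} -- zero bits of randomness extracted
```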

To fix this, you need at least two sources, where one source 'keys' the other; as long as the sources aren't working together (i.e., they are independent), there is nothing they can do to sabotage the extraction. The inner product over GF(2) is a simple example of a two-source extractor that works as long as each source has at least n/2 bits of min-entropy. I have no idea what the construction in this new paper looks like, but it improves on that min-entropy requirement. Since the improvement is asymptotic, though, it's unclear whether it is useful at all for realistic input lengths and min-entropies.
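As a concrete sketch, the inner-product extractor mentioned above fits in a few lines of Python (illustrative only — the guarantee about the output bit being near-uniform is the theorem's, for independent sources each with more than n/2 bits of min-entropy):

```python
import os

def inner_product_bit(x: bytes, y: bytes) -> int:
    # Inner product of x and y over GF(2): equivalently, the parity of
    # the bitwise AND of the two inputs.
    acc = 0
    for a, b in zip(x, y):
        acc ^= a & b
    return bin(acc).count("1") & 1

# Quick empirical look: with two independent full-entropy 64-bit inputs
# (a stronger condition than the theorem needs), the output bit should
# show little bias.
bits = [inner_product_bit(os.urandom(8), os.urandom(8)) for _ in range(10_000)]
print(f"empirical bias: {abs(sum(bits) / len(bits) - 0.5):.3f}")
```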

In practice, though, this is entirely irrelevant. The work-horse tools of randomness extraction are hash functions, block ciphers, and universal hashes, so this new paper is interesting from a theoretical point of view only. Yet another example of university PR departments being insultingly misleading in their press releases.



