Hacker News

yup. To elaborate (informally) on why theorists believe BPP = P: basically, if we believe cryptography works and hence that cryptographically strong PRGs exist, then every BPP algorithm can be derandomized.



How does this follow? I understand that PRNGs let you construct randomized algorithms in practice, but how do you transfer probably polynomial runtime to definitely polynomial runtime?


It's not "probably polynomial time", it's polynomial time with bounded error probability. Derandomization means removing the randomness and bringing the error probability to 0.

The definition of a PRNG in complexity theory (for the purposes of this topic) is that no polynomial-time algorithm can look at its output and distinguish it from real randomness (with high confidence etc etc).

Now imagine you have a PRNG whose seed size is logarithmic in the output size. You can compose it with the BPP algorithm and iterate over all possible seeds in polynomial time (the seed is logarithmic, so there are only polynomially many), taking the majority result. You have built a polynomial-time algorithm that, if it gave incorrect results, would let you distinguish the PRNG's output from real randomness. So it must be correct.
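A minimal sketch of that seed-enumeration argument, assuming a hypothetical `toy_prg` stands in for a real cryptographic PRG (the names `toy_prg`, `derandomize`, and `bpp_algo` are illustrative, not from any library):

```python
import itertools

def toy_prg(seed_bits, out_len):
    # Hypothetical stand-in for a real PRG: stretches a short seed into a
    # longer bit string. A genuine PRG would be cryptographically strong;
    # this toy version just repeats the seed with alternating flips.
    out = []
    for i in range(out_len):
        bit = seed_bits[i % len(seed_bits)] ^ ((i // len(seed_bits)) % 2)
        out.append(bit)
    return tuple(out)

def derandomize(bpp_algo, x, seed_len, rand_len):
    # Enumerate all 2**seed_len seeds, run the BPP algorithm on each PRG
    # output, and return the majority answer. This is polynomial time
    # whenever seed_len is O(log n).
    votes = 0
    for seed in itertools.product((0, 1), repeat=seed_len):
        r = toy_prg(seed, rand_len)
        votes += 1 if bpp_algo(x, r) else -1
    return votes > 0
```

With a log-length seed the loop runs only polynomially many times, which is the whole point of the construction.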


it can be "probably polynomial time"... that's ZPP! (ZPP is the complexity class of languages accepted by a randomized Turing machine in expected polynomial time; in the worst case, though, the machine is allowed to take arbitrarily long.) Note that in ZPP the error probability is 0... it always returns YES or NO correctly.

It's unknown whether ZPP = P (but it almost certainly does). It is known that ZPP sits at the base of the randomized hierarchy: ZPP = RP ∩ co-RP, where RP is the class of languages accepted with one-sided bounded error.
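A quick sketch of the ZPP flavor of randomness, i.e. a Las Vegas algorithm (the function name and the half-ones setup are illustrative assumptions, not a standard example):

```python
import random

def las_vegas_find_one(bits, rng=random.Random(0)):
    # ZPP-style (Las Vegas) search: sample random indices until a 1 is
    # found. When half the entries are 1 the expected number of samples
    # is 2, but the worst case is unbounded. The answer returned, however,
    # is always correct -- zero error probability.
    while True:
        i = rng.randrange(len(bits))
        if bits[i] == 1:
            return i
```

Expected polynomial time with zero error, as opposed to BPP's guaranteed polynomial time with bounded error.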

Any good complexity textbook beyond Sipser will discuss the randomized complexity classes in great detail, since they are crucial for constructing all the other interesting complexity classes (notably IP and MIP).


Sure, there are plenty of complexity classes :) but TFA is about derandomization of BPP; even if it doesn't mention the class by name, it talks about Nisan-Wigderson.


If you can generate pseudorandom numbers well enough deterministically in polynomial time, you can simulate any algorithm in BPP in polynomial time on a deterministic Turing machine, so BPP = P.


Can you recommend an accessible proof for this? (is this simply because if the error probability is too low, it must be zero?)

Are there problems in BPP that can defend against "helping" PRNGs by diagonalizing against them somehow?


A BPP algorithm can be seen as a deterministic algorithm that takes a string of random bits as part of its input. One way to derandomize a BPP algorithm is to run it with every possible random string and pick the answer that the majority of random strings lead to. This blows up the running time by 2^r with r being the length of the random string, making the running time exponential.

If the PRNG can take a seed of length log(n) and generate a random string from it, then you might try to take a BPP algorithm and reduce the length of the random input to log(n) using the PRNG. We assume that the PRNG being good enough means this new algorithm still has the usual BPP guarantee: it gives the correct answer for at least 2/3 of the seeds.

However, now the derandomization of running it on every possible seed just gives a slowdown of 2^log(n) = n, which means that the running time stays polynomial.
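The arithmetic behind that speedup can be spelled out concretely (the specific n and the choice r = n are illustrative assumptions):

```python
# Cost of exhaustive derandomization, with and without a log-length seed.
n = 1024                          # input size (illustrative)
r = n                             # suppose the BPP machine reads r = n random bits
seed_len = (n - 1).bit_length()   # ceil(log2(n)) = 10 for n = 1024

full_enum = 2 ** r                # enumerate every random string: 2^1024 runs
seeded_enum = 2 ** seed_len       # enumerate every seed: 2^log2(n) = n runs
```

Enumerating all random strings costs 2^n, hopelessly exponential; enumerating all seeds costs only n runs of the algorithm, a polynomial slowdown.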


Can't you get extremely unlucky, so that for some input all the PRNG seeds map into the 1/3 of unhelpful random strings?


How does PRNG follow from cryptography?


The cryptographic definition of a PRNG is an algorithm whose output is indistinguishable from true randomness by any polynomial-time process. You can think of a BPP algorithm A as a deterministic one that takes randomness as an additional input. If the PRNG really is a PRNG, then you can replace A's random input with the PRNG's output, and A's acceptance probability can't change noticeably (otherwise A itself would be a distinguisher).
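The contradiction can be sketched as an empirical "distinguishing advantage" check (the function name and sampling setup are hypothetical illustrations, not a standard API):

```python
def distinguisher_advantage(algo, x, true_samples, prg_samples):
    # If algo's acceptance rate on true random strings differed noticeably
    # from its rate on PRG outputs, then algo (with the input x hardwired)
    # would itself be a polynomial-time distinguisher, contradicting the
    # PRG's security. This estimates that gap from two sample sets.
    p_true = sum(1 for r in true_samples if algo(x, r)) / len(true_samples)
    p_prg = sum(1 for r in prg_samples if algo(x, r)) / len(prg_samples)
    return abs(p_true - p_prg)
```

PRG security says this advantage must be negligible for every polynomial-time algo, which is exactly why swapping in the PRG preserves A's behavior.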



