Is there a proof of this somewhere? Operations that seem vaguely similar, like doubling repeatedly, can be easy to "parallelize".
Each step depends on the result of the previous step. I don't know of any claim that the crypto/maths behind the paper do not hold.
You can use an FPGA or build an ASIC, but as far as I know, as long as the crypto in this paper holds, it's not parallelizable.
But this doesn't mean anything. If I said to take a number and add 1 eighty trillion times, each step would depend on the result of the previous step, but obviously it would be unnecessary to actually compute each step.
A fast way to calculate something like `g^x` for some large `x` is to square iteratively.
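That iterated-squaring idea can be sketched as follows. The concrete values of `g`, `x`, and the modulus below are illustrative choices of mine, not from the thread:

```python
def pow_by_squaring(g, x, n):
    """Compute g^x mod n by square-and-multiply."""
    result = 1
    base = g % n
    while x > 0:
        if x & 1:
            # The current low bit of x is set: fold g^(2^i) into the result.
            result = (result * base) % n
        # Square: base walks through g^1, g^2, g^4, g^8, ...
        base = (base * base) % n
        x >>= 1
    return result

print(pow_by_squaring(5, 117, 19))  # same answer as Python's pow(5, 117, 19)
```

The loop performs roughly `log2(x)` squarings instead of `x` multiplications, which is why exponentiation escapes the naive `1+1+...+1` analogy.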
Of course, this doesn't prove that computing `g^x` is sequential, but it does show why it is different than computing `1+1+...+1`.
You can compute g^(2^i) by just multiplying g's in the same way that you can compute 2^i by adding 1s, and multiplication is as associative as addition is. Specifying that you want g^(2^i) rather than g^k for some arbitrary k looks like a way of ensuring that a partial product computed in parallel (say, g^3) will be of nearly no value, because every partial product that helps compute the total will already be produced by the single agent doing squares.
So why compute g^(2^i) rather than just 2^i? Is 2^i in fact hard to compute? (In binary representation, it certainly isn't, but maybe 3^i is?)
OK, I read the paper.
This is the entire discussion of the inherent difficulty of repeated squaring:
> More importantly, repeated squaring seems to be an "intrinsically sequential" process. We know of no obvious way to parallelize it to any large degree. (A small amount of parallelization may be possible within each squaring.) Having many computers is no better than having one. (But having one fast computer is better than one slow one.) The degree of variation in how long it might take to solve the puzzle depends on the variation in the speed of single computers, and not on one's total budget.
I see two problems here:
(1) This is a purely conclusory argument, suggesting that a better response would have been a simple "no, there's no proof".
(2) This does not even attempt to address the question I posed, which was whether it is necessary to compute the intermediate results in order to compute the final result.
In fact, we know that the intermediate results are not necessary to compute the final result, because that is how the message gets enciphered in the first place: the shortcut is available to anyone who can factor the modulus, and there is, of course, no proof that factoring is difficult.
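That shortcut can be illustrated with toy numbers. The primes, base, and step count below are my own assumptions for demonstration, not values from the paper: whoever knows the factorization of `n` reduces the exponent `2^t` modulo `phi(n)` and skips the `t` sequential squarings entirely.

```python
# Toy time-lock parameters (illustrative only; real puzzles use huge primes).
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
g = 7
t = 10**5  # the puzzle asks for g^(2^t) mod n

# Without the factorization: t sequential squarings.
slow = g % n
for _ in range(t):
    slow = (slow * slow) % n

# With the factorization: reduce the exponent 2^t mod phi(n) first
# (valid here because gcd(g, n) == 1), then one fast exponentiation.
e = pow(2, t, phi)
fast = pow(g, e, n)

assert slow == fast
```

This is exactly the asymmetry the scheme relies on: the puzzle's creator computes `fast`, while everyone else is conjectured to be stuck computing `slow`.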
But beyond that, it also isn't immediately obvious that it must necessarily be impossible to compute the final result directly by means that don't require factoring the modulus. It is possible that somewhere in the literature on Blum Blum Shub this question has been analyzed, but that goes a bit beyond your suggestion to "read the paper".
So, I'm stumped. Tell me what you saw in the paper that you thought would answer my question?
Well, there's no mathematical proof. But physicists and judges would accept that there's lots of money and fame to be made by breaking, e.g., RSA, so countless people have tried and keep trying, but haven't come up with much so far.
This is similar to the answer to, "Is there any proof that RSA is uncrackable?"
The security is based upon the assumed hardness of certain mathematical problems.