
Who needs qubits? Factoring algorithm run on a probabilistic computer - Tomte
https://arstechnica.com/science/2019/09/who-needs-qubits-factoring-algorithm-run-on-a-probabilistic-computer/
======
NoKnowledge
It's strange that the authors pick integer factorization as the problem to
solve on their machine. Although their machine may provide a speedup for
optimization problems, these speedups are not relevant for the problem of
integer factorization, as shown in
[https://arxiv.org/abs/1902.01448](https://arxiv.org/abs/1902.01448)
(disclaimer: I am a co-author of that paper).

Skimming over the paper, it seems their method of translating factorization
into the optimization problem consists of simplifying equations, without any
justification that this can be done efficiently. I suspect their preprocessing
step is already NP-hard.

The second and more important issue is that the overall strategy---of
translating a problem with sub-exponential classical complexity (via the
Number Field Sieve) into an optimization problem with exponential runtime---is
not expected to succeed, as confirmed by careful measurements in our paper.
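
To make the gap concrete, here is a rough back-of-the-envelope comparison (my
numbers, not from either paper): the Number Field Sieve runs in sub-exponential
time exp((c + o(1)) (ln N)^(1/3) (ln ln N)^(2/3)) with c = (64/9)^(1/3), while
a generic search or annealing over the bits of a factor scales exponentially in
the bit length.

    import math

    # Hypothetical illustration: compare the NFS cost estimate with brute
    # search over one factor of an n-bit modulus. Constants and o(1) terms
    # are ignored, so treat the output as an order-of-magnitude sketch only.
    def nfs_cost_bits(bits):
        ln_n = bits * math.log(2)                 # ln N for an n-bit modulus
        c = (64 / 9) ** (1 / 3)
        return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

    bits = 2048
    print(f"NFS            ~ 2^{nfs_cost_bits(bits):.0f} operations")  # roughly 2^117
    print(f"generic search ~ 2^{bits // 2} factor candidates")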

~~~
sokrates85
I agree that integer factorization is a poor choice. The last statement in the
abstract talks about sampling and optimization, which could be the bigger
point.

Their pre-processing seems like simply expanding out their cost function, which
is of the form E = (F - XY)^2. Of course that's a lot of multiplications, since
X and Y are binary and multi-dimensional. Not sure it would be NP-hard, though.
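
For what it's worth, here is a minimal sketch of how I read that cost function
(a toy reconstruction under my assumptions, not the authors' code): X and Y are
read off from vectors of binary variables, and E = (F - XY)^2 is zero exactly
at a valid factorization.

    # Toy reconstruction (assumed form, not the authors' code): energy of a
    # candidate factorization of F, with the factors encoded in binary.
    def energy(F, x_bits, y_bits):
        x = sum(b << i for i, b in enumerate(x_bits))
        y = sum(b << i for i, b in enumerate(y_bits))
        return (F - x * y) ** 2

    # Brute-force check on F = 35 with 3-bit factors: the only zero-energy
    # states are (X, Y) in {(5, 7), (7, 5)}.
    F, n = 35, 3
    bits = lambda v: [(v >> i) & 1 for i in range(n)]
    best = min((energy(F, bits(a), bits(b)), a, b)
               for a in range(2, 2 ** n) for b in range(2, 2 ** n))
    print(best)   # (0, 5, 7)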

------
signalsmith
I thought the whole magic of quantum computing was that instead of just having
individual bits in an uncertain/random state, you entangle a whole bunch of
bits together, meaning you can meaningfully talk about a probability
distribution over an 8-bit value that's different from 8 independent 1-bit
distributions.

Then, you perform operations where the value interacts with alternate values
for itself (i.e. the full wave-function) - a bit like the double-slit
experiment. For example, you can end up with a new 8-bit value where the
probability distribution is the Fourier transform of the previous one's
distribution.

So, if you can engineer the initial probability distribution to be
"interesting", you can then sample its Fourier transform - using only the 8
qubits, and without storing a 2^8-point distribution in an array. Scale that
up, and you could calculate a useful 2048-bit Fourier transform (or more
accurately: observe a random sample from the result) with a 2048-qubit system,
instead of a 2^2048-point array.

It's not obvious to me how stochastically-changing bits of state can get
anywhere close to self-interacting (double-slit-like) calculations.
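
For contrast, here is what the classical simulation of that picture looks like
(a minimal sketch, not anything from the paper): the joint state of 8 bits is a
2^8-point array, and the "Fourier transform of the distribution" is an FFT over
that whole array, which doubles in size with every extra bit.

    import numpy as np

    # Classical simulation of an 8-bit register whose amplitudes are periodic
    # with period r = 4 (offset 3). The quantum version would hold this in
    # 8 qubits; classically we need all 2^8 entries.
    n = 8
    amps = np.zeros(2 ** n, dtype=complex)
    amps[3::4] = 1 / np.sqrt(2 ** n // 4)          # uniform over x = 3 (mod 4)

    spectrum = np.fft.fft(amps) / np.sqrt(2 ** n)  # the "QFT" of the distribution
    probs = np.abs(spectrum) ** 2                  # what you'd sample afterwards
    print(np.flatnonzero(probs > 1e-9))            # [0 64 128 192]: multiples of 2^n / r

Sampling one of those peaks is how Shor-style period finding recovers the
period without ever writing down the full array.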

~~~
DennisP
They used 8 p-bits and could factor numbers up to 950, which took about 15
seconds. They say "it's possible that a larger number of p-bits will mean a
significantly larger sampling time." That seems likely to be a drastic
understatement.

~~~
aflag
Funny, that's about how long a human takes to factor 950 with pen and paper.

------
bo1024
This doesn't make sense to me (theoretical computer scientist). If I run the
code

    x = rand() % 2

Then mathematically, x is a p-bit. That is, it is in a classical
"superposition" having value 0 with probability 0.5 and having value 1 with
probability 0.5.

So you don't need any special hardware to do classical probabilistic
computations. You just need a source of entropy and a standard computer.

Now it might be that a different architecture than the standard von Neumann
processor+memory one could give a polynomial speedup for certain problems. And
that would be fine. But the focus of this Ars article as well as the Nature
letter abstract is on p-bits, which seems nonsensical to me.

(Added later) I see there is an emphasis on the bits fluctuating or evolving
over time, annealing I guess. We could of course simulate this on a standard
computer, but not as easily. Still I am reminded of Aaronson's soap bubbles...
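
To make that concrete, here is a minimal software sketch of what I understand
the "fluctuating, coupled p-bits" picture to be (my assumption, essentially
Gibbs sampling of a small Ising/Boltzmann-machine energy; the hardware would
just do this update natively and in parallel):

    import math
    import random

    # One asynchronous p-bit update: the flicker probability of a randomly
    # chosen bit is a sigmoid of its local bias plus input from its neighbours.
    def step(state, J, h, beta=1.0):
        i = random.randrange(len(state))
        field = h[i] + sum(J[i][j] * s for j, s in enumerate(state) if j != i)
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        state[i] = 1 if random.random() < p_up else -1
        return tuple(state)

    # Two p-bits with a positive coupling spend most of their time aligned.
    J = [[0.0, 1.0], [1.0, 0.0]]
    h = [0.0, 0.0]
    state = [random.choice([-1, 1]) for _ in range(2)]
    samples = [step(state, J, h) for _ in range(10000)]
    print(sum(a == b for a, b in samples) / len(samples))   # well above 0.5

Annealing would just raise beta (lower the temperature) over time so the
network settles into low-energy states.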

~~~
deepnotderp
If I understand correctly, the key difference is that the rand() function on a
Turing machine cannot generate _true_ random numbers (I think the relevant
notion is something like Martin-Löf randomness?).

~~~
bo1024
Pseudorandomness is related, but doesn't affect this issue. We can hook up a
physically random process to a computer for a source of "true" random bits,
then go from there.

~~~
sokrates85
x = rand() % 2 always gives an unbiased random bit; you can't tune or correlate
bits that way.

It needs to be a tunable random number, and technically it needs to be "true"
as mentioned.

But I don't think the authors would disagree with your larger point - their
point is to provide a polynomial speed-up over digital computers for a certain
class of problems. That this can be done with classical computers isn't exactly
a deep insight; it's rather obvious.
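
In software, a "tunable" p-bit is just a biased coin (a trivial sketch,
assuming the tuning knob is the probability of reading a 1; rand() % 2 is the
special case p = 0.5):

    import random

    # Hypothetical software p-bit: returns 1 with probability p, else 0.
    def p_bit(p):
        return 1 if random.random() < p else 0

The hardware version is interesting only insofar as it produces and couples
such bits faster or more cheaply than a CPU plus an entropy source.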

------
amelius
This seems fundamentally flawed, because you can sample the solution space at a
speed at most comparable to that of existing logic gates (and the interconnect
is the main bottleneck).

Not sure why anybody would make the effort of researching this idea, or even
why Ars would publish an article on it.

------
Ar-Curunir
What complexity class describes these computing devices?

~~~
fooker
[https://en.wikipedia.org/wiki/PP_(complexity)](https://en.wikipedia.org/wiki/PP_\(complexity\))

~~~
marris
PP implies that the computation runs for a polynomial number of steps before
producing its probabilistic answer. In the article's description, the
computation seems to be run "until the answer is separated from the noise."
And although they are hopeful, I don't think they are asserting at the moment
that this will happen after a polynomial number of steps. So PP would not
apply.

~~~
Khoth
If it happens in a polynomial number of steps on average, but you might get
unlucky,
[https://en.wikipedia.org/wiki/ZPP_(complexity)](https://en.wikipedia.org/wiki/ZPP_\(complexity\))

If it can just take as long as they want, then the probabilistic part is
basically irrelevant and you're probably looking at
[https://en.wikipedia.org/wiki/PSPACE](https://en.wikipedia.org/wiki/PSPACE)

In any case, I very much doubt they've made any real breakthrough in factoring
numbers.

------
deepnotderp
So I know nothing about probabilistic computing, but is my intuition about how
these work correct?

Essentially, a qubit can be in a superposition of 0 and 1, operations on that
qubit change the probabilities, and when you "pop it" to read it out you get
the output you want, depending on the program you set up.

By contrast, this "p-bit" holds some probability between 0 and 1, represented
by something physical - for example, a time-varying flicker between 0 and 1
weighted by that probability. Operations on the p-bit change the probability
depending on the program, and you read it out by sampling to see what its
output is supposed to be.

If so, wouldn't it suffer from the Monte Carlo integration issue? Namely, in
theory convergence is nearly independent of the number of dimensions, but in
practice, as the number of dimensions (here, the number of p-bits or qubits)
goes up, you get worse results?
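
If that reading is right, here is the readout in software form (a sketch under
my assumptions, not the paper's method): you estimate each p-bit's bias by
time-averaging its flicker, the estimate's error only shrinks like
1/sqrt(samples), and the number of joint configurations of n p-bits grows like
2^n.

    import random

    # Hypothetical readout: time-average a flickering p-bit to estimate its bias.
    def estimate_bias(p=0.3, samples=1000):
        return sum(random.random() < p for _ in range(samples)) / samples

    for t in (100, 10_000, 1_000_000):
        print(t, abs(estimate_bias(samples=t) - 0.3))   # error shrinks ~ 1/sqrt(t)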

------
FrozenVoid
It's going to be much cheaper because all the cooling, error correction and
other quantum-interface stuff is not needed - it's just a bunch of magnetic
memory cells. Let's see how it scales.

------
taneq
It seems to me that this is a kind of Monte Carlo implementation of fuzzy
logic.

