
Memcomputing NP-complete problems in polynomial time - cbennett
http://arxiv.org/abs/1411.4798
======
Animats
Their "memprocessor" is a set of standard analog computing elements - four
analog multipliers, two op amps, and a "controlled inverting differentiator".
General purpose analog computers are rare today, but you could set this up on
the patchboard of an analog computer from the 1950s or 1960s.

The idea that analog computers are in some sense more powerful than Turing-
complete digital computers comes back now and then. Here's a paper on that
from 1998:

[http://www.eetimes.com/document.asp?doc_id=1138111](http://www.eetimes.com/document.asp?doc_id=1138111)

"The neural network that represents the analog computer proves to be
inherently richer than the standard Turing model."

The idea of having variables which can take on an infinite range of real
values fascinates some people. But you can't, not really. Resolution is
limited by noise. Noise is inescapable, since electrons are discrete. Infinite
resolution is impossible in a granular universe. Real numbers are a convenient
fiction which cannot be realized in hardware. ("God created the integers; all
else is the work of man." - Kronecker)

The author of the paper almost gets this. They mention the Nyquist-Shannon
sampling theorem, so they know about Shannon. But they don't seem to be
considering the problems of the noise floor. They also write "the multimeter
of the hardware implementation analogically measures the integral over a
continuous time interval, directly providing the result, thus avoiding the
need of sampling the waveform and computing the integral". Amusingly, they're
using a digital multimeter.

~~~
mikeash
Just to expand: I think it's important to note that the resolution limits are
not only a theoretical problem with analog computers, but a practical problem
that strongly limits their power.

If your analog machinery can distinguish between a million different values in
practice, that sounds like a lot! But it's equivalent to only twenty bits of a
digital computer, and you can probably build the digital one much more easily,
and it will probably be a lot more reliable.

Maybe you're going to go completely nuts and build a machine that can
distinguish between a quadrillion different values. Much better, right? Well,
that's still only about 50 bits.

Let's go really nuts. Let's say you had a machine that could represent numbers
using the entire diameter of the observable universe (about 92 billion light
years) with a resolution of one Planck length (about 1.6e-35 m, way smaller
than any known particle or other object, and one of the smallest lengths
related to any known physics). Even that is not so impressive. That's
equivalent to about 205 bits of digital storage.
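
Just to make the arithmetic concrete, here's a quick sketch of my own (not
from the paper): the number of bits needed to distinguish N values is log2(N).

```python
import math

# Bits needed to distinguish N different values is log2(N).
print(math.log2(1e6))    # ~19.9 -> about 20 bits for a million values
print(math.log2(1e15))   # ~49.8 -> about 50 bits for a quadrillion values

# Observable universe diameter (~8.7e26 m) in Planck lengths (~1.6e-35 m).
universe_m = 8.7e26
planck_m = 1.6e-35
print(math.log2(universe_m / planck_m))  # ~205 bits
```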

~~~
Animats
_resolution limits are not only a theoretical problem with analog computers,
but a practical problem that strongly limits their power._

Um, yes. Distinguishing between 1000 signal levels is considered really good.
100 is more typical. There are 14-bit A/D converters, but they're really there
to give you some resolution on small signals and more dynamic range on signals
where noise goes up with level.

~~~
mikeash
Right. I know it sounds fairly obvious, but I have encountered real people who
truly didn't understand that a couple dozen bits gets you beyond any practical
ability to measure, and a couple hundred gets you even beyond theoretical
limitations on analog.

------
shoyer
I wrote Scott Aaronson about this and he showed me where he already addressed
this on his blog:

[http://www.scottaaronson.com/blog/?p=2053#comment-269770](http://www.scottaaronson.com/blog/?p=2053#comment-269770)

Note the response from one of the authors of this paper:

[http://www.scottaaronson.com/blog/?p=2053#comment-270906](http://www.scottaaronson.com/blog/?p=2053#comment-270906)

And Scott's response to that:

[http://www.scottaaronson.com/blog/?p=2053#comment-271488](http://www.scottaaronson.com/blog/?p=2053#comment-271488)

~~~
cbennett
Thanks for linking to this, Aaronson brings some good critiques (as did others
on this thread). I do however think it is a bit disrespectful to the authors
to use the word 'debunk'. Debunk implies a myth, a belief, or a completely
unsupported theory, such as creationism. In contradistinction, this is peer-
reviewed science, and while there may well be faulty assumptions behind their
work (or merely overstated significance), there may also be some underlying
validity and modest importance within the field of unconventional computing,
so let us not throw out the baby with the bathwater.

Also, just curious as a parallel example: Tononi and co had their day in the
sun on Aaronson's blog a few months back, and there was a vigorous back and
forth including a great response from Tononi; would you say that IIT has now
been 'debunked'? If so, how do you justify that? If not, why not wait for the
authors of UMM theory and its hardware extension to have a chance to reply?
This is how progress is made in science, deliberation and iteration, not
instant victories.

~~~
14113
This is not peer-reviewed science yet; it's a preprint on arXiv. Anyone (more
or less) can upload anything to arXiv.

~~~
cbennett
The paper is submitted but not yet accepted (it hasn't passed review), so in
this you're right, but I think it's a growing practice for reputable
scientists (as well as nobodies) to post to arXiv immediately after they've
submitted to a 'top' journal.

Anyways, here's the lead author's page, for the curious:
[http://physics.ucsd.edu/~diventra/](http://physics.ucsd.edu/~diventra/)

~~~
14113
I accept that arXiv material is generally eventually published, and serves as
a staging post for publishable material; however, I was objecting to the idea
that being on arXiv in any way constitutes peer review.

I note that the lead author is a Physicist, which makes it even more suspect
for me - Physics and Theoretical Computer Science share some of the same
toolkit, but the tricks and corner cases can be wildly different, meaning that
even if someone is a world leader in one field, they can be a complete novice
in another!

------
Strilanc
Uh huh. \derisive

I think this all comes down to the claim that their "memcomputing"
architecture allows you to put exponential amounts of data into a polynomial
number of things. My best guess for where they went wrong is something like
measuring an exponentially small voltage difference.

A good rule of thumb: if a proposed thing violates well-known bounds, like say
the Bekenstein bound [1], _and the authors don't mention this at all_, that
is not a good sign.

The paper would probably be fine if they took out everything implying that
UMMs can be physically instantiated, but I dunno.

1:
[http://en.wikipedia.org/wiki/Bekenstein_bound](http://en.wikipedia.org/wiki/Bekenstein_bound)

~~~
JacobEdelman
To me it looks like they have exponential growth in the number of connections
between the memory elements. This would account for having exponential storage
while still not violating any physical principles (also, it would not be
nearly as useful as they imply).

~~~
lomnakkus
That _would_ violate a physical principle (The Holographic Principle[1]),
namely that the amount of information in any given volume of space is limited
by the volume's _surface area_. At some point an exponentially growing number
of connections is going to overtake the surface area of the volume of space
occupied by the supposed physical device.

[1] I understand that this is largely theoretical, but it seems to be pretty
much accepted by the Physics community AFAICT.

------
jmount
At best this is using the fact that computation over the real numbers is very
powerful (and then pretending the electrical circuits at hand actually work
over the real numbers). The point is that none of our constructible devices
implement ideal real-number arithmetic; we only abstract them as nearly doing
so. Likely their "figure 1 decoder" needs more and more accuracy (and becomes
less and less possible to actually implement) as the problem size goes up.

Ideal analog computing is very powerful (in theory). Actual analog computing
is also powerful. But one of the things digital avoids is this: if each analog
component is only faithful to a factor of (1-epsilon), then it is easy to run
into problems where an n-stage analog system is off by (1-epsilon)^n ~ e^{-n
epsilon}, which is exponentially bad in n (meaning that if you assume
epsilon=0 you may be assuming away an exponential amount of loss).
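
A tiny numeric illustration of that compounding (my own sketch, nothing from
the paper): each stage keeps a fraction (1-epsilon) of the signal's fidelity,
and after n stages the compounded loss tracks e^{-n*epsilon}.

```python
import math

def compounded_fidelity(epsilon, n_stages):
    """Fidelity left after n analog stages, each faithful to (1 - epsilon)."""
    return (1.0 - epsilon) ** n_stages

epsilon = 0.01  # hypothetical 1% error per stage, purely for illustration
for n in (10, 100, 1000):
    exact = compounded_fidelity(epsilon, n)
    approx = math.exp(-n * epsilon)  # the e^{-n*epsilon} approximation
    print(f"n={n:4d}: exact={exact:.4e}, approx={approx:.4e}")
```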

Also, they are not ambitious enough. Assuming constant-time arithmetic over
the real numbers should get you more (maybe even the halting problem).

~~~
xxxyy
That was my impression too. Real numbers are in fact very surreal [1][2].
Using higher and higher frequencies is somewhat equivalent to saying: the
first step will take us 1 second, the second step will take us 0.5 seconds,
and in less than 2 seconds we will have executed anything you want. Although
I'm not quite sure I understand this paper.

[1]
[http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox](http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox)

[2] "God made the integers; all else is the work of man", Leopold Kronecker

~~~
jmount
Yah - I got the impression they are running white or pink noise through a
bunch of filters and looking to see if there is any spectrum remaining at the
perfect-balance condition that represents a successful bin packing. The idea
is that they thus exploit the assumed linearity (and accuracy) of ideal
circuits by claiming an independent calculation is happening at each frequency
(and they trigger the calculations from the noise source). Not implementable.

------
33a
I still don't fully understand what they mean by a "memcomputer" here, but if
you take a boring old real RAM computer + floor function you can even solve
all of PSPACE in polynomial time:

[http://dl.acm.org/citation.cfm?id=682381](http://dl.acm.org/citation.cfm?id=682381)

Regarding models like FPGAs, etc., these can all be simulated on boring old
Turing machines with at most polynomial time overhead, so that probably isn't
what they are talking about here. It seems like they have some mixed
analog/digital model of a computer, but the details are a bit opaque. It
wouldn't really surprise me if they were solving NP-hard problems, given that
regular old real arithmetic + thresholding/rounding can lead to some crazy
behaviors.

~~~
JacobEdelman
I'd agree except for this bit: "Indeed, a practical implementation of a UMM
can already be accomplished by using memelements such as memristors,
memcapacitors or meminductors, although the concept can be implemented with
any system with memory, whether passive or active. For instance, in this work
we have proposed a simple topologically-specific architecture that, if
realized in hardware, can solve the subset-sum problem in just one step with a
linear number of memprocessors." That seems pretty suspicious to me.

~~~
33a
That's true. It might be that under some theoretical assumptions their model
could actually solve an instance of an NP-hard problem, but it could also be
that the physical realization doesn't scale.

They report solutions for a few small cases of subset sum; I would be more
interested to see it run on something with, say, a few thousand variables.

(After all, similar claims have been made about other analog systems, like
soap bubbles, etc.)

~~~
halcy
If I understand correctly, that would be under the theoretical assumption that
you can perform constant-time computations on numbers that require infinitely
many bits (or at least, likely, exponentially many bits) to represent, which
is not an assumption you are generally allowed to make.

------
bglazer
From the conclusion of the paper:

> In conclusion we have demonstrated experimentally a deterministic
> memcomputing machine that is able to solve an NP-complete problem in
> polynomial time (actually in one step) using only polynomial resources. From
> complexity theory we then know that we are able to solve any other NP-
> complete problem in polynomial time. We stress again that this result does
> not prove NP=P, which should be proved only within the Turing paradigm.

I'm very confused by this statement. If their Universal Memcomputing Machine
is able to solve _any_ NP problem in polynomial time, what is the relevance
of whether NP=P in a Turing machine context?

Also, this seems like a very important result, but my skepticism is really
high.

~~~
Chinjut
They are guarding themselves against charges that they have claimed something
which they haven't (to have proven that NP = P in the standard Turing machine
context, and thus earnt the accolades which would accompany the resolution of
this longstanding mathematical problem).

~~~
bglazer
Ok, that makes sense. From a practical perspective, would this essentially
make Turing machines obsolete for solving NP problems?

Essentially, does this mean that NP=P in a non-Turing-machine context?

~~~
Sharlin
There are plenty of formalisms that can solve all of NP in polynomial time.
Problem is, they are all theoretical and there is no compelling reason to
believe any of them can be actually constructed given the laws of physics.

------
TTPrograms
The ability to solve NP-complete problems seems to be dependent on the concept
of information overhead. It's explained a bit in the theory paper here:
[http://arxiv.org/pdf/1405.0931.pdf](http://arxiv.org/pdf/1405.0931.pdf)

One concern I have is in section VI-A. The author refers to the ability to
read out the sum of the contents of a collection of memory elements. Since the
sums are totally determined by the other numbers, it doesn't seem that you can
count those bits as additional information. Maybe I'm missing something,
though. The bit after, on Exponential Information Overhead, seems more robust.

~~~
JacobEdelman
Without Exponential Information Overhead I don't think any of it really works.
I mean, that's the part that seems hard to believe. I think they get the sums
based on each element being able to store data in relation to other elements,
but I don't get how that can physically work without effectively adding more
elements.

------
al2o3cr
Given the hassles with getting electronic systems to respond cleanly all the
way down to DC, I'd say this sounds more like a hardware implementation of the
known polynomial-time approximate method:

[http://en.wikipedia.org/wiki/Subset_sum_problem#Polynomial_t...](http://en.wikipedia.org/wiki/Subset_sum_problem#Polynomial_time_approximate_algorithm)
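
For reference, here's a rough sketch of that kind of approximation scheme (the
standard trim-the-sums approach described in the linked article, not anything
taken from the memcomputing paper): keep a list of achievable subset sums and,
after folding in each element, drop sums that are within a small relative
factor of one another so the list stays polynomially sized.

```python
def approx_subset_sum(xs, target, eps=0.01):
    """Largest achievable subset sum <= target, within a relative error of
    roughly eps. A sketch of the standard trimming-based approximation."""
    sums = [0]
    for x in xs:
        # Merge the old sums with the old sums shifted by x, discarding
        # anything that overshoots the target.
        merged = sorted(set(sums + [s + x for s in sums if s + x <= target]))
        # Trim: keep a value only if it is meaningfully larger than the
        # previously kept one.
        trimmed, last = [], -1.0
        for s in merged:
            if s > last * (1 + eps / (2 * len(xs))):
                trimmed.append(s)
                last = s
        sums = trimmed
    return max(sums)

# Hypothetical example: best subset sum of these values not exceeding 100.
print(approx_subset_sum([31, 27, 46, 12, 9], 100))  # prints 98 (31+46+12+9)
```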

------
Aardwolf
From the paper:

Sentence A: "unlike the latter, UMMs are fully deterministic machines and, as
such, they can actually be fabricated"

Sentence B: " no experimental realization of such a machine, (...), has ever
been reported"

Do you think it might be possible, given sentence A? Such a machine would own
quantum computing, wouldn't it?

------
JacobEdelman
Scott Aaronson looks at possible ways to solve NP-Complete problems using
physics and argues that it isn't really possible. Link:
[http://www.scottaaronson.com/papers/npcomplete.pdf](http://www.scottaaronson.com/papers/npcomplete.pdf)

------
rep_movsd
Either I'm a complete idiot or this is bullshit - just an analog computer. Are
they simply confused by the fact that analog voltages can theoretically hold
infinite information because they are real numbers? That is actually false,
because at some level everything is quantized, and continuity went out in the
1900s.

As far as I know, you cannot have non-Turing machines - quantum computing
notwithstanding.

It seems like common sense. To solve a problem, you need to do something, and
then something else. Sometimes if something is something you do something,
else you do something else.

That's how the world works and that's all anything can do and that's how a
Turing machine works, and there is no other way.

------
acjohnson55
For a grad class on future computing paradigms, I wrote a paper on computing
with FPGAs that are capable of reflashing themselves every cycle, allowing a
system to exchange memory for logic on a fairly arbitrary basis. As a total
non-expert, it seemed to me like there was a lot of potential there. I wonder
if this is a similar concept, or something totally different.

In any case this sounds truly revolutionary.

------
JacobEdelman
We don't know if we can build UMMs. This does seem to strongly contradict the
widely held idea that no computational system in the physical world will be
able to beat a Turing machine exponentially. In a way, UMMs right now are just
saying "what if we can do this special computing thing" and proving that, if
we can, then P=NP. It only becomes surprising if we can actually build them.

------
JacobEdelman
In many areas this paper seems to be overstating its findings. They imply
heavily that memcomputing must exist, and they talk about the human brain and
neurology in a manner that really has no place in this kind of paper.

------
mjfl
Not sure what to make of this. Is this as important as it sounds? Does this
mean cryptography is broken?

~~~
JacobEdelman
Only if it can actually be made. It seems unlikely that it could be.

------
ikeboy
... So have they released all the
[http://en.wikipedia.org/wiki/RSA_numbers](http://en.wikipedia.org/wiki/RSA_numbers)
yet? If not, it's a bunch of bs.

~~~
DCKing
If you had looked at two more Wikipedia articles, you would have known the
following:

Integer factorization is likely not an NP-complete problem at all. It is
suspected (but not proven) _not_ to be in the class of NP-complete problems.
There is no reason to believe that an NP-complete problem solver can factorize
integers in polynomial time.

Contrary to this machine, it has been shown that quantum computers can solve
the integer factorization problem in polynomial time through Shor's algorithm.
But quantum computers are not known to be able to solve NP-complete problems
in polynomial time.

Even quantum computers are far from factoring the RSA numbers because of the
practicalities of the real world. The current world-record Shor's algorithm
computation is factoring 15 = 3 x 5.

~~~
teraflop
> There is no reason to believe that an NP-complete problem solver can
> factorize integers in polynomial time.

You're misunderstanding the relationship between NP and NP-complete.

Integer factorization is in NP, which by definition means that anything that
can solve NP-complete problems efficiently can also solve factorization. The
question that's currently unresolved is whether factorization is _easier_ than
NP-complete, not harder.

~~~
DCKing
Ah, my apologies. It's been some time since my courses on this.

