

Non-Universality in Computation: The Myth of the Universal Computer - ewakened
http://research.cs.queensu.ca/Parallel/projects.html#Current

======
cperciva
_Alan Turing was wrong._

Correction: Alan Turing was right, but if we redefine "universal computer" in
sufficiently weird ways, Turing's results no longer apply.

In particular, while Turing was concerned with _machines computing functions_
, Akl is looking at _machines interacting with a changing environment_.

~~~
jgrahamc
Thank you for that reply. I started reading his page and it was getting
stupid. He almost deliberately seems to not want to understand the definition
of a Universal Computer (by which he must mean a Universal Turing Machine).

It would surprise me if his example functions were actually uncomputable. In his
paper on the subject he comes up with a scenario in which a machine needs to
compute a function of n variables that vary with time. He then places the
restriction that reading each variable takes one unit of time so that once
you've read the first one the value of the second one has changed and so you
can't do the computation.

Then he goes on to say that if he supposes a computer capable of doing n
computations at once he can get round the restriction he's imposed because
he's able to read all the variables at the same time.

How is this supposed to make me think that Turing was wrong?

Having worked on software/hardware interactions quite a bit you actually see
this sort of thing happen. Lots of data comes in on different ports and you
need to read it all at the same time. This is solved by latching the data into
a buffer which the CPU can read at its leisure.

But the fact remains that the function you are computing is actually
computable (he gives the example of summing the data). He's just produced an
artificial restriction on the computer being incapable of reading the data
fast enough. His n-processors is a complicated way of solving what we solve
with buffers.
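
A minimal sketch of that latch-then-compute pattern (the `ports` generators are made-up stand-ins for real hardware):

```python
import itertools

# Hypothetical time-varying inputs: each "port" produces a new value every tick.
ports = [itertools.count(start=i, step=i + 1) for i in range(4)]

def latch(ports):
    """Model of a hardware latch: snapshot every port in one logical instant."""
    return [next(p) for p in ports]

# The CPU then computes over the frozen snapshot at its leisure.
snapshot = latch(ports)
total = sum(snapshot)  # the function is perfectly computable on the snapshot
```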

And also, who said there's a lower bound on the per-instruction speed of a
Turing Machine? Sounds like this guy just needs a faster machine. After all, a
Turing Machine is an abstract idea, so let's just redefine its operating
speed by a factor of n and his function becomes computable.

~~~
neilc
_How is this supposed to make me think that Turing was wrong?_

Akl would certainly not claim that "Turing was wrong" -- the submitted title
is obvious flamebait.

 _He's just produced an artificial restriction on the computer being incapable
of reading the data fast enough._

Well, he is assuming a different model of computation. Whether it is
"artificial" or not is debatable: in an actual physical system, you can't
pause the world, fix all the inputs, and then compute for an arbitrary length
of time.

In any case, this is theoretical work: investigating the nature of computation
under a different set of assumptions is perfectly valid, and I think it's very
interesting.

 _let's just redefine its operating speed by a factor of n and his function
becomes computable._

For a given function, sure. But unless your machine can perform an infinite
number of computations per time step, no machine will be fast enough to
compute all possible functions, which is the point.

(BTW, I took some classes with Akl as an undergrad. He's a very sharp guy, and
a great professor -- and definitely not a crackpot.)

~~~
antipax
>>> For a given function, sure. But unless your machine can perform an
infinite number of computations per time step, no machine will be fast enough
to compute all possible functions, which is the point.

I mentioned this somewhere further down in the comments, but a machine that
could perform an infinite number of computations would break Turing's proof of
the undecidability of the halting problem over Turing machines. That probably
furthers Akl's claim.

I most definitely think this is interesting in that it is a new set of
assumptions about what a computer is and what it should be capable of doing.
All we have to do now is define a new (probably recursively-defined) automaton
capable of infinite growth in its possible inputs.

Then again, if a computer had an infinite amount of inputs, that raises even
more interesting questions about what problems possibly then become decidable.

------
almost
This appears to me to be a little (or in fact a lot) crackpotish.

A universal machine capable of n operations per time unit can simulate a
universal machine capable of n+m operations per time unit. It's just a bit
slower.

His argument seems to be that this will make it so slow that it won't be
able to keep up. Which is just stupid.

The Church-Turing thesis says that a universal machine can simulate any other
computing machine given infinite memory. It doesn't say anything about how
fast it will be :p

Maybe I'm misunderstanding this guy and he's not saying something so
completely and utterly stupid. Anyone who wants to correct me please do...

~~~
jmatt
He seems to be misunderstood. Which is no surprise since he's into
unconventional computation - whatever that is[1]. He's well published[2] and
has chaired at least one conference on unconventional computing[3]. So that's
more than I can say about my career in theoretical computer science. I suspect
having unconventional thoughts alone will create some dissonance in related
communities. Similar to what Doron Zeilberger gets for his beliefs in
Ultrafinitism.

I agree that Akl is talking about a very specific circumstance of UTMs. How
many thousands of Mathematicians and Computer Scientists have read that paper
in the last 60 years and verified it? If there were ever going to be any true
challenge to UTMs it had to be obscure and weird like this. That doesn't mean
that I believe he's right, but I don't think he's a crackpot just
unconventional.

[1] <http://en.wikipedia.org/wiki/Unconventional_computing>

[2]
[http://research.cs.queensu.ca/Parallel//publications.html#Pu...](http://research.cs.queensu.ca/Parallel//publications.html#Publications)

[3] <http://en.wikipedia.org/wiki/Selim_Akl#Conferences>

~~~
almost
You're right, I probably shouldn't be so quick to call someone a crackpot.

But still, unless I'm very much mistaken, all he's done is expand the
definition of "able to compute a function" to include "able to run fast
enough to gather the input data for a function". That might be a useful
definition for some things, but to use it to claim that "Turing was wrong"?

------
shalmanese
Turing machines are constructed in a closed, static universe. This guy says
that if you need to deal with a source of changing data, Turing's results
don't apply. This is both obvious and important to state, although not with
as much verbiage.

~~~
jerf
Expanding on that, it should also be pointed out that this is well-known as
being among the list of assumptions that prevent Turing Machines from working
in the real world. We speak of "steps", but they do not have any actual
relationship to "time". The tape has no obvious relationship to "space",
either.

So, right off in the first sentence "I have recently shown that the concept of
a Universal Computer cannot be realized.", I thought "well, it's hardly like
that's a _challenge_..."

There's actually some interesting theory here, I think, but it's only obscured
by being wrapped in rhetoric about Church-Turing being wrong and such. All
mathematical theorems include a set of assumptions, and it's not news that if
you change the truth of the assumptions the theorems may stop holding.

------
trjordan
Look at it this way: imagine you have a server that's totally maxed out (CPU-
bound), 24 hours a day. If you add just one more connection to its load, it
won't be able to handle it. No matter how long you give it, it'll just fall
further and further behind. The requested pages are uncomputable! A faster
computer, on the other hand, will keep up just fine, and finish all the
computations.

This isn't how computations have been traditionally defined, so that's why it
seems so unintuitive. But, he's right nonetheless.
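
In toy arithmetic (the numbers are made up), the falling-behind argument looks like:

```python
def backlog(arrivals_per_hr, capacity_per_hr, hours):
    """Requests left unserved after `hours` on a saturated server."""
    return max(0, arrivals_per_hr - capacity_per_hr) * hours

# One request/hour over capacity: the queue grows without bound.
slow = backlog(351, 350, 24)   # behind by 24 requests after a day
# A faster server absorbs the same load with room to spare.
fast = backlog(351, 400, 24)   # behind by 0 requests
```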

~~~
tybris
The server will get further and further behind, but you can not give it work
it will never finish.

~~~
trjordan
Sure, you can. The work is "handle 350 requests per hour".

~~~
dhs
Turing's computer was an idealized human - to him, computers were humans who
did computations for a living - who were granted infinite time and tape -
that's the idealization - to solve exactly one problem. Anything that talks
about "x problems per hour" has nothing to do with what he said.

------
amalcon
Suppose that my "computing device" were a machine designed to build and
program more copies of itself. This device would "compute" a function across n
variables in time log n by building n-1 copies of itself, and then
parallelizing the computation across them. With sufficient resources, this
machine could compute the function for any n in finite time.

All the author has done is posit an ever-increasing volume of data, such that
the rate at which the volume increases causes it to outpace the speed of the
computation. It seems that positing an ever-increasing computational capacity
is a reasonable solution to this problem.
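
The log n figure corresponds to a pairwise tree reduction, assuming every addition in a level runs simultaneously on its own copy of the machine (a sketch of the idea, not the author's construction):

```python
def tree_sum(xs):
    """Pairwise tree reduction: each while-iteration is one 'parallel step',
    so summing n values takes about ceil(log2(n)) steps if every addition
    in a level happens at the same time on a separate copy."""
    steps = 0
    while len(xs) > 1:
        xs = [sum(xs[i:i + 2]) for i in range(0, len(xs), 2)]
        steps += 1
    return xs[0], steps

total, steps = tree_sum(list(range(8)))  # 8 values -> 3 parallel steps
```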

~~~
psb217
Your time complexity of log n appears from nowhere. It is entirely possible
that the complexity of calculating the original function F was worse than
polynomial in the number of variables, in which case a polynomial/linear
increase in the number of machines could not reduce the complexity by more
than a polynomial/linear factor, thus leaving the computational complexity
worse than polynomial (e.g. exponential/super-exponential). I've made a
comment further down that posits an idea similar to yours, though with a bit
more precision.

------
antipax
Since when was there a minimum amount of time for a theoretical computer to do
any number of computations?

I suppose that any computer must take some finite, non-zero amount of time to
perform an "instruction", because if the amount of time per instruction is
zero, that would violate Turing's proof of the undecidability of the halting
problem over Turing machines. His idea does have some merit, but I still don't
think that this means there cannot be a Universal Computer -- just that it may
be something that operates on a level above a Turing machine. I'd imagine it
would involve some sort of recursive structure.

------
psb217
The way I see it, you could program a Turing Machine MF which has some finite
"per-time-step" computation speed _n_, such that MF "knows" the following:

1: The general form/program for solving function F for any arbitrary number,
_m_, of variables.

2: How to construct a Turing Machine MF' capable of performing _m_ "operations
per-time-step", and programmed with 1.

Even in his crazy-town world of redefined computation, the Turing Machine MF
is capable of computing any computable function F, of any number of time-
varying variables _m_. I.e. given such a function F, MF simply creates a
suitable machine MF' and copies the output of MF' to its own output. Thus, MF
computes F, albeit by proxy. QED, etc.
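
A sketch of that compute-by-proxy construction in plain code (the names are hypothetical, and the hard part, actually building a wider machine, is abstracted into a closure):

```python
def build_wide_machine(m, f):
    """MF's step 2: construct a machine MF' that reads all m inputs in a
    single logical step, then applies the program for F (step 1)."""
    def mf_prime(read_all_at_once):
        xs = read_all_at_once()      # the m-wide simultaneous read
        assert len(xs) == m
        return f(xs)
    return mf_prime

# MF computes F by proxy: build MF' for the given m, then copy its output.
mf_prime = build_wide_machine(3, sum)
result = mf_prime(lambda: [1, 2, 3])
```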

It's possible that I'm now in some TM dual to his time-varying primal, but
whatevs, I can do what I want.

ADD: As far as the computability results that I am familiar with are
concerned, without loss of generality one could also just assume that all
Turing Machines have infinite speed. The necessity of introducing the concept
of a "time-step" into the definition of Turing Machines only occurs when
computation time is actually of interest. Though, for complexity results one
could still assume infinite speed, albeit the number of operations would still
need to be counted.

------
bcf
After reading Mr. Akl's website, I'm reminded of this recent HN post:
[http://trc.ucdavis.edu/bajaffee/NEM150/Course%20Content/danc...](http://trc.ucdavis.edu/bajaffee/NEM150/Course%20Content/dancing.htm)

Also, I would be interested to hear what Stephen Wolfram has to say about all
of this (<http://www.wolframscience.com/prizes/tm23/index.html>).

------
joshu
<http://math.ucr.edu/home/baez/crackpot.html>

Do we need a CS equivalent?

------
codeodor
How does one go about showing that there is no way to solve such a problem?

What about the argument that perhaps the author was not clever enough to
devise an algorithm which solves the problem he stated?

Perhaps when I read the papers I will be enlightened?

~~~
cperciva
_How does one go about showing that there is no way to solve such a problem?_

With great difficulty. Usually the approach is to suppose that you have an
algorithm and show that either (a) this algorithm allows you to do something
impossible (e.g., solve a different impossible problem, or reach a
contradiction); or (b) there are two different problems with different answers
which the specified algorithm cannot distinguish between (this is generally
only possible where you're proving that it's impossible to compute something
in less than some number of steps).

The first approach is used for things like showing that the halting problem is
impossible; the second approach is used for things like showing that it's
impossible to have a comparison sort which runs in less than O(N log N) time.
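
Approach (b) can be made concrete for the sorting bound: a comparison sort must distinguish all n! input orderings, and each comparison yields at most one bit of information, so at least log2(n!) comparisons are needed. A quick check of that arithmetic:

```python
from math import ceil, factorial, log2

def comparison_lower_bound(n):
    """Worst-case minimum comparisons for any comparison sort:
    n! possible orderings, one bit learned per comparison."""
    return ceil(log2(factorial(n)))

bound = comparison_lower_bound(5)  # 120 orderings need at least 7 comparisons
```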

~~~
codeodor
Actually, now that you mention it I can recall these. Thanks for the
refresher!

------
ggchappell
Folks, the only problem with this article is the HN headline. Akl never claims
Turing was wrong.

And I think he is demonstrating something rather interesting: that time
dependence has a significant effect on the power of a computing device.

------
stonemetal
Isn't this more or less the same issue as multiple infinities? Whole numbers
are infinite but countable, but rationals are infinite and not countable. UTMs
are able to compute the infinite but there are infinities that are not
computable.

~~~
banned_man
Rationals are countable but dense. You can biject Z * Z <-> N trivially (start
from the origin, spiral out), and Q injects into Z * Z as numerator/denominator
pairs. Reals are uncountable.
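
That spiral enumeration can be written out directly (one of many possible orderings):

```python
from itertools import islice

def spiral_pairs():
    """Enumerate Z x Z ring by ring: all pairs with max(|a|, |b|) == r,
    for r = 0, 1, 2, ... Every ring is finite, so each pair receives a
    finite index -- an explicit listing against N."""
    r = 0
    while True:
        ring = sorted({(a, b)
                       for a in range(-r, r + 1)
                       for b in range(-r, r + 1)
                       if max(abs(a), abs(b)) == r})
        for pair in ring:
            yield pair
        r += 1

first_nine = list(islice(spiral_pairs(), 9))  # ring 0 (1 pair) + ring 1 (8 pairs)
```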

------
forinti
I think Peter Wegner made this same point in the paper "Why interaction is
more powerful than algorithms" in CACM (Volume 40, Issue 5, May 1997).

~~~
dhs
Yes, interactive computation is sometimes provably more powerful than
algorithmic computation. It's also "less powerful" at other times, and since
it's not computable _at which times_ interaction is "better", it's not a
general advantage. But Turing never considered the case of interaction in his
definition of computation, so Wegner is talking about something else.

~~~
forinti
Turing did in fact consider this possibility, and he instinctively knew that a
machine that allowed interactions would be more powerful, but he couldn't
prove it.

~~~
dhs
Just to make sure, I searched for "interaction" in "On computable numbers...",
and it's not in there. And, like I said, in the general case, interaction is
"more powerful" than algorithm as often as it is "less powerful".

Edit - Example: You're on "'Who Wants to Be a Millionaire?" You're in
"algorithm mode" for most of the questions, but you're granted one
"interaction", via a phone call. On the show, you're allowed to call anybody
you want, and if you know to call somebody who's really clever, chances are
that interaction will be more powerful than computation. But that's not the
general case: the general case would be for you to call a random phone number,
and chances are that won't help you - so, in that case, not more powerful.

~~~
forinti
Please look up <http://en.wikipedia.org/wiki/Oracle_machine>.

The purpose of a machine with interactions is not to be more intelligent, it
is just to be able to execute a larger body of computations than a machine
without interactions would. You can't model operating systems on Turing
machines, for example, precisely because they need to interact.

This is from the Wikipedia entry on Alan Turing:

"In June 1938 he obtained his Ph.D. from Princeton; his dissertation
introduced the notion of relative computing, where Turing machines are
augmented with so-called oracles, allowing a study of problems that cannot be
solved by a Turing machine."

~~~
dhs
Right, oracle machines are - or can be interpreted as - interactive. However,
whatever it is that oracles do - it's often called "hypercomputation" -
they're not doing computation, in the sense that Turing used the term in "On
computable numbers...", which is why Turing couldn't prove that they are more
powerful. The apparent - to me - problem here is that people use a definition
of "computation" which is different from Turing's (which is alright), and then
say that Turing was "wrong" in what he said about "computation" (which is not
alright).

I actually think that operating systems are a good example of how interaction
can be less powerful than algorithm - the "oracle" in this case might be a
user, and the user might be new to computers, so that in this case the result
of the interaction might be worse than if the "oracle" were another (non-
human) computer, the interaction with which might in turn give a worse result
than if _you_ were the "oracle".

~~~
forinti
The object of this study is not to prove that all forms of interaction are
better than algorithmic computing, it is to prove that machines that interact
during computation can execute a larger set of computations.

You also insist on factoring in intelligence, which is not at all relevant to
this discussion. This is about which sets of functions can be executed with
which machines.

Oracle machines are capable of executing all functions that a Turing machine
can (if you simply shut off interaction) and also all those that involve
interaction. Since it has been proved that the latter set of functions is
larger than the first, we can conclude that oracle machines are more powerful.

------
varjag
Stopped reading at mention of "physical variable".

------
milkmandan
He basically says that the UTM cannot run a TM in real time, so that makes it
non-universal. It's pretty inane.

------
smoofra
this is crackpottery

