
Hypercomputation: Computing more than the Turing machine (2002) - ColinWright
https://arxiv.org/abs/math/0209332
======
z1mm32m4n
Maybe I'm biased, but I generally view posts about hypercomputation with
skepticism. From _The Myth of Hypercomputation_ [1]:

> _Under the banner of "hypercomputation" various claims are being made for
> the feasibility of modes of computation that go beyond what is permitted by
> Turing computability. In this article it will be shown that such claims fly
> in the face of the inability of all currently accepted physical theories to
> deal with infinite precision real numbers. When the claims are viewed
> critically, it is seen that they amount to little more than the obvious
> comment that if non-computable inputs are permitted, then non-computable
> outputs are attainable._

The original idea in Turing's thesis was that the Turing machine was
something _that could actually be implemented_, and indeed you can find many
examples of people's side projects where such machines have been built and
actually compute things.

On the other hand, hypercomputation has no physical basis. So what's the point
of using it to model anything real? It's next to useless.

[1]:
[http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf](http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf)

~~~
setra
Your critique of hypercomputation applies just as well to Turing machines. A
Turing machine can never be implemented; it is a purely theoretical model. At
best we can implement a weaker class of machine called a "linear bounded
Turing machine". For these machines the halting problem is solvable (it can
just take a while). You could even implement a linear bounded Turing machine
as an FSM; it would just take an ungodly number of states.

~~~
frankmcsherry
> You could even implement a linear bounded Turing machine as an FSM; it would
> just take an ungodly number of states.

To be clear, these things have formal definitions, and this statement is not
correct.

1\. A linear bounded automaton[0] is a Turing machine that may only overwrite
the tape cells occupied by its input. The definition of the automaton is still
required to be finite, but it must operate on unboundedly large input tapes.

2\. A finite state machine[1] is a model of computation where the sequence of
input symbols is observed once and the machine is at all times in one of a
finite number of states. It is equivalent to a TM that can only move right
(and consequently cannot read anything it writes to the tape).

They are different, formally[2]. There are languages that an LBA can accept
that an FSM cannot accept (famously, the strings of balanced parentheses
cannot be recognized by any FSM).
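To make the point concrete, here is a minimal Python sketch (mine, not from the thread): a balanced-parentheses recognizer needs a counter that can grow without bound, which is exactly what a finite set of states cannot provide.

```python
def balanced(s):
    """Recognize balanced parentheses using an unbounded counter.

    The counter can grow without limit, which is precisely what a
    finite-state machine cannot simulate for all inputs.
    """
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # a ")" with no matching "("
                return False
        else:
            return False         # reject any other symbol
    return depth == 0

# Any FSM with k states must confuse two of the k+1 prefixes
# "(", "((", "(((", ..., so it answers wrongly on some balanced
# string of depth greater than k -- the pumping-lemma argument in [2].
```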

[0]:
[https://en.wikipedia.org/wiki/Linear_bounded_automaton](https://en.wikipedia.org/wiki/Linear_bounded_automaton)

[1]: [https://en.wikipedia.org/wiki/Finite-
state_machine](https://en.wikipedia.org/wiki/Finite-state_machine)

[2]:
[https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...](https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages)

~~~
setra
This is correct from the unbounded-theoretical-tape perspective; I should have
expanded my point, and you are quite right. If you look at it in the context
of a physical (finite) realization, a tape that can be written is some form of
RAM, and beyond O(1) complexity all of these machines can express each other
given different amounts of RAM. Even in the theoretical model, an FSM can
accept balanced parentheses of finite depth. My FSM point is basically wrong,
though.
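The finite-depth version really is an FSM: one state per depth level up to the bound, plus a reject state. A hypothetical sketch of that construction:

```python
def make_depth_bounded_fsm(k):
    """Build an FSM that accepts balanced parentheses of nesting
    depth at most k: states 0..k track the current depth, and -1
    is a dead/reject state. Only k+2 states in total."""
    def accepts(s):
        state = 0
        for ch in s:
            if state == -1:
                break
            if ch == "(":
                state = state + 1 if state < k else -1  # depth overflow
            elif ch == ")":
                state = state - 1 if state > 0 else -1  # unmatched ")"
            else:
                state = -1                              # bad symbol
        return state == 0
    return accepts

fsm3 = make_depth_bounded_fsm(3)  # handles depth <= 3, fails beyond
```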

------
mcguire
Way back, years ago, A.K. Dewdney published one of his computing recreations
columns in _Scientific American_ where he discussed models of computation
stronger than Turing machines/lambda calculus/recursive functions/etc., etc.
His contribution was a model that solved an NP-hard (?) problem in linear
time: it passed the input to two sub-processors, which either solved the
problem or each passed it on to two sub-sub-processors. Since each
sub-processor was half the size of its parent and required time proportional
to its size due to wire delay (?), the total running time formed a convergent
geometric series.

Anyway, the bottom line was that it did an unbounded amount of work in a
bounded amount of time.

I stopped paying attention to Dewdney after that.
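Under those assumptions the arithmetic is just a geometric series: if the top processor takes time T and each level below runs in half the time, total wall-clock time converges to 2T while the processor count (and the work) is unbounded. A quick check of the partial sums:

```python
# Level k takes time T / 2**k; summed over all levels this
# converges to 2*T, while the number of processors 2**k diverges.
T = 1.0
levels = 50
total_time = sum(T / 2**k for k in range(levels))   # approaches 2.0
processors_at_bottom = 2**(levels - 1)              # grows without bound
```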

~~~
gavinpc
Zeno's paradox?

~~~
mcguire
Yup.

~~~
BooglyWoo
Also sounds a bit like Thomson's lamp.

EDIT: Whoops, that's pretty much the same thing as Zeno's paradox.

------
idlewords
In a nutshell: you can get a lot of work done if you have infinite time or
infinite memory.

The implementation of either is left as an exercise for the reader.

------
trishankkarthik
I conjecture that hypercomputation is impossible for the following simple
reason: if you could solve the Halting Problem [1], then you could build a
"living" contradiction, a machine that halts if and only if it does not halt
(see the proof in [2]). It wouldn't be a mere description of a paradox; it
would be an actual one. If contradictions are possible (something we
implicitly believe to be impossible), then Reality is far stranger than we
ever suspected or observed.
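The construction in [2] can be sketched in code (names here are illustrative, not from the proof): given any claimed halting tester, build a program that does the opposite of whatever the tester predicts about it, so the tester must be wrong somewhere.

```python
def spite(halts):
    """Given a claimed halting tester `halts`, build a program that
    defeats it: it halts exactly when the tester says it doesn't."""
    def g():
        if halts(g):
            while True:       # tester said "halts" -> loop forever
                pass
        return "halted"       # tester said "loops" -> halt immediately
    return g

# Any concrete tester is wrong about its own spite program.
# For example, a tester that always answers "loops":
always_loops = lambda f: False
g = spite(always_loops)
result = g()   # halts, contradicting the tester's prediction
```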

[1]
[https://en.wikipedia.org/wiki/Halting_problem](https://en.wikipedia.org/wiki/Halting_problem)

[2] [https://www.amazon.com/Introduction-Theory-Computation-
Micha...](https://www.amazon.com/Introduction-Theory-Computation-Michael-
Sipser/dp/113318779X)

~~~
eighthnate
That's the accepted theory for sure. It's what every CS student learns.

But never say never. "Reality/logic/math" can be a lot stranger than you
think.

There used to be a time when we thought that infinity (aka countable infinity)
was the largest number. Turns out there are bigger infinities.

And of course quantum mechanics. The duality of light (both a wave and a
particle).

~~~
trishankkarthik
_But never say never. "Reality/logic/math" can be a lot stranger than you
think._

I understand it's a Black Swan kind of problem, but I don't think we live in
that reality. Please show me a contradiction. The ramifications would be
stupendous, to say the least.

 _There used to be a time when we thought that infinity (aka countable
infinity) was the largest number. Turns out there are bigger infinities._

Yes, and we know how to build Turing machines that use countable infinity.
Please show me a machine that actually uses bigger infinities. Please
hypercompute the digits of a real number that cannot be computed by a Turing
machine.

 _And of course quantum mechanics. The duality of light (both a wave and a
particle)._

Universal quantum computers are faster than classical computers, but they are
no more powerful (they are not known to be able to hypercompute).

~~~
eighthnate
Why are you getting defensive? I agree with you, more or less. All I said was
that reality can be stranger than we initially believed possible, and I showed
you examples from other fields (math, physics) where "reality" got turned
upside down.

Computer science is a fairly new field. It's within the realm of possibility
that its reality can be turned upside down too. I'm not saying it is or will
be, but it can be. Okay?

~~~
trishankkarthik
I'm not getting defensive, my friend. I was merely challenging you to think
through the logical implications of hypercomputation. What many people miss is
this: hypercomputation leads to living paradoxes. They might then have to bend
over backwards to explain why these paradoxes are not possible, but that
leaves the question of why some things are hypercomputable, and some are not.

~~~
eighthnate
I wasn't even talking about hypercomputation specifically. I was talking more
in generalities about computer science and things we hold to be
"reality/absolutes".

------
davesque
I've often thought that results in computer science about the performance
characteristics of classes of algorithms are also, in a sense, making
statements about physics. In other words, in a universe where bits of
information can be sent instantaneously through worm holes, it may not be
correct to say that there's no better comparison-based sort bound than O(n log
n). I'm not an expert at physics or computational complexity so maybe my
intuition is way off here. But I wonder if, when I hear about innovations in
"hypercomputation", I should understand the researchers to be making claims
which are essentially as likely to pan out as a claim of having sent bits
through wormholes.

~~~
mikeash
Results about computational complexity are always relative to some specific
model of computation. For example, a standard Turing machine cannot sort in
O(n log n) time. Because it has to walk along the tape one position at a time,
it takes longer. When the big-O of an algorithm is given without mention of
the machine model, it typically refers to a machine with random-access memory,
since that's approximately what real computers have.

So yes, physical possibility is definitely a big factor here, even if it's not
always stated explicitly. If it were impossible to build practical random-
access memory, then that wouldn't be the standard model used for computational
complexity.

Consider the big question of whether P=NP. The question itself is unrelated to
physics, since P is defined as polynomial-time algorithms on deterministic
Turing machines, and NP as polynomial-time on nondeterministic Turing
machines. But the question gets a lot of attention in part because it's
considered to be impossible to actually build a nondeterministic Turing
machine. If that were untrue, then we could solve problems in NP in polynomial
time even if P≠NP.
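That gap is easy to see in code: a deterministic simulation has to try the nondeterministic "guesses" one after another, which for an NP problem like SAT means exponential time in the worst case. A brute-force sketch (my example, not from the comment):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Deterministically simulate the NTM's guess by enumerating all
    2**n assignments. A literal is an int: +i for x_i, -i for not x_i.
    An NTM would 'guess' a satisfying assignment in polynomial time;
    here we pay for every wrong guess."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x2) is satisfied by x2 = True
sat = brute_force_sat([[1, 2], [-1, 2]], 2)
```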

In theory, you could build something like a nondeterministic Turing machine
(without infinite memory, but close enough, much like we consider real-world
computers to be close enough to normal Turing machines) if you could, say,
send information back in time, or if you could split the universe into
arbitrarily many parallel universes, then destroy all but one. Obviously, nobody
has the slightest idea of how to do this, but at least there are some vague,
crazy possibilities.

Hypercomputation is even worse. To accomplish it in anything like physics as
we know it would require infinite time, space, and energy. In theory you can
get infinite time by putting your computer into just the right orbit around a
rotating black hole. Infinite space and energy are somewhat harder to come by.
(You know you're in trouble when you need to do something _harder_ than
putting something into a precise orbit around a rotating black hole.)

There's always the possibility of new physics. Nothing says it's _impossible_
for there to exist some fundamental particle which acts like a Turing oracle
when you poke it just right. But the chances seem pretty poor.

In short, yes, barring any massive physics breakthrough, treat
hypercomputation as a theoretical math construct, nothing more.

~~~
davrosthedalek
You don't have to put the computer in the right orbit; you have to put
yourself into the right orbit. Your time goes slower, which gives the computer
outside more time to work.

Please don't try it at home.

------
sddfd
How is the hypercomputation mentioned in the report related to a Turing
machine with oracle?

~~~
gfody
The paper is all about Turing machines with oracles:

"The paradigm hypermachine is the O-machine..."

------
colordrops
On a related note, anyone know anything about this company?

[http://memcpu.com/](http://memcpu.com/)

They claim to be able to solve NP-complete problems with polynomial resources
using non-Turing architectures. I'd normally call them crackpots but they have
serious pedigree and have been published in Science and Nature. I'm still
very skeptical, but don't have the chops to evaluate their claims.

~~~
mikeash
Can't a Turing machine emulate a digital circuit in polynomial time? Then
either they've proven that P=NP, or they're full of it.

~~~
colordrops
Supposedly it's analog, but the Aaronson article linked by others gets into
why that doesn't help either.

~~~
mikeash
I see. The page mentions "digital" a few times, but I guess they're either
only referring to part of the machine, or are using the term unconventionally.

------
nielsbot
would love a layperson-friendly summary of this. sounds interesting but I am
not a CS PhD or mathematician.

~~~
throwawayjava
(Some of what I say here isn't technically accurate; i.e., it's "close to the
right idea but technically wrong". Parent asked for a layperson's
explanation.)

There's a famous problem in CS called the halting problem. The halting problem
asks for a program that tells you whether an arbitrary program (Turing
machine) ever halts (a.k.a. stops executing). A function that could compute a
solution to the halting problem is called a halting function.

Turing machines can't compute a solution to the halting problem.

Therefore, if you add an _oracle_ for the halting problem to an otherwise
normal Turing machine, then the resulting model of computation is stronger
than Turing machines alone. An oracle is basically a magical box that answers
your questions in O(1) time, never mind how.
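An oracle machine can be pictured as an ordinary program with one extra, unimplementable call. The sketch below stands in for the magic with a bounded simulation, which is emphatically NOT a real halting oracle, just a stand-in showing the shape of the interface:

```python
def fake_halt_oracle(program, budget=10_000):
    """Stand-in for the magical box. `program` is a zero-argument
    generator whose yields count as computation steps. A real oracle
    would answer exactly; this one can only report 'halted within
    the budget', so it under-approximates the true answer."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return True          # program halted
    return False                 # inconclusive; the real oracle knows

def counts_to_ten():
    for i in range(10):
        yield i                  # clearly halts

def loops_forever():
    while True:
        yield                    # clearly never halts
```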

Merely assuming an oracle for the halting function is only one of many ways to
build something that does things Turing machines can't do.

For example, allowing infinities in various places where the classical theory
of computation assumes finite sets also sometimes increases the computational
power of machines. E.g., we normally assume that a Turing machine's tape is
blank, or initialized on only a finite subset of its cells. But by
initializing a Turing machine with a carefully selected infinite tape, you can
compute the halting function. Some other "now make the finite thing infinite
and then code up the halting function" constructions can be found in this
paper.
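The "carefully selected infinite tape" idea is simple once you see it: if cell i of the tape already stores whether program number i halts, the machine just reads the answer off. The whole difficulty is that nobody can compute the tape. A toy version with a tiny, hand-filled fragment (all values hypothetical):

```python
# Imagine the tape's cell i holds the (uncomputable) answer for
# program number i. Given the tape, "solving" halting is a lookup --
# which is exactly the point of Davis's critique: non-computable
# inputs buy you non-computable outputs.
halting_tape = {0: True, 1: False, 2: True}   # hand-filled toy fragment

def halts(program_number):
    """Trivial once the magic tape exists; producing the tape is the
    part no physical process is known to be able to do."""
    return halting_tape[program_number]
```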

If you pick one of these constructions, you can start to do all the classical
CS theory stuff in the context of a slightly different machine. It's
interesting to see which results transfer, which break down, and what
surprising new/nice results show up here but not in classical CS theory.

It's hard to say how useful any of this is, because all of these constructions
basically assume we already have at hand something that we have no idea how to
construct in reality. I.e., the whole paper begins with "assume false" as far
as us pragmatic engineering folk are concerned. But that's perhaps basically
the same thing Euclidean geometers said during the emergence of non-Euclidean
geometry, so perhaps this work will end up being as revolutionary as Turing
machines themselves and we just can't see exactly how yet.

~~~
nielsbot
thanks for this. very interesting!

