
Turing machines that should run forever - cab1729
https://www.newscientist.com/article/2087845-this-turing-machine-should-run-forever-unless-maths-is-wrong/#.VzNiKTPbcto.hackernews
======
davesque
To give some more context, this is the work of Adam Yedidia, Scott Aaronson's
PhD student at MIT. Here's a related blog post on Shtetl-Optimized:

[http://www.scottaaronson.com/blog/?p=2725](http://www.scottaaronson.com/blog/?p=2725)

~~~
leecarraher
man, again Shtetl-Optimized takes a bogus-sounding, garbage article from
New Scientist and exposes the actual important result. I remember New
Scientist's P=NP confusion on graph isomorphism, which Shtetl-Optimized
offered the only useful explanation of.

~~~
ikeboy
Aaronson is a coauthor on the paper, he's not just explaining the work, he did
the work.

~~~
davesque
Actually, if you read the post, you see him downplaying his involvement:

"Here’s our research paper, on which Adam generously included me as a
coauthor, even though he did the heavy lifting."

------
bertiewhykovich

      Z is designed to loop through its 7918 instructions forever, but if it did eventually stop, it would prove ZFC inconsistent. Mathematicians wouldn’t be too panicked, though – they could simply shift to a slightly stronger set of axioms.
    

Am I correct in thinking that this is completely wrong? Strengthening a set of
inconsistent axioms -- taking a "stronger" set of axioms to be a set of axioms
that can prove more statements -- will just produce another inconsistent set
of axioms. (The inconsistent set <A, B->~A, B> cannot be salvaged by extending
it to <A, B->~A, B, C> -- no matter what C is, you can always derive <A, ~A>
from the original three axioms.)
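
A brute-force truth-table check makes the point concrete. This is a small sketch (the `satisfiable` helper and the variable names are mine, for illustration): in propositional logic, consistency coincides with satisfiability, and adding an axiom only adds a conjunct, so an unsatisfiable set stays unsatisfiable no matter what you add.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def satisfiable(axioms, varnames):
    """True if some truth assignment makes every axiom true."""
    return any(
        all(ax(env) for ax in axioms)
        for vals in product([False, True], repeat=len(varnames))
        for env in [dict(zip(varnames, vals))]
    )

# The inconsistent set from the comment: A, B->~A, B
base = [
    lambda e: e["A"],
    lambda e: implies(e["B"], not e["A"]),
    lambda e: e["B"],
]
print(satisfiable(base, ["A", "B"]))  # False: no assignment works

# Extending with any new axiom C only adds a conjunct:
extended = base + [lambda e: e["C"]]
print(satisfiable(extended, ["A", "B", "C"]))  # still False
```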

Incompleteness is a whole other thing, of course.

~~~
saghm
I noticed the same thing while reading it. I assume that this is just a
mistake or misunderstanding by the author, and they actually meant "weaker".

Of course, switching to a weaker set of axioms would mean that some of the
things proved by the old set might not be provable anymore, which still seems
like an issue...

~~~
dnautics
As a mathematician, I appreciate the notion that a stronger set of axioms
means more power and a smaller, more sharply defined working universe, but I
think the author meant "stronger" against contradiction, which is consistent
with colloquial use, if not jargon.

------
dzdt
A Turing machine is just a computer program.

The Goldbach Conjecture example is just a program that loops through
successive even integers and tests that each can be written as the sum of two
prime numbers. If there is a counterexample to the conjecture, this program
will eventually find it. For these purposes, the authors don't care whether
the program computes efficiently; they are interested in the length of the
program. So brute force is fine.
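
That brute-force loop can be sketched in a few lines of Python (a hypothetical illustration with invented names, not the authors' actual machine or their compiler's output):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    """True if the even number n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search(limit=None):
    """Loop over even numbers >= 4, halting only on a counterexample.

    With limit=None this runs forever unless Goldbach fails -- exactly
    the behavior the Turing machine version encodes."""
    n = 4
    while limit is None or n <= limit:
        if not goldbach_holds(n):
            return n  # counterexample found: halt
        n += 2
    return None  # no counterexample up to limit
```

The interesting part of the paper isn't this loop; it's how few binary Turing machine states suffice to express it.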

The Riemann Hypothesis and ZFC programs similarly are just brute force
searching for counterexamples.

The existence of programs to search for counterexamples is not new or
surprising, so I am not sure which part of this work is new. I guess it is
the discussion of philosophical implications.

~~~
sp332
What's new here is putting a specific upper bound on the complexity of the
problem. A Turing machine using only binary symbols and at most 7,918 states
is enough to encode a halting question that ZFC can neither prove nor
disprove.

~~~
chriswarbo
To me, the use of Turing machines seems a little gimmicky; they're very far
removed from the way we actually program, or reason about programs. The
difficult problem here seems to be compilation: going from a usable program
definition, either formal or informal, down to the bizarre world of Turing
machines, either by hand or automatically.

To me, binary combinatory logic and binary lambda calculus seem like much more
sensible languages for performing these kinds of algorithmic information
theory experiments.

~~~
tromp
Scott commented on the Turing Machine vs lambda calculus approaches in:

[http://www.scottaaronson.com/blog/?p=2725#comment-1085117](http://www.scottaaronson.com/blog/?p=2725#comment-1085117)

showing a clear preference for the "physical" operation of Turing Machines.

In the end, complexities are not that far apart, considering that

[https://gist.github.com/anonymous/a64213f391339236c2fe31f874...](https://gist.github.com/anonymous/a64213f391339236c2fe31f8749a0df6)

describes a 27 state Goldbach checking TM, which takes no more than 378 bits
to encode, while

[https://github.com/tromp/AIT/blob/master/goldbach.lam](https://github.com/tromp/AIT/blob/master/goldbach.lam)

describes an equivalent 267 bit Lambda Calculus term, which can also be
represented as this

[https://github.com/tromp/AIT/blob/master/goldbach.gif](https://github.com/tromp/AIT/blob/master/goldbach.gif)

lambda diagram, as explained in

[http://tromp.github.io/cl/diagrams.html](http://tromp.github.io/cl/diagrams.html)

------
alistproducer2
This is the first time I feel like I missed out on something by not taking
Theory of Computation in college. Anyone care to break down what this article
is talking about?

~~~
dr_zoidberg
A Turing machine is a mathematical abstraction: the simplest machine that can
compute things. Because it is a mathematical abstraction, it's actually
impossible to build a true one, though you can restrict it and build one, of
course. In particular, we have real computers (more complex than Turing
machines, and with other restrictions).

These particular Turing machines were built to have special properties that
allow them to test mathematical axioms/theorems. For example, if the Z
machine halts, then it means the ZFC axioms are inconsistent. In the article,
this is stated in the negative form ("Z will run forever if ZFC is right").

Disclaimer: it's been a long time since I last read the formal definitions of
Turing machines, and my explanation may be sloppy -- but I aimed for
something general enough to give you the idea.

~~~
Jtsummers
Really the only thing preventing the construction of a _true_ Turing machine
is the inability to have an infinitely long tape. Fortunately, that limitation
isn't a problem in practice for most things you'd wish to compute.

At its simplest, a Turing machine consists of an infinite tape of memory
cells, a read/write head, a state register, and a state transition table (a
function). Read the current cell and look up the pair (current state, cell
value) to get the effect (new cell value, direction to move the head [left,
right, stay], new state).
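
That description translates almost line-for-line into a tiny simulator. This is a sketch with invented names (`run_tm`, the `invert` example machine), not code from the paper:

```python
from collections import defaultdict

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate the (tape, head, state register, transition table) machine
    described above. `table` maps (state, symbol) to
    (symbol_to_write, move, next_state) with move in {-1, 0, +1}."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # "infinite" tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Example machine: invert a binary string, then halt on the first blank.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_tm(invert, "10110"))  # -> 01001
```

The `max_steps` cap stands in for the one thing we can't build: unbounded time on an unbounded tape.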

 _Everything_ in real computers can be translated to a Turing machine
description. It just won't be efficient :P. And anything written for a Turing
machine will run on a real computer, modulo available memory.

------
mazsa
FYI: first-order logic and ZF set theory in Adam Yedidia's Laconic
[https://gist.github.com/sorear/f56f447e2684ab4593d758da4ef42...](https://gist.github.com/sorear/f56f447e2684ab4593d758da4ef4290a)
cf.
[https://groups.google.com/forum/#!topic/metamath/OumBq83Ksqs](https://groups.google.com/forum/#!topic/metamath/OumBq83Ksqs)

