
The Power of One-State Turing Machines (2018) - nabla9
http://www.nearly42.org/cstheory/the-power-of-one-state-turing-machines/
======
todd8
## Computability

Turing Machines are simple, purely hypothetical machines that have allowed
computer scientists and mathematicians to establish _very profound_ properties
of computation. Although they are mentioned with some frequency here, it's not
a subject that most of us have any training in.

I had the good fortune to study computer science while in college many years
ago. There was a lot less to learn back then and I was introduced to the
theory of computation at the undergraduate level.

For those that haven't had the pleasure of problem sets covering Turing
Machines but would like to learn something about them, I recommend the
_Stanford Encyclopedia of Philosophy_ entry on Turing machines [1]. It goes
into more depth than Wikipedia while being very easy to understand.

Turing machines are extraordinarily simple, much simpler than any real
computer. Turing Machines deserve an entry in a Philosophy reference because
of their power in elucidating the very nature of computation.

About 80+ years ago, it was discovered that the set of problems that can be
solved/computed by a Turing Machine is _exactly_ the same set that can be
solved with Church's lambda calculus [2] and the recursive functions
formalized by Gödel [3]. This result, surprising to me, led to the _Church-
Turing_ thesis: the hypothesis that any effectively computable problem (i.e.
one solvable through some mechanistic procedure) can be solved by a Turing
Machine. From
this we know, for example, that true algorithms don't exist for a whole host
of problems. This isn't the same claim that some problems are hard and don't
have tractable exact solutions (e.g. the knapsack problem or the traveling
salesman problem [4]); there are algorithms for these problems--they simply
take a very long time. Some interesting problems on the other hand have no
algorithm at all. This can be shown by demonstrating that no Turing Machine
could be constructed to solve the problem (e.g. the halting problem [5]).
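The core of that impossibility argument can be sketched in Python. Everything
below is my own illustration, not from the article: `make_paradox` builds,
from any claimed halting decider, a program the decider must misjudge.

```python
def make_paradox(halts):
    """Given any claimed decider `halts(f)` -- returning True iff f()
    eventually halts -- build a program that the decider gets wrong."""
    def paradox():
        if halts(paradox):
            while True:        # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return paradox

# Any concrete decider is defeated. E.g. one that always answers "loops":
p = make_paradox(lambda f: False)
p()   # returns immediately, contradicting the decider's answer
```

Since no decider survives this construction, no algorithm (and hence no
Turing Machine) can solve the halting problem in general.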

What about quantum computers? They look like an approach so different and so
powerful that maybe they will provide a stronger model of computation than
that limited by the bounds of Church-Turing. Unfortunately, the qubit-based
approaches that we are now pursuing have been demonstrated to be "equivalent"
to classical (Turing) machines, just with different time/space performance
tradeoffs. So qubit-based quantum computers may someday solve challenging,
exponential-time problems, but they won't solve every problem.

## Turing machines

Turing's insight was that any mechanical procedure for computation could be
modeled by a simple machine, the Turing Machine. A Turing Machine has a
"tape", think of it as an infinite strip of paper divided into cells, like an
infinite array: tape[k]. Each cell can hold a symbol from some finite
alphabet.

There is a single read/write head positioned at a location on the tape. The
head can read the symbol at that location or erase/write a symbol at that
location, and then it moves one location left or right.

Turing machines also have a state from some finite range of states, say 1...N.
One state is designated as the initial state.

Finally, there is a state transition table in the Turing Machine that
determines its operation. For each combination of (1) current state and (2)
current symbol under the read/write head, the transition table gives (a) the
new internal state and (b) the symbol that should be written on the tape at
the current location and (c) the direction the head should move--left or
right.
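A minimal simulator for the machine just described can be sketched in Python.
The sparse-tape representation, the blank symbol "_", and the example
"flip every bit" table are my own illustrative choices:

```python
def run_tm(table, tape, state="start", head=0, max_steps=10_000):
    """Run a Turing Machine until no rule applies (halt) or max_steps."""
    cells = dict(enumerate(tape))         # sparse tape: index -> symbol
    for _ in range(max_steps):
        symbol = cells.get(head, "_")     # "_" acts as the blank symbol
        if (state, symbol) not in table:  # no matching rule: the machine halts
            break
        state, write, move = table[(state, symbol)]   # (a) new state
        cells[head] = write                           # (b) symbol to write
        head += 1 if move == "R" else -1              # (c) head direction
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1))

# Example transition table: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_tm(flip, "0110"))   # -> 1001
```

The table entries mirror the (a)/(b)/(c) components described above: new
state, written symbol, and head direction.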

When I first saw the definition of a Turing Machine, I thought that it looked
unlike a computer. The state table seemed to encode the entire operation of
the machine. It could potentially be a huge table of specific operations to
solve some problem. Turing machines seemed to me to be special purpose
constructions unlike the more flexible architectures of programmable
computers. Indeed, we studied a number of different Turing Machines that would
solve simple problems with inputs from the tape.

The big surprise for me was seeing a _Universal Turing Machine_. It turns out
that there are Turing Machines, with a fixed and reasonably small state
transition table, that can do anything that _any_ other Turing Machine can do.
These "Universal" machines read a program from the tape that provides the
instructions for processing the remaining input data on the tape.

The idea of a Turing Machine doesn't rely on lots of states; a few are enough
to construct a universal machine. Likewise, there are Universal machines that
need only a tape extending in one direction, and tape alphabets with a very
limited number of symbols.

[1] [https://plato.stanford.edu/entries/turing-machine/](https://plato.stanford.edu/entries/turing-machine/)

[2] [https://plato.stanford.edu/entries/lambda-calculus/](https://plato.stanford.edu/entries/lambda-calculus/)

[3] [https://plato.stanford.edu/entries/recursive-functions/](https://plato.stanford.edu/entries/recursive-functions/)

[4] [https://en.wikipedia.org/wiki/Travelling_salesman_problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem)

[5] [https://en.wikipedia.org/wiki/Halting_problem](https://en.wikipedia.org/wiki/Halting_problem)

~~~
tzs
Another good resource, both for people whose CS education covered Turing
machines well and for people who don't know much about them, is Charles
Petzold's book "The Annotated Turing" [1].

It takes the paper where Turing originally introduced and analyzed Turing
machines, and explains it line by line, provides examples and corrections, and
historical and biographical material to put it all in the context of the time
and of Turing's life.

[1] [https://en.wikipedia.org/wiki/The_Annotated_Turing](https://en.wikipedia.org/wiki/The_Annotated_Turing)

------
javajosh
Okay I'll volunteer to be the dummy who says I don't get it. The cell of a
Turing tape can contain an arbitrary number? The head of Turing machine can
add?? I think my underlying assumptions have been violated, which means either
I have something interesting to learn or this is a bad result. Probably the
former.

~~~
progval
There are many ways to define Turing Machines:

* only binary alphabet, or allow any finite alphabet on the tape(s)

* tape is infinite in a single direction or in both (and various possibilities as to how to handle reaching the edge)

* there is one tape or there are more

* the head has to move every time, or can remain on the same cell

* etc.

And they are all equivalent in terms of computability (ie. they can compute
the same functions), and mostly equivalent in terms of algorithmic complexity
(ie. they take the same time to run, up to a polynomial factor).

So almost no one bothers actually saying which model they use, unless they are
doing a rigorous proof.

However, when papers like this one say "Hey, let's take my favorite model of
Turing Machines and restrict it so that [X]", then the results depend heavily
on what their favorite model is. Except they don't always bother saying which
one it is.

For example, if you restrict yourself to Turing Machines with a single state,
two tapes, and any alphabet allowed on at least one of the tapes, then that's
enough to emulate any one-tape Turing Machine, simply by writing the state on
a single cell of the second tape and repeatedly reading it. One-tape machines,
in turn, are known to be able to emulate any conventional model of Turing
Machine.
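That trick can be sketched in Python (my own illustrative code, not from the
paper): the simulator below runs an ordinary multi-state, one-tape transition
table, but keeps the "state" in cell 0 of a second tape, so the finite
control itself needs only one internal state.

```python
def one_state_emulation(table, tape, start_state, blank="_"):
    """Emulate a one-tape TM `table`: (state, sym) -> (state, sym, move),
    storing the state on a second tape instead of in the finite control."""
    work = dict(enumerate(tape))      # tape 1: the ordinary work tape
    state_tape = {0: start_state}     # tape 2: one cell holding the "state"
    head = 0
    while True:
        state = state_tape[0]                 # read the state off tape 2
        symbol = work.get(head, blank)
        if (state, symbol) not in table:      # no rule: halt
            break
        new_state, write, move = table[(state, symbol)]
        state_tape[0] = new_state             # write new "state" to tape 2
        work[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(work), max(work)
    return "".join(work.get(i, blank) for i in range(lo, hi + 1))

# A two-state parity machine: scan right over 1s, then append E (even) or
# O (odd). Its two states live entirely on the second tape.
parity = {
    ("even", "1"): ("odd", "1", "R"),
    ("odd", "1"): ("even", "1", "R"),
    ("even", "_"): ("halt", "E", "R"),
    ("odd", "_"): ("halt", "O", "R"),
}

print(one_state_emulation(parity, "111", "even"))   # -> 111O
```

The alphabet of the second tape grows with the state count of the emulated
machine, which is exactly why this model needs an unrestricted alphabet.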

~~~
daveFNbuck
Looking at the first page of the linked PDF, it's not explicit about whether
the alphabet must be finite, although it's implied by the summary saying that
addition can only be done modulo some finite value.

It is explicit about having an arbitrary alphabet, a single tape that's
infinite in both directions and that the head has to move every time.

I don't think I've ever seen anyone restrict a Turing machine in a novel way
and not follow that up with a rigorous proof.

------
beaker52
I've never understood what the fascination with Turing machines is or what
they actually are.

~~~
kd5bjo
It’s one of two(1) concurrently-developed mathematical formalisms that each
have been proven to be at least as powerful as any possible algorithm, and
were in fact used to codify what an “algorithm” is in a formal sense. It had
only been a vague, intuitive concept before.

(1) The other is Church’s lambda calculus

~~~
beaker52
This is why I've never understood what it is. I see words, (and it's not that
I don't understand what those words mean) but I simply don't know what is
meant by them arranged in this way.

There's some understanding that I'm fundamentally lacking which I've been
searching to correct for years. I've put it down to "Math" which seems to be a
whole impenetrable language itself. A language that my lack of understanding
of (despite many unguided attempts) means I'll continue to be downvoted and
snorted at when I say I still don't understand what a Turing machine or a
monad actually is.

~~~
caspper69
You are correct; you can learn what these things "are," but without at least
_some_ of the mathematics, it's going to be tough to fully grasp.

And yes, mathematics is a language; it is _the_ language for modelling the
phenomena we experience in the world, including the non-physical (contracts,
data / counting, etc.).

I so, so, so wish that people were taught that mathematics is a language of
relationships rather than one of rote numerical calculation. I think more
people would stick with it. It's a shame, because the doors it opens are huge.

~~~
The_rationalist
I don't quite understand how mathematics is better suited to modeling the
world than a programming language.

~~~
caspper69
The short answer: they are not.

The long(er) answer? Mathematics are more limited, and thus, at least on the
surface, provide a more rigid model.

Plus, we have about 4500 years worth of equations that we know to be accurate;
not so true for C or any other programming language. Plus, we can model all
the way down in mathematics, as it is more abstract in nature.

The long(est) answer? It's turtles all the way down.

~~~
kd5bjo
> Mathematics are more limited

The fundamental point of the Church-Turing thesis is to show that everything
computers can do can be described by mathematics, so any programming language
has either the same or less power than math. They are certainly easier to use
for many problems, though.

Additionally, Gödel showed that there are true statements that can be
described but not proven by math, so it can never be a perfect model of the
real world.

~~~
progval
> Additionally, Gödel showed that there are true statements that can be
> described but not proven by math, so it can never be a perfect model of the
> real world.

I assume you are referring to Gödel's incompleteness theorems; but this is not
what they "say".

What they say is that any formal system powerful enough to contain basic
arithmetic is either inconsistent (ie. it can both prove and disprove some
particular theorem) or incomplete (ie. there are theorems it can neither
prove nor disprove).

If you take the ZF set theory, for example, it is incomplete, because some
statements cannot be either proved or disproved within ZF, such as the Axiom
of Choice (AC) or Zorn's Lemma.

However, that doesn't imply that ZF cannot be "a perfect model of the real
world", because the AC is irrelevant there: it only applies to infinite sets,
which do not exist in the real world.

