
A collection of things that are Turing-complete by accident (2013) - chii
http://beza1e1.tuxen.de/articles/accidentally_turing_complete.html
======
toomanybeersies
Missing from the list is Dwarf Fortress:
[http://dwarffortresswiki.org/index.php/DF2014:Computing](http://dwarffortresswiki.org/index.php/DF2014:Computing)

Great when you want to try to create a computer while fending off goblin
attacks and zombies.

Technically, it's not Turing Complete, as the "tape" is not unlimited, but
other Turing Complete games have the same issue.

~~~
simias
If you limit Turing Complete strictly to systems with unlimited "tape", then
no computer would qualify, except arguably if you stream data in and out, and
even then only if the universe is infinite.

------
carlsborg
The paper where they build logic gates with BGP is fascinating. Thanks for
posting this.

"In this paper, we answer computational complexity questions by unveiling a
fundamental mapping between BGP configurations and logic circuits. Namely, we
describe simple networks containing routers with elementary BGP configurations
that simulate logic gates, clocks, and flip-flops, and we show how to
interconnect them to simulate arbitrary logic circuits."

------
nothis
>Super Mario World

Okay, with some script...

>Here is a video, where a human (not a script!) creates a playable Flappy Bird
clone this way.

Wait, what?!

~~~
ericfrederich
Super impressive. I wouldn't count this as being turing complete. You could
say that the SNES itself is turing complete. He successfully got a turing
complete machine to run arbitrary code. It was the SNES itself running this
code he injected, not some rule engine built into the game. This is much
different than getting CSS or HTML to run arbitrary code.

~~~
johnnyfived
Still counts since he managed to create the working method to inject the code
which allows the game to be Turing complete. A medium with no way of entering
or traveling through it isn't really a medium.

------
chengiz
> The C preprocessor is only Turing-complete, if executed in a loop, which
> feeds the output to the input ad infinitum.

I was momentarily confused by this until I realized there shouldn't be any
commas in that sentence.
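To make the quoted claim concrete: a single pass does only a bounded amount of rewriting, but feeding each pass's output back in as its input gives unbounded computation. Here's a toy Python sketch of that feedback loop (my own illustration; the rewrite rule is invented and is not real cpp macro semantics):

```python
def one_pass(text: str) -> str:
    # One "pass" of a toy rewriter, standing in for a single run of the
    # C preprocessor: it does a strictly bounded amount of work.
    # Invented rule: peel one SUCC(...) wrapper and emit a unary "1".
    if "SUCC(" in text:
        return text.replace("SUCC(", "", 1).replace(")", "1", 1)
    return text

def run_to_fixpoint(text: str) -> str:
    # "Executed in a loop which feeds the output to the input ad infinitum":
    # iterate the bounded pass until nothing changes.
    while True:
        nxt = one_pass(text)
        if nxt == text:
            return text
        text = nxt
```

So `run_to_fixpoint("SUCC(SUCC(0))")` needs several passes before it settles, which is exactly the capability a single pass lacks.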

------
0xdada
Isn't the bug in Super Mario just exposing the underlying system's Turing
Completeness?

~~~
bena
Yeah, I don't think arbitrary code execution counts as the game itself being
"Turing complete".

~~~
OscarCunningham
On the other hand, what is "Super Mario World" if not the game as it runs on
the SNES?

~~~
euyyn
Yeah, but it's a different kind of fuckup: designing a system with a
vulnerability that lets you "escape" into the underlying machine vs. designing
a system that, per se, turns out to be Turing-complete.

------
egeozcan
Also TypeScript Type System:
[https://github.com/Microsoft/TypeScript/issues/14833](https://github.com/Microsoft/TypeScript/issues/14833)

------
Gormisdomai
A nice reminder that not all Turing-Complete things were created equal.
Programming anything using these is a nightmare.

(Except maybe the template / type system examples - I hear that's a lot like
programming in Haskell)

~~~
lmm
C++ templates are like programming in Haskell _without a type system_ - all
of the pain, none of the gain.

------
catnaroek
It would actually be more interesting if someone made a list of useful systems
that _cannot_ perform general computation. Some examples:

(0) Parsers for context-free grammars

(1) Relational algebra, and the subset of SQL that maps to it

(2) Hindley-Milner type inference

------
j_m_b
Someone wrote an article demonstrating that Magic: The Gathering is Turing
Complete:
[https://www.toothycat.net/~hologram/Turing/HowItWorks.html](https://www.toothycat.net/~hologram/Turing/HowItWorks.html)

------
xucheng
Another one missing is the x86 mov instruction.

[https://github.com/xoreaxeaxeax/movfuscator](https://github.com/xoreaxeaxeax/movfuscator)

------
Asdfbla
It's very cool how such a level of computational power is so seemingly easy to
reach. I guess it's not surprising if you consider how simple Turing machines
themselves are, but I'm always amazed that this is really enough.

~~~
vog
The problem is exactly the reverse: from a security and analysis point of
view, you want languages that do the job with the _least possible power_. And
those are surprisingly hard to find.

You start with simple and not-too-powerful languages (rule systems, regexes,
parsing stuff), note they are insufficient for some real-world use case, add
one or two features, and bam!, that mess becomes Turing complete.

~~~
willtim
Strongly agree. With more constraints in the language, one gains greater power
to reason. This is what the Ethereum creators are learning the hard way. I
also think folks would be surprised with how much can be accomplished without
Turing completeness. For example, merge sort can be implemented as a simple
functional unfold, or "primitive corecursion" in a total language.
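A rough Python rendering of that idea (my own sketch, not the commenter's code): bottom-up merge sort in which every loop's bound is known before the loop starts, which is the shape a total language can express directly without general recursion:

```python
def merge(xs, ys):
    # Merge two sorted lists; the loop bound, len(xs) + len(ys), is fixed
    # before the loop begins.
    out, i, j = [], 0, 0
    for _ in range(len(xs) + len(ys)):
        if i < len(xs) and (j >= len(ys) or xs[i] <= ys[j]):
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out

def merge_sort(xs):
    # Bottom-up merge sort: start from singleton runs and repeatedly merge
    # adjacent pairs. The outer loop runs at most ceil(log2(n)) times, a
    # bound computable up front -- termination is structural.
    runs = [[x] for x in xs]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

Python itself gives no totality guarantee, of course; the point is that every loop here is bounded, so a total language can accept the same structure.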

~~~
pron
This is a very common misconception. While Turing-complete computational
models "suffer" from the halting problem, which makes arbitrary "reasoning"
uncomputable, 1. that is not quite the problem that makes reasoning hard, and
2. it does not mean that reasoning in weaker models is _feasible_.

1. What makes reasoning hard is not quite the halting theorem as normally
stated, but a generalization of it that's sometimes called "bounded halting",
and was used in the proof of the time-hierarchy theorem, possibly the most
important theorem in theoretical computer science. Bounded halting _roughly_
states that it's impossible to know in under N steps whether an arbitrary
program P would halt when operating on input X in fewer than N steps. A simple
corollary also shows the impossibility of generalizing the program's operation
from one input to another. In short, this theorem means that it's impossible
to know what a program would do for any input any faster than running it for
every input. It can roughly be stated as saying that the verification or
"reasoning" complexity for a program with a state space S of size |S| is
θ(|S|) (where theta is like big-O, except it roughly means "no more and no
less than", whereas big-O means "no more than"). Most importantly, _this
theorem does not require the programming model to be Turing complete_. For
example, it applies directly -- using the same proof -- to programming models
that only allow primitive recursive functions.

2. Even the simplest computational model -- the barely useful finite-state
machine -- is already too complicated for arbitrary reasoning. Its
verification complexity also happens to be θ(|S|) (but with a different proof
than that of the bounded halting theorem), where |S| is usually _at least_
exponential in the size of the program. In general it can _very_ easily be
shown that if your programming languages has nothing more than boolean
variables and the ability to write loops of length of no more than 2 or,
alternatively, it has subroutines even if they cannot be recursive and there
are no first-class subroutines -- reasoning is also infeasible (it's PSPACE-
complete) and you might as well be Turing complete, as that won't make the
worst-case any harder in practice (it makes no difference if the worst case is
truly infinite or "just" much longer than the expected lifespan of the
universe). The above fact means it is provably impossible to create a
programming language -- no matter how limited -- where every program would be
easily verifiable. To make a language feasibly verifiable in all/most cases,
its state space must grow slowly (polynomially) with program size; this means
that not only must the programming model be weak (basically just FSM), but the
language exceptionally inexpressive. Regular expressions are pretty much the
farthest we can go, and even then some simple questions about regexps are
infeasible (such as whether two regexps are equivalent; I believe that problem
is PSPACE complete, even though there are heuristic algorithms that work in
many instances).
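A small sketch of the state-space blowup being described (my own illustration, with an invented toy "program"): even with nothing but boolean variables, exhaustive verification must visit 2^n states:

```python
def reachable_states(n_vars, step):
    # Exhaustive reachability for a system whose state is n boolean
    # variables. The state space has 2**n_vars states, so any analysis
    # that must visit each state scales exponentially with program size.
    frontier = {(False,) * n_vars}
    seen = set(frontier)
    while frontier:
        nxt = {step(s) for s in frontier} - seen
        seen |= nxt
        frontier = nxt
    return seen

def increment(state):
    # Toy "program": an n-bit binary counter (each step adds 1, wrapping).
    bits = list(state)
    for i in range(len(bits)):
        bits[i] = not bits[i]
        if bits[i]:        # no carry needed, stop
            break
    return tuple(bits)
```

With just 10 boolean variables the counter already reaches all 2**10 = 1024 states; each added variable doubles the work.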

For further discussion and complete results, see my post:
[https://pron.github.io/posts/correctness-and-complexity](https://pron.github.io/posts/correctness-and-complexity)

~~~
adwn
You're not wrong when it comes to the theoretical, general case. However, you
shouldn't forget that in practice, we're not interested in _every_ possible
problem instance. A weaker model often allows for better heuristics.

Take SAT solvers, for example. Since propositional logic encoded in CNF is so
restrictive, we can find and implement relatively simple heuristics that solve
huge, real-world problem instances in a reasonable time. The same
propositional formula can also be represented by a C program which is much,
much smaller [1], yet the best heuristic to determine whether the C program
outputs 1 for any of its possible inputs, might be to run it once for each
input combination (which would take longer than the age of the universe for
even 10k variables).

[1] "Proof": the CNF is typically generated from a hand-written program in a
practical programming language; therefore, the program represents the
propositional formula.

~~~
pron
> A weaker model often allows for better heuristics.

Do you have any evidence for that? I have not seen this claim made anywhere.
The reason is that heuristics apply in cases where the problem at hand can be
stated in terms of a lower-complexity problem, and that happens regardless of
the model. Is there empirical data showing this happens more often in
restricted models? Intuitively, I see no reason why it should.

> Take SAT solvers, for example.

SAT is NP-complete, which is (well, we don't know for sure, but that's the
working hypothesis) _much_ easier than even verification of FSMs (PSPACE-
complete). We don't have any similar working heuristics for, say, TQBF
(PSPACE-complete), and even SAT solvers fail often.

> yet the best heuristic to determine whether the C program outputs 1 for any
> of its possible inputs, might be to run it once for each input combination
> (which would take longer than the age of the universe for even 10k
> variables).

Most automatic software verification doesn't work like that at all, but
through a process called abstract interpretation which can conservatively
project the problem onto a much smaller state space (that is basically one way
to look at how type systems work). The beauty of SAT solvers is that they
require no such semantic approximations, and appear to work on arbitrarily
difficult problems (except when they don't). So far, they work by magic -- we
don't know why they're so effective in practice. But the hope of finding such
a general algorithm that happens to work on many real-world instances of
problems that are _at least_ PSPACE- or 2EXP-hard is not great. It all comes
down to studying special cases, and I don't see a particularly good reason
(and I haven't encountered any research to that effect) to think that some
restricted language would make such "good" instances more likely.

~~~
adwn
> _Is there empirical data showing this is happens more often in restricted
> models?_

Are you asking for studies? How would you even rigorously define the
hypothesis?

> _SAT is NP-complete, which is [...] much easier than even verification of
> FSMs (PSPACE-complete)._

So? How does that invalidate my example?

> _The beauty of SAT solvers is that they require no such semantic
> approximations, and appear to work on arbitrarily difficult problems (except
> when they don't). So far, they work by magic -- we don't know why they're so
> effective in practice._

But that's the point! For real-world problem instances, you'll find that
automatically SAT solving a CNF model works much better than applying an
automatic theorem prover to the C program that generated the CNF model. That's
because in practice, it's easier to design heuristics that work on a
propositional formula than it is to design heuristics that work on
CNF-generating C programs. If that were not true, then we wouldn't need SAT
solvers – we'd just run the automatic theorem provers on the C programs that
generate the CNF model.

> _I don't see a particularly good reason (and I haven't encountered any
> research to that effect) for how to make such "good" instances more likely
> using some restricted language._

And yet it works extremely well in the case of the CNF-generating program vs.
SAT solving the generated CNF model. That is a pretty strong argument; you
cannot just dismiss it because PSPACE is harder than NP.

~~~
pron
> Are you asking for studies? How would you even rigorously define the
> hypothesis?

Preferably studies, but even intuition would do at this point. You don't need
to rigorously define the hypothesis, just demonstrate which kinds of patterns
susceptible to commonly applied automated reasoning occur more frequently when
the same application is written in a TC or non-TC language. The thing is that
it's often easy to recognize special cases even in TC languages (e.g.,
deducing that a function is total in a TC language is easy in those cases
where the function resembles those used in non-TC languages).

I'm not saying languages can't make verification easier. For example,
recognizing a terminating loop may be a bit easier in a language with `for`
loops than in a language that only has a `goto` construct, and a language with
less/no aliasing makes local reasoning much easier; I'm saying that that help
does not stem from restricting the computational model (just as removing
`goto` does not reduce the computational power). Turing-completeness in itself
is a red herring when it comes to the question of how languages can help
verification.

But I don't understand even the general line of your argument. Even supposing
that SATing a CNF formula automatically generated from C is easier than other
approaches (it isn't), this does not show that writing CNF directly is more
advantageous than writing C. After all, a simple mechanical translation
exists, so why should I write CNF formulas if writing equivalent C code is
easier?

> you'll find that automatically SAT solving a CNF model works much better
> than applying an automatic theorem prover to the C program that generated
> the CNF model.

It doesn't, though. Many automated verification problems are easier to handle
using abstract interpretation than SMT.

> So? How does that invalidate my example?

That a magical 50-year-old algorithm that no one understands to this day
happens to exist for the very simplest of intractable problems does not mean
that we can hope for similar success for much, much harder problems. When it
comes to program verification, SAT (used in SMT) is effective only in a very
controlled and limited way (I know because I use formal verification). No one
believes SMT solvers could be extended to verify full programs.

~~~
adwn
> _but even intuition would do at this point._

Maybe this example helps: it should be easier to write analysis heuristics for
a language that only allows loops of the form "for i = 1...n" than for one
which also has while-loops, gotos, and unrestricted C-style loops. Maybe not
from a theoretical point of view – after all, both languages are equally
Turing-complete – but from a practical one, since there are more language
constructs that have to be covered by the analysis. Sure, you can mechanically
translate all loops from the second language to "for i = 1...n" loops, but
that transformation is not trivial and has to be implemented first.
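As a toy illustration of such a restricted language (my own sketch; the statement names are invented): an interpreter whose only loop form reads its trip count once on entry, so every program terminates by construction and an analyzer never has to answer "does this halt?":

```python
def run(prog, env=None):
    # Interpret a tiny language with only counted loops. Statements:
    #   ("set", var, constant)   ("add", var, other_var)
    #   ("loop", count_var, body)
    env = dict(env or {})
    for stmt in prog:
        op = stmt[0]
        if op == "set":
            _, var, val = stmt
            env[var] = val
        elif op == "add":
            _, var, src = stmt
            env[var] += env[src]
        elif op == "loop":
            _, count, body = stmt
            # The bound is read once on entry; even if the body changes
            # the counter variable, the trip count stays fixed.
            for _ in range(env[count]):
                env = run(body, env)
    return env

# 3 * 4 by repeated addition: the loop bound is known before the loop runs.
env = run([("set", "x", 3), ("set", "acc", 0), ("set", "n", 4),
           ("loop", "n", [("add", "acc", "x")])])
```

Because the trip count is fixed on entry, even a body that increments its own counter still runs the originally specified number of times.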

> _But I don 't understand even the general line of your argument. Even
> supposing that SATing a CNF formula automatically generated from C is easier
> than other approaches (it isn't), this does not show that writing CNF
> directly is more advantageous than writing C. After all, a simple mechanical
> translation exists, so why should I write CNF formulas if writing equivalent
> C code is easier?_

My point isn't that we should hand-write CNF formulas. What I'm saying is,
that automatically solving CNF formulas is easier than automatically solving a
C program that expresses the same thing.

Example: a Sudoku solver. Given a partially filled 9x9 matrix,

* Program 1 takes as input an assignment for the unfilled fields (free variables). It outputs yes/no indicating whether the input satisfies the Sudoku rules.

* Program 2 generates a CNF instance which encodes the Sudoku rules and the partially filled matrix. The variables represent the unfilled fields.

Program 1 and the output of program 2 are equivalent in the sense that they
encode the same problem instance, but the CNF generated by program 2 is in a
much more restricted language than program 1. You can write a decent (although
not state-of-the-art) CNF parser and SAT solver in a few 100 lines of code,
but you will be hard-pressed to write even the parser for a general C program
in that space, let alone something that can understand C's semantics, an
automatic solver using abstract interpretation, and some heuristics to speed
things up (only those heuristics are allowed to be somewhat tailored to this
type of problem, everything else has to be general-purpose – just like the SAT
solver, which is general purpose and has a few heuristics that speed up many
practical instances).
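For concreteness, here is roughly what a fragment of program 2's output looks like for a single cell (my own sketch; the variable numbering is invented): the "exactly one digit per cell" rule as CNF clauses:

```python
def exactly_one(vs):
    # Encode "exactly one of vs is true" as CNF. Variables are
    # DIMACS-style integers; a negative literal means "not".
    clauses = [list(vs)]                     # at-least-one clause
    for i in range(len(vs)):
        for j in range(i + 1, len(vs)):      # at-most-one: forbid each pair
            clauses.append([-vs[i], -vs[j]])
    return clauses

def satisfies(clauses, true_vars):
    # A clause holds if at least one of its literals is true under the
    # assignment (a set of the variables that are true).
    return all(any((lit > 0) == (abs(lit) in true_vars) for lit in clause)
               for clause in clauses)

# A cell that may hold digits 1..9 becomes variables 1..9 and 37 clauses
# (one at-least-one clause plus C(9,2) = 36 pairwise exclusions).
cell = exactly_one(list(range(1, 10)))
```

This flat clause list is the "much more restricted language" in question: a SAT solver only ever sees disjunctions of literals, never C control flow.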

~~~
pron
> It should be easier to write analysis heuristics for a language that

Quite possibly, and that's exactly what I said. Languages may assist formal
reasoning (although _how_ is not always clear[1] and your example may or may
not be valid -- the transformation in the well-behaved cases may be quite
easy; compilers and static analyzers do such analysis all the time), but
that's not due to a weakening of the computational model.

> What I'm saying is, that automatically solving CNF formulas is easier than
> automatically solving a C program that expresses the same thing.

Right, but as the transformation is mechanical, there is no reason to prefer
one language over the other. They're equivalent -- in this particular scenario
-- from both a theoretical and a practical point of view. Static analyzers
commonly perform transformations -- from the programming language to SMT-LIB
and/or using abstract interpretation.

> but you will be hard-pressed to write even the parser for a general C
> program in that space

Writing a compiler for Pascal is much harder than writing an assembler. That's
not an argument for writing assembly by hand over writing Pascal.

[1]: One example is a pure-functional language like Haskell vs. an imperative
one like Java. Well-behaved Haskell allows for equational reasoning while Java
"only" allows for assertional reasoning, but the common use of higher-order
functions makes some automatic reasoning methods much harder than the familiar
and powerful assertional reasoning approaches.

------
unit91
That Minecraft CPU video from the article was amazing. It never would have
even occurred to me to make such a thing.

~~~
etskinner
With the advent of pistons in the game, people have even added hard drives[0]
by using a loop of glass (non-conductive) and other blocks (conductive).

[0] [https://youtu.be/q7clz1TPK8o](https://youtu.be/q7clz1TPK8o)

~~~
mrguyorama
More accurately, delay line memory

~~~
Doxin
It's not a delay line though. It's linear access, sure, but you can decide
when and if to step it.

~~~
mrguyorama
Shame on me for not watching the video then. I (rather short-sightedly)
thought you were referring to a separate implementation that WAS a delay line.

That's a significant improvement then

------
godelski
Here is the current world record for Super Mario World (Kill Bowser), which
was recently set. He does reprogram the game (the video includes an
explanation).

[https://www.youtube.com/watch?v=YZZhbTtsTts](https://www.youtube.com/watch?v=YZZhbTtsTts)

------
virtualfudge
Redstone in Minecraft could be added to the list.

------
blubb-fish
Looks like a really cool - yet expensive - hobby!

