Accidentally Turing-Complete (tuxen.de)
108 points by ggreer on Oct 19, 2013 | hide | past | favorite | 48 comments

"Mean time to turing completeness" is a mental metric that I use to decide whether making a domain-specific language is warranted or it would be better to just use a general purpose language. If you can really limit the scope of the task to something non-turing complete, then a DSL might be a good idea. If you don't think you can keep the thing from becoming turing complete, you're going to fall prey to Greenspun's Tenth Rule [1] and you might as well skip all the nonsense and just use a Lisp in the first place (or another well-designed, fully powered language).

[1] http://en.wikipedia.org/wiki/Greenspun's_tenth_rule

Deliberately limiting the power of your (domain specific) language can be a useful thing to do, too.

As an example, SQL (ignoring the crazy bits) is so amenable to query optimization precisely because it is not Turing complete.

This has been codified in the Rule of Least Power: http://en.wikipedia.org/wiki/Rule_of_least_power

Indeed. Bitcoin transactions all contain validation scripts that are run through a stack-based interpreter. This Forth-like language lacks loops, but even after years in production, the current client still only accepts a small set of standard templates.

This shows up all the time in the myriad Haskell eDSLs. Monad-based interfaces are obvious but throw away information that is useful for optimization (see http://homepages.inf.ed.ac.uk/slindley/papers/unembedding.pd... for how to get some of it back). Applicative interfaces are really amenable to optimization, though (see Facebook's Haxl project: http://www.reddit.com/r/haskell/comments/1le4y5/the_haxl_pro...)

Applicatives were the example that I had in mind. See also http://gergo.erdi.hu/blog/2012-12-01-static_analysis_with_ap..., or think about how regular expressions' limited power makes parsing with them a linear affair. (Incidentally, regular language parsers are Applicatives, too.)
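Regular expressions' linear-time guarantee can be sketched by simulating an NFA with state sets, one input character at a time. This is a toy, hand-built NFA for the pattern a(b|c)*d (my own illustrative encoding, not anything from the linked posts):

```python
# Set-of-states NFA simulation: each input character is processed once,
# so matching is linear in the input length (times a constant bounded
# by the number of NFA states).

# Hand-built NFA for the pattern a(b|c)*d: state -> {symbol: next states}
NFA = {
    0: {'a': {1}},
    1: {'b': {1}, 'c': {1}, 'd': {2}},
}
START, ACCEPT = 0, 2

def matches(s):
    states = {START}
    for ch in s:
        # Advance every live state on this character; no backtracking.
        states = {nxt for st in states
                      for nxt in NFA.get(st, {}).get(ch, set())}
        if not states:
            return False  # no live states left: fail early
    return ACCEPT in states
```

Because the live-state set can never exceed the (fixed) number of NFA states, there is no exponential backtracking; that is exactly the limited power that makes regular languages so amenable to static analysis.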

Ah! That was the blog post I was looking for... but I kept searching for "Cactus Applicative" with no luck.

Apache rewrite rules, which isn't that surprising if you've seen as many abominations of config as I have.


I'm reasonably sure that one was intentional. Or at least, noted by the creator upon creation.

The notoriously complex config syntax of sendmail has also been proven to be Turing complete: http://okmij.org/ftp/Computation/sendmail-as-turing-machine....

Many of these have entries in the 99 Bottles of Beer website http://www.99-bottles-of-beer.net/ .

Konrad Zuse's Z3 computer belongs on the list https://en.wikipedia.org/wiki/Z3_(computer) , making it accidentally the first ever actually-constructed Turing-equivalent* computer. There may be some kind of lesson there. (Zuse evidently wasn't unaware of the possibility of Turing-complete computation though: he designed the first high-level programming language https://en.wikipedia.org/wiki/Plankalk%C3%BCl before the end of the war.)

* (Or memory-limited finite approximation to a Turing-equivalent computer, like any other actual computer, of course.)

Server Side Includes are Turing complete http://www.janschejbal.de/projekte/ssituring/

Another example of this that I love is that you can build logic gates in Transport Tycoon (well, OpenTTD) using trains and rail signals: http://www.tt-forums.net/viewtopic.php?f=29&t=37902.

I'd add to that list XSLT: http://www.unidex.com/turing/utm.htm

XSLT is somewhat explicitly a programming environment: that's how people use it, and the extent to which its syntax was awkward in XSLT 1 was made almost reasonable in XSLT 2. Even in XSLT 1, the only mental gymnastics you have to do is say "function" every time you see "template": the language already has "parameters" and a "call" primitive, so it is a straightforward functional language. What makes this list interesting is that these things have to be twisted in weird ways to become programming environments. Putting XSLT on the list is more like putting Brainfuck on the list: it's irritating to program in, but it isn't surprising that it's Turing complete.

I would advise that everyone take more care when assessing the Turing completeness of various models of computation. Many analyses that I have seen check for logical completeness, but forget to rigorously check that you have scalable and fully accessible memory, even in the most generalized cases. Even if you assume that some resource is unbounded, your other mechanics might not actually allow you to interface with it, even if they work at a constant/small scale (locality of computation does not necessarily imply this). You could give a Turing machine its infinite tape, but forget to show that you can move the head left or right an arbitrary number of times. On the flip side, another ridiculous assumption involves including the power of human operation. Yes, an abacus is "Turing complete" if it is infinitely large and you, as a human being, do the logic in your head. Pen and paper is Turing complete by the same reasoning (this is ridiculous). Yes, we know that humans are capable of computation. I might grant that you have scalable memory in these situations, but where's the automation of your logic?
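To make the two requirements concrete (a purely mechanical transition table, plus a tape you can actually grow and traverse), here is a minimal Turing-machine sketch. The bit-flipping machine is a made-up example of my own, not something from the thread:

```python
# Minimal Turing machine: a dict-backed tape grows on demand, and the
# transition table is the only "logic" involved -- no human in the loop.
from collections import defaultdict

def run(tape_str, max_steps=10_000):
    # Unwritten cells read as the blank symbol '_'.
    tape = defaultdict(lambda: '_', enumerate(tape_str))
    # (state, symbol) -> (write, move, next_state)
    # Example machine: flip every bit until the first blank, then halt.
    delta = {
        ('flip', '0'): ('1', +1, 'flip'),
        ('flip', '1'): ('0', +1, 'flip'),
        ('flip', '_'): ('_', 0, 'halt'),
    }
    state, head = 'flip', 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move  # unrestricted head movement: the component that
                      # informal "proofs" often forget to establish
    return ''.join(tape[i] for i in sorted(tape) if tape[i] != '_')
```

The point of the sketch: both the unbounded, reachable tape and the automated transition function have to be exhibited; drop either one and you no longer have a model of computation.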

Here's an example of the first: I suspect that the in-game mechanics of a generalized version of Minecraft are not Turing complete. By generalized, I mean that you remove the chunk-loading and update limitations, giving you a countably large playing field. I am, of course, not considering limitations (or benefits) of Java, the machine running it, or unintended behavior (bizarre bugs). Forgo things like command blocks and bizarre uses of spawn eggs/entities; just consider legitimately obtainable resources (we can assess TC with these later).

To briefly sketch the idea: when you have a constant-sized machine (accessing arbitrarily sized input), the selected mechanics are too limited by construction to "grow" the equivalent of a Turing machine's work tape. A superficial reason for this is that the mechanics are coded to only extend locally (note that this is distinct from the "computation is local" result). For example, pushing with pistons only goes out 11 blocks before it fails. You can chain multiple of these together, but only a constant number. You also need a controllable Redstone signal that extends far enough to change their states. With a bit more work, you can show that mechanical self-propulsion using pistons is not possible (there's a giant thread on the Minecraft Forums concerning this), even in one direction. Three simple reasons for this: 1. you cannot push extended pistons, 2. you cannot push/pull/power your back-most piston without "leaking" blocks, and 3. you cannot make a pushable clock. Your ability to shift around and access your tape/memory, or to move your Turing "head"/logic, in this approach goes out the window.

For block-based memory (to make use of infinite "block space") in general, you not only have to move an arbitrary number of blocks to locations an arbitrary distance away, but you also have to autonomously generate and destroy an arbitrary number of blocks. Cobblestone, Ice, Pumpkins, and Melons all do this and are thankfully Redstone-distinguishable, so we at least have our endless supply of symbols for such a tape. There are additional requirements and engineering challenges, but they fall outside the purpose of this post.

Issues like these are not considered in almost all of the claims of Minecraft's Turing completeness that I have seen. They absolutely should be. Obviously there may be ways around this, but I hope my point is clear. Make sure the applications of your "proofs" or constructions actually make sense. Make sure they demonstrate how to handle the memory and the logic (algebra) necessary. And for god's sake, make sure that your model is actually an automaton (I'm looking at you, Magic and LBP).

On a side note, I did start compiling a paper on the computational complexity of Minecraft a while ago, but left it unfinished. It goes through various generalizations, starting from the most basic (just blocks, redstone, etc.), then adding components like randomness, command blocks, and entities, and even some structural assumptions such as superflat worlds and tiled machines (as works for Conway's Game of Life). I lost some motivation because I did not believe people would care enough for it to be worth my time. Even if I were to pick it up again, the game has changed quite a bit since 1.7 came out, and my results might be outdated (new command block functionality changes things greatly). However, if others show interest, I could perhaps start working on it again.

Magic is just as Turing complete as a Turing machine is. Magic is an abstract system, not pieces of cardboard. Just like a Turing machine is an abstract idea rather than a physical box. You can run the logic of a magic game using circuits, or gears, or humans, but it's fundamentally based on taking a set of rules and implementing them. The reason pen and paper is different is that a pen is not a rule-based system, it's merely an object. Same for an abacus.

I believe you have misunderstood what I said. My issue with Magic was not with the logic specifically. It is with the fact that it requires the player to initiate state transitions by specifically selecting actions. It would be permissible if the player simply functioned as a clock, but some of the state-transition logic is actually shifted over to the player. It certainly is better than the pen-and-paper scenario, but the rule system is incomplete. It still requires a couple of rules for the player to follow (just as pen/paper or an abacus does).

The author mentions this problem explicitly in the "Difficulties" section of the site. It is very important to note this. The hope is for the effects of some cards to allow for the necessary computational depth to capture all of computation without additional human action (besides, let's say, initialization). People often look at Turing machines and their components, but forget to think about the state transitions and the recursion involved (recursion really is the "meat" of computation).

I really thought that all human-initiated action had been removed from the Magic machine. Oops. It's been a while since I read the complete writeup. But it looks like it should be possible to fix this.

I think it would be possible given new cards, but I imagine it would be incredibly difficult (not to mention game-breaking). It would basically require cards that handle arbitrary branching (algorithmically determined). A set of cards would have to contain effects that form a general template for doing this based on cards in an implementation of memory. I personally think it is unlikely and would be tricky to pull off. I admittedly don't have a very concrete reason for this.

What I find really cool is that, while they might not add a set of cards allowing for arbitrary branching, they could introduce cards that allow for some bounded branching. This could impose resource bounds that are still scalable: polynomial time, space, exponential time, primitive recursion. There seems to be a lot more flexibility here than with some other mentioned "Turing-complete" systems.

Someone else here commented that you would also need to implement infinite loops. Assuming you don't have to fuel the machine with cards, this method could be used to stall a real game indefinitely! Apparently this is possible. O_O

I'm a bit unsure what you're trying to say here. Are you talking about having a situation where the Turing machine resolves without any physical signs of execution? I don't see what's wrong with the current method of using tokens as a tape and infinite triggers based on those tokens until a halt state is reached. It just needs a bit of refinement on the triggers.

Just like a theoretical Turing machine doesn't require a human to feed it with a strip of tape, a theoretical Magic-Turing machine doesn't require a human to feed it with tokens and damage markers.

And game-breaking infinite loops in Magic are a dime a dozen. Did you know that there's a combo that allows you to rip every card in your opponent's deck into confetti? (Unless they forfeit.)

Sorry if I entirely missed the point of what you were saying.

I didn't know there were that many infinite loops in Magic (I don't happen to play it). Could you link something describing the hilarious confetti combo?

I must not have been clear and concise. I apologize for this. I was discussing the scenario where we introduce a set of cards that allow for an arbitrary number of triggers without human involvement. We clearly need to implement a way of handling nested loops, branching, and recursive calls. So the set of cards facilitating this needs to either do this completely by itself or construct the behavior given the state of the playing field. I did not assume we can do what is described in the paper (the site) because I have no reason to think the new cards must function in a way that is compatible with their setup. This is why I am interested in the possibility of implicit and bounded recursion.

The fuel comment was just relevant to implementing an infinite loop in a real game. If our supposedly 100% autonomous system (with the new cards mentioned above) relied on an additional resource (such as burning through cards each "tick"), then this wouldn't work in real life, but would work if we generalized the size of the deck or available cards. This is not supposed to be a characteristic of Turing machines in general (or at all). It was just a precaution for attempting to translate an instance of the generalized case to real life.

My question is what you mean by 'human involvement'. For example: if there is a card that says to add a specific token to a specific place, with no choices, does everything that happens after the card is triggered count as human involvement? Or if the card says to shuffle the library, does the shuffling count as human involvement?

If your answer is no, then I think the current cards are already sufficient, it just needs some clever finagling to take the current design and remove all 'may' choices.

If your answer is yes then I think your standards are flawed.

>Could you link something describing the hilarious confetti combo?

Okay, looking into it it was less of a loop than I thought: http://www.mtgvault.com/doompie/decks/the-deck-ripper/

So I got you this instead: http://www.mtgvault.com/planestalker/decks/16-infinite-loops...

The answer is no. My standard mainly focuses on preventing the player from making choices after the "machine" executes. Choices here include instructions for the player from the spec about how to handle certain situations. As the author describes it, we would want the entire execution of the machine to act autonomously, without choices. I guess it may be permissible to let the player choose not to act, but the idea is to not sneak any of the machine's logic over to the player.

As for whether or not the current cards suffice, I can't say much (I don't care about Magic, tbh). It's just speculation as far as I can tell. I was not presuming that new cards are necessary. I just considered it for the sake of the hypothetical.

Interesting links. Thank you.

> Pen and paper is Turing Complete by the same reasoning (this is ridiculous).

You know, codifying pen-and-paper computations is how Alan Turing came up with his machine in the first place.

Can you elaborate on this? I'm not sure what your point is or how it relates to my comment.

Have you ever actually read On Computable Numbers, With an Application to the Entscheidungsproblem? The whole point was to isolate the necessary components of what computers (which was a human job title at the time) do.

I have, but thank you for bringing up something incredibly obvious and well known. Honestly, perhaps I shouldn't have feigned confusion in my reply. I just felt that it was courteous to have them reframe their question again before I concluded that they really did miss the point completely.

My point was that pen and paper, itself, is not a sufficient model of computation (it is not capable of any computation). Pen + paper + human is. The fact you mention is precisely why I chose to bring this situation up. If we throw humans into the mix, our candidate "model of computation" can become heavily reliant on the capabilities of humans. At that point, we need to explicitly say that we include human interaction in the model. By my standards, this is not a really fruitful statement, since humans are almost capable of general computation alone; they just need the memory. Analogously, pen and paper is just a work tape, and a work tape is not a system or automaton at all. I find it mind-boggling that I have to explain this to people, especially those who appear so confident that they understand the theory of computation.

Wouldn't that apply to a normal computer though? Nothing has infinite memory.

When we talk about models of computation, we typically look at ones where some sort of resource is unbounded. In the case of generalized Minecraft, both time and block space are unbounded. The hope is to translate that block-space resource and use it as memory. I was bringing up the point that this may not be possible, since the mechanics do not allow enough access. This is interesting because a first approach to constructing scalable computer memory in Minecraft is to encode things with infinitely generable blocks. We have infinite sources of the necessary components for memory, but run into the barrier of operating things over arbitrary distances (communication). Even the case where you tile redstone-based memory cells is just as difficult, for the same reason. This is different from the case of a "normal computer", since there's no unbounded/scalable resource to consider.

I did not expect Magic: The Gathering

Well, it must depend on what cards are available. If it's Turing complete, you must be able to get infinite loops. You won't be able to do that just by playing land and summoning monsters, but if you have cards that say something like "put your graveyard at the bottom of your library", you could get an infinite loop. If there is some horrible card that says something like "take any valid card that you own and shuffle it into your library", you have something that goes beyond Turing complete and into oracle territory.

It is entirely possible to generate infinite loops in MTG. You can generate infinite mana, damage, health, etc., bring back cards from the graveyard, and so on. If you have the right hand and cards in play, these are easy to do. I remember playing at a tourney, and my opponent got off a combo that did unlimited damage and mana. I was very sad.

I used to have a Myr deck that you could combo a Palladium Myr (tap to give +2 mana), and two Myr Galvanizers (Tap and pay a mana to untap all other Myr) to get infinite Mana. Combo that with any number of cards (X mana to do X damage, or X mana to make target player draw X cards, etc), and you get a fun way to win a game (though hard to pull off).

Here's a link that explains the Turing-completeness of Magic the Gathering. http://www.toothycat.net/~hologram/Turing/index.html

Technically, "from outside of the game" just refers to your sideboard, and cards that have been removed from the game. But in a casual setting it's usually used for anything.

The official stance is that in sanctioned tournaments you only own the cards in your registered decklist (or put another way, only the cards in your registered decklist exist). And outside of sanctioned tournaments, you own all the cards you own.

The official stance screwed up those cards anyway. The card clearly states to remove death wish from the game. Logically you should be able to use another wish to get it back. But they retconned it to say 'exile', and 'exile' puts cards into a place that is still 'inside' the game.

So we're talking about c++ templates?

I don't think Turing-completeness is much of a barrier anymore. I could write a Turing machine in QBasic, or Java, or Minecraft.

Ok, I couldn't do it in Minecraft because I don't play, but I know that it is possible.

I could probably simulate a Turing machine on a TI-81 graphing calculator, depending on whether it fit in memory.

Saying something is Turing complete is kind of like asking "Could you write a Nintendo emulator for it?" Once you get beyond a certain level of computational power and memory, you can, even if it would be very slow.

> I could probably simulate a Turing machine on a TI-81 graphing calculator, depending on whether it fit in memory.

Well of course. A TI-81 is "a computer" in the layman's sense of the term. It has a general-purpose Zilog Z80 processor that can be programmed by the end user with a BASIC dialect, and (recently, with an exploit discovered in TI's software on it) with Z80 assembly.

Pointing out that a TI-81 is Turing equivalent is like pointing out that an iPhone is Turing equivalent. It was never pretending not to be.

The point of this is that these things were not intended to be computational systems at all, yet just through their complexity they ended up being Turing complete unintentionally.

> I could probably simulate a Turing machine on a TI-81 graphing calculator, depending on whether it fit in memory.

Depending on how you define 'Turing-equivalent' this is never a problem: Something is Turing-equivalent if and only if you always have enough memory for the current computation.

Note that you do not have infinite memory, merely sufficient memory, another pedantic distinction sometimes harped upon.

The above distinctions are frequently disregarded in actual conversations because they reduce a useful term to something which can only have application to the most theoretical parts of Computer Science.

Finally, a third distinction: Problems are Turing-complete. Machines are Turing-equivalent. Cite: http://scienceblogs.com/goodmath/2007/01/05/turing-equivalen...

Your definitions of Turing-complete and Turing-equivalent, as phrased, are incorrect or perhaps mistaken to some extent.

Arbitrarily scalable memory is just one necessary component for Turing-equivalence to a Turing machine. Two machines are Turing-equivalent when they can simulate each other. Note that this does not necessarily extend to general computation and can actually be a limited form of such.

Countably unbounded memory is not necessarily a requirement (e.g. some symbolic systems), but might as well be for the class of machines that you are thinking of. A lot of models of computation become Turing complete when they receive this benefit. You also need to keep in mind that some automata can have infinite memory but don't have the ability to take advantage of it. Straightforward examples include pushdown automata and time-bounded machines. Machines that do not have a logically complete algebra will also have this problem.
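Pushdown automata are a nice concrete case of "infinite memory you can't fully exploit": the stack below is unbounded, but LIFO-only access limits what you can recognize (a^n b^n works; a^n b^n c^n famously does not). This recognizer is my own toy sketch, not something from the thread:

```python
# PDA-style recognizer for the language a^n b^n (n >= 0).
# The stack may grow without bound, yet last-in-first-out access
# alone keeps the model well short of Turing completeness.

def accepts_anbn(s):
    stack, seen_b = [], False
    for ch in s:
        if ch == 'a':
            if seen_b:            # an 'a' after a 'b': wrong order
                return False
            stack.append('A')     # push one marker per 'a'
        elif ch == 'b':
            seen_b = True
            if not stack:         # more b's than a's
                return False
            stack.pop()           # match one 'a' off the stack
        else:
            return False          # alphabet is {a, b} only
    return not stack              # accept iff every 'a' was matched
```

Recognizing a^n b^n c^n would require revisiting the consumed counter, i.e. random access to memory, which is exactly the interface a bare PDA lacks.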

I completely agree with you about the "finite memory" objection people like to use. I find it incredibly annoying and not very productive. It is almost oblivious to the discussion going on, as those raising it are completely missing the point. It's insulting that they think others don't realize this. I'm glad someone else (you) here shares this feeling. :)

I believe you meant that problems can be R.E.-complete (semidecidable in a complete way; ex. Halting problem). The term, Turing Complete, is inapplicable in this situation. It describes properties of a formal system and/or model of computation (which I'm certain you knew).
