As an example, SQL (ignoring the crazy bits) is so amenable to query optimization precisely because it is not Turing complete.
* Little Big Planet 
* Scala's type system 
Konrad Zuse's Z3 computer belongs on the list https://en.wikipedia.org/wiki/Z3_(computer) , making it, accidentally, the first ever actually-constructed Turing-equivalent* computer. There may be some kind of lesson there. (Zuse evidently was aware of the possibility of Turing-complete computation, though: he designed the first high-level programming language https://en.wikipedia.org/wiki/Plankalk%C3%BCl before the end of the war.)
* (Or memory-limited finite approximation to a Turing-equivalent computer, like any other actual computer, of course.)
Here's an example of the first: I suspect that the in-game mechanics of a generalized version of Minecraft are not Turing complete. By generalized, I mean that you remove the chunk-loading and update limitations, giving you a countably infinite playing field. I am, of course, not considering the limitations (or benefits) of Java, the machine running it, or unintended behavior (bizarre bugs). Forgoing things like command blocks and bizarre uses of spawn eggs/entities, just consider legitimately obtainable resources (we can assess TC with these later).
To briefly sketch the idea: when you have a constant-sized machine (accessing arbitrarily sized input), the selected mechanics are too limited by construction to "grow" the equivalent of a Turing machine's work tape. A superficial reason for this is that the mechanics are coded to extend only locally (note that this is distinct from the "computation is local" result). For example, pushing with pistons only reaches out 11 blocks before it fails. You can chain multiple of these together, but only a constant number, and you still need a controllable Redstone signal to extend far enough to change their states. With a bit more work, you can show that mechanical self-propulsion using pistons is not possible (there's a giant thread on the Minecraft Forums about this), even in one direction. Three simple reasons for this: 1. you cannot push extended pistons; 2. you cannot push/pull/power your rearmost piston without "leaking" blocks; and 3. you cannot make a pushable clock. So your ability to shift around and access your tape/memory, or to move your Turing "head"/logic, goes out the window in this approach.
For block-based memory (to make use of the infinite "block space") in general, you not only have to move an arbitrary number of blocks to locations an arbitrary distance away, but you also have to autonomously generate and destroy an arbitrary number of blocks. Cobblestone, Ice, Pumpkins, and Melons all do this and are thankfully Redstone-distinguishable, so we at least have our endless supply of symbols for such a tape. There are additional requirements and engineering challenges, but they fall outside the purpose of this post.
Issues like these are not considered in almost all of the claims of Minecraft's Turing completeness that I have seen. They absolutely should be. Obviously there may be ways around them, but I hope my point is clear. Make sure the application of your "proofs" or constructions actually makes sense. Make sure they demonstrate how to handle both the memory and the logic (algebra) necessary. And for god's sake, make sure that your model is actually an automaton (I'm looking at you, Magic and LBP).
On a side note, I did start compiling a paper on the computational complexity of Minecraft a while ago, but left it unfinished. It goes through various generalizations, starting from the most basic (just blocks, redstone, etc.) and adding components like randomness, command blocks, and entities, plus some structural assumptions such as superflat worlds and tiled machines (this works for Conway's Game of Life). I lost motivation because I did not believe people would care enough to make it worth my time. Even if I were to pick it up again, the game has changed quite a bit since 1.7 came out, so my results might be outdated (the new command block functionality changes things greatly). However, if others show interest, I could perhaps start working on it again.
The author mentions this problem explicitly in the "Difficulties" section of the site. It is very important to note this. The hope is for the effects of some cards to allow for the necessary computational depth to capture all of computation without additional human action (besides, let's say, initialization). People often look at Turing machines and their components, but they forget to think about state transitions and the recursion involved (recursion really is the "meat" of computation).
What I find really cool is that, while they might not add a set of cards allowing for arbitrary branching, they could introduce cards that allow for some bounded branching. This could impose resource bounds that are still scalable: polynomial time, space, exponential time, primitive recursion. There seems to be a lot more flexibility here than with some of the other mentioned "Turing-complete" systems.
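To make the bounded-vs.-arbitrary branching distinction concrete, here is an illustrative sketch in Python (nothing to do with actual Magic mechanics): loops whose iteration count is fixed before they start compute only primitive recursive functions and always halt, while a loop whose exit depends on computed state is what pushes a system to full Turing completeness.

```python
# "Bounded branching" in the LOOP-language sense: the iteration count
# is fixed before the loop starts, so every such program halts.
def bounded_add(x, n):
    # LOOP n times: increment x. The bound n cannot change mid-loop.
    for _ in range(n):
        x += 1
    return x

def bounded_mul(x, n):
    # Nesting bounded loops keeps us within primitive recursion.
    acc = 0
    for _ in range(n):
        acc = bounded_add(acc, x)
    return acc

# Unbounded branching: a while-loop whose exit condition depends on
# computed state, with no a-priori bound on iterations.
def collatz_steps(x):
    steps = 0
    while x != 1:
        x = 3 * x + 1 if x % 2 else x // 2
        steps += 1
    return steps

print(bounded_mul(6, 7))   # 42
print(collatz_steps(6))    # 8 (6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1)
```

A system offering only the first kind of loop can still scale to interesting complexity classes without ever becoming Turing complete.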
Someone else here commented that you would also need to implement infinite loops. Assuming you don't have to fuel the machine with cards, this method could be used to stall a real game indefinitely! Apparently this is possible. O_O
Just like a theoretical Turing machine doesn't require a human to feed it with a strip of tape, a theoretical Magic-Turing machine doesn't require a human to feed it with tokens and damage markers.
And game-breaking infinite loops in Magic are a dime a dozen. Did you know that there's a combo that allows you to rip every card in your opponent's deck into confetti? (Unless they forfeit.)
Sorry if I entirely missed the point of what you were saying.
I must not have been clear and concise. I apologize for this.
I was discussing the scenario where we introduce a set of cards that allow for an arbitrary number of triggers without human involvement. We clearly need to implement a way of handling nested loops, branching, and recursive calls. So the set of cards facilitating this needs to either do this completely by itself or construct the behavior given the state of the playing field. I did not assume we can do what is described in the paper (the site), because I have no reason to think the new cards must function in a way that is compatible with their setup. This is why I am interested in the possibility of implicit and bounded recursion.
The fuel comment was just relevant to implementing an infinite loop in a real game. If our supposed 100% autonomous system (with the new cards as mentioned above) relied on an additional resource (such as burning through cards each "tick"), then this wouldn't work in real life but would work if we generalized the size of the deck or available cards. This is not supposed to be a characteristic of Turing Machines in general (or at all). It was just a precaution for attempting to translate an instance of the generalized case to real life.
If your answer is no, then I think the current cards are already sufficient; it just needs some clever finagling to take the current design and remove all 'may' choices.
If your answer is yes then I think your standards are flawed.
>Could you link something describing the hilarious confetti combo?
Okay, looking into it, it was less of a loop than I thought: http://www.mtgvault.com/doompie/decks/the-deck-ripper/
So I got you this instead: http://www.mtgvault.com/planestalker/decks/16-infinite-loops...
As for whether or not the current cards suffice, I can't say much (I don't care about Magic, tbh). It's just speculation as far as I can tell. I was not presuming that new cards are necessary; I just considered them for the sake of the hypothetical.
Interesting links. Thank you.
You know, codifying pen-and-paper computations is how Alan Turing came up with his machine in the first place.
My point was that pen and paper, by itself, is not a sufficient model of computation (it is not capable of any computation). Pen + paper + human is. The fact you mention is precisely why I chose to bring this situation up. If we throw humans into the mix, our candidate "model of computation" can become heavily reliant on the capabilities of humans. At that point, we need to explicitly say that we include human interaction in the model. By my standards, this is not a really fruitful statement, since humans are almost capable of general computation alone; they just need the memory. Analogously, pen and paper is just a work tape, and a work tape is not a system or automaton at all. I find it mind-boggling that I have to explain this to people, especially those who appear so confident that they understand the theory of computation.
Ok, I couldn't do it in Minecraft because I don't play, but I know that it is possible to do in Minecraft.
I could probably simulate a Turing machine on a TI-81 graphing calculator, depending on whether it fit in memory.
Saying something is Turing Complete is kind of like saying "Could you write a Nintendo emulator" for it. Once you get beyond a certain level of computational power and memory, you can, even if it would be very slow.
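As a sketch of that point, here is a minimal Turing machine simulator in Python (the transition-table format is my own invention, purely for illustration). Note that the tape grows on demand, so memory is merely sufficient for the computation at hand, never actually infinite:

```python
# Minimal Turing machine simulator. rules maps (state, symbol) to
# (symbol_to_write, move_direction, next_state). The tape is a sparse
# dict that grows on demand in both directions.
def run_tm(rules, tape, state="q0", head=0, blank="_", halt="halt"):
    tape = dict(enumerate(tape))
    while state != halt:
        sym = tape.get(head, blank)
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Example program: flip every bit, then halt at the first blank.
rules = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_tm(rules, "1011"))  # 0100_
```

Anything that can host this loop-plus-table pattern and enough tape (a TI-81, an NES, a spreadsheet) clears the bar, however slowly.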
Well, of course. A TI-81 is "a computer" in the layman's sense of the term. It has a general-purpose Zilog Z80 processor that can be programmed by the end user in a BASIC dialect, and (recently, with an exploit discovered in TI's software on it) in Z80 assembly.
Pointing out that a TI-81 is Turing equivalent is like pointing out that an iPhone is Turing equivalent. It was never pretending not to be.
Depending on how you define 'Turing-equivalent' this is never a problem: Something is Turing-equivalent if and only if you always have enough memory for the current computation.
Note that you do not have infinite memory, merely sufficient memory, another pedantic distinction sometimes harped upon.
The above distinctions are frequently disregarded in actual conversations because they reduce a useful term to something which can only have application to the most theoretical parts of Computer Science.
Finally, a third distinction: Problems are Turing-complete. Machines are Turing-equivalent. Cite: http://scienceblogs.com/goodmath/2007/01/05/turing-equivalen...
Arbitrarily scalable memory is just one necessary component for Turing equivalence to a Turing machine. Two machines are Turing equivalent when they can simulate each other. Note that this does not necessarily extend to general computation and can actually be a limited form of it.
Countably unbounded memory is not necessarily a requirement (ex. some symbolic systems), but might as well be for the class of machines that you are thinking of. A lot of models of computation become Turing-complete when they receive this benefit. You also need to keep in mind that some automata can have infinite memory, but don't have the ability to take advantage of it. Straightforward examples include pushdown automata and time bounded machines. Machines that do not have a logically complete algebra will also have this problem.
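As a concrete sketch of the pushdown example: the stack below is unbounded, but the finite control can only access it last-in-first-out, which is why a PDA recognizes only context-free languages no matter how much memory it has. This toy accepts a^n b^n:

```python
# A pushdown automaton for { a^n b^n : n >= 0 }. The stack may grow
# without bound, yet LIFO access alone cannot exploit that memory for
# general computation.
def pda_anbn(s):
    stack = []
    state = "A"                  # finite control: reading a's, then b's
    for ch in s:
        if state == "A" and ch == "a":
            stack.append("X")    # push one marker per 'a'
        elif ch == "b" and stack:
            state = "B"
            stack.pop()          # pop one marker per 'b'
        else:
            return False         # e.g. 'a' after a 'b', or too many b's
    return not stack             # accept iff every 'a' was matched

print(pda_anbn("aaabbb"))  # True
print(pda_anbn("aabbb"))   # False
```

Give the same finite control two stacks (or random access to the memory) and it becomes Turing equivalent; the bottleneck is the access discipline, not the amount of memory.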
I completely agree with you about the "finite memory" objection people like to use. I find it incredibly annoying and not very productive. It almost ignores the discussion going on, as those raising it are completely missing the point. It's insulting that they think others don't realize this. I'm glad someone else (you) here shares this feeling. :)
I believe you meant that problems can be R.E.-complete (semidecidable in a complete way; ex. the Halting problem). The term "Turing complete" is inapplicable in this situation; it describes properties of a formal system and/or model of computation (which I'm certain you knew).