
Is CSS Turing Complete? - cordite
http://stackoverflow.com/questions/2497146/is-css-turing-complete
======
bilalq
One of the answers towards the bottom linked to an incredibly interesting read
on programming using nothing but Procs[1]. I found it fascinating, and it even
goes into implementing FizzBuzz under those constraints.

[1]: [http://codon.com/programming-with-nothing](http://codon.com/programming-with-nothing)
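
The core trick from the linked post can be sketched with Python lambdas instead of Ruby procs (a minimal, hypothetical illustration of Church numerals; the names `ZERO`, `SUCC`, `ADD`, and `to_int` are just labels for this sketch, not from the post):

```python
# Church numerals: a number n is "apply f, n times".
ZERO = lambda f: lambda x: x                          # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # 5
```

Everything here is closures and application, which is the sense in which the post programs "with nothing."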

~~~
timtadh
That blog post is basically explaining the untyped lambda calculus and Church
numerals with Ruby's lambda syntax. If you liked that, you may be interested
in the real thing. The best introduction I know is a really great book on type
theory called "Types and Programming Languages."[1] It is great because, after
the lambda calculus, the author shows how to build really useful programming
languages and type systems using the lambda calculus formalism. Definitely a
must-read if you are interested in understanding the underpinnings of modern
type systems.

[1]
[http://www.cis.upenn.edu/~bcpierce/tapl/](http://www.cis.upenn.edu/~bcpierce/tapl/)

~~~
acjohnson55
I read that book years ago in a Theory of Programming Languages course. I
recall learning a lot from the process, but I'm probably due for a refresher,
now that I'm building more things in Scala and trying to learn me a Haskell
for great good.

------
andrewguenther
I'm coining it now, Guenther's law, corollary to Atwood's law:

Anything that can be implemented in CSS, shouldn't be.

~~~
theandrewbailey
I really hope that's sarcasm. Otherwise we will have people ripping out CSS,
like how some people don't optimize because premature optimization is evil. I
also don't want to go back to font tags.

~~~
andrewguenther
Did we even read the same post?

I'm joking that people shouldn't write Turing complete code in CSS. Add style
properties for days, more power to you, but if someone posts an implementation
of Git in CSS next week I will promptly jump off a bridge, laughing all the
way down.

~~~
baddox
Your phrasing of the proposed law isn't quite right. According to it, red text
should not be implemented in CSS since it can be implemented in CSS.

~~~
theandrewbailey
Bingo!

------
Grue3
The easiest way to prove that CSS is _not_ Turing complete is to demonstrate
that every CSS "program" terminates, and therefore the halting problem is
solvable for CSS. Indeed, a proper CSS engine will process a set of CSS rules
in finite time, or display an error. Therefore CSS cannot be Turing complete.

~~~
muyuu
Where's this proof?

------
okonomiyaki3000
More importantly, can it be expanded to the point that it is able to read mail
and, if not, when will it be replaced with something that can?

------
catkin
This is called a Turing tarpit
([https://en.wikipedia.org/wiki/Turing_tar_pit](https://en.wikipedia.org/wiki/Turing_tar_pit)):
a system that is technically Turing-complete but in practice too awkward to do
anything useful with.

------
rectangletangle
So who's going to be the first to write a full-blown web framework in CSS? I
mean, who really wants the _unnecessary_ bloat of a dependency like JavaScript?

------
cheepin
Can someone explain how the demo proves that it is Turing Complete? I haven't
taken automata yet.

~~~
choudharism
It doesn't. As someone pointed out, Tic Tac Toe can be implemented with finite
state machines [1], which have less computational power than Turing machines
[2].

[1] [https://en.wikipedia.org/wiki/Finite-state_machine](https://en.wikipedia.org/wiki/Finite-state_machine)
[2] [https://en.wikipedia.org/wiki/Turing_machine](https://en.wikipedia.org/wiki/Turing_machine)
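
To make the distinction concrete, here is a minimal Python sketch of a finite state machine (a hypothetical example, not from the thread): a finite transition table and a current state, with no auxiliary memory of any kind. This one accepts binary strings containing an even number of 1s.

```python
# An FSM is nothing but finitely many states and a transition table.
transitions = {('even', '0'): 'even', ('even', '1'): 'odd',
               ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}

def fsm_accepts(s, start='even', accepting=('even',)):
    """Run the machine over the input; no stack, no tape, just the state."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

print(fsm_accepts('1010'))  # True: two 1s
```

Anything with a fixed, finite game tree (like Tic Tac Toe) fits in this model, because the whole game can be baked into the state set.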

~~~
Buge
Isn't the only difference between a finite-state machine and a Turing machine
whether the memory is finite or not? Then you could argue that nothing has
ever been made that is Turing complete because everything can be simulated by
a finite-state machine because of memory limitations.

~~~
edmccard
There is a difference, though, between a finite state machine and a Turing
machine with a finite amount of memory. An FSM cannot recognize context-free
languages, but a Turing machine can, even when it's implemented on a computer
without infinite memory.

(Actually, you only need a pushdown automaton for context-free languages; I
forget what the next step up in the hierarchy is, the one that does require
Turing completeness.)

~~~
mikeash
A Turing machine without infinite memory can't recognize context-free
languages. Take the standard example of recognizing the language of balanced
parentheses. With finite memory, there's a limit to how many parentheses you
can keep track of. That limit is going to be _gigantic_ for any reasonable
amount of memory, but it's still a limit. The language of balanced parentheses
with a depth limit is no longer a context-free language, but a regular
language, and as such can be recognized by an FSM.
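
The depth-limited recognizer described above can be sketched in Python (a hypothetical illustration; the capped counter stands in for the FSM's states, so this is equivalent to an FSM with `limit + 2` states, counting a reject sink):

```python
# Depth-limited "balanced parens" recognizer. Because the counter is capped
# at `limit`, the whole thing is equivalent to an FSM whose states are the
# depths 0..limit plus one reject sink.
def make_paren_fsm(limit):
    def accepts(s):
        depth = 0                  # the FSM's current state
        for ch in s:
            if ch == '(':
                depth += 1
                if depth > limit:  # past the deepest tracked state:
                    return False   # fall into the reject sink
            elif ch == ')':
                if depth == 0:     # close paren with no matching open
                    return False
                depth -= 1
        return depth == 0          # accept only if back at depth 0
    return accepts
```

For any fixed `limit` the recognized language is regular, exactly as the comment says; only the unbounded version is context-free and beyond an FSM.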

It's trivial to demonstrate that a Turing machine with finite memory is
equivalent to a FSM. Enumerate all possible memory contents. This number is,
again, large for any reasonable amount of memory, but it is finite. For each
possible memory content, enumerate the Turing machine states. Each memory
state plus machine state becomes one FSM state. The Turing machine defines a
transition from one state to another plus an action on memory, which becomes
an FSM transition to the state that encodes both the new machine state and the
new memory state.
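
The enumeration argument above can be sketched as code (a hypothetical construction; `tm_to_fsm` and the shape of `delta` are assumptions for this sketch, not a standard API):

```python
from itertools import product

# Turn a Turing machine restricted to a finite tape into an explicit FSM:
# each FSM state is a (machine state, tape contents, head position) triple.
def tm_to_fsm(states, alphabet, delta, tape_len):
    """delta maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 or +1; the head is clamped to the tape ends."""
    fsm = {}
    for q in states:
        for tape in product(alphabet, repeat=tape_len):
            for head in range(tape_len):
                if (q, tape[head]) in delta:
                    nq, ns, mv = delta[(q, tape[head])]
                    ntape = tape[:head] + (ns,) + tape[head + 1:]
                    nhead = min(max(head + mv, 0), tape_len - 1)
                    fsm[(q, tape, head)] = (nq, ntape, nhead)
    return fsm
```

Even this toy version shows the blow-up: the FSM needs one state per combination, so the state count grows as |states| x |alphabet|^tape_len x tape_len.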

~~~
timtadh
Not at all true: the language of balanced parens is not regular and cannot be
matched by an FSM no matter how much memory you have.[1] It can be matched by
a pushdown automaton (PDA).

The primary difference between a PDA and an FSM is that a PDA has a stack
which acts as memory. Thus, a PDA can remember how many parens it has seen.
Every time it sees an open paren it pushes it onto the stack; when it sees a
close paren, it pops. When the stack is empty and the input has been consumed,
it matches.
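
The push/pop procedure just described can be sketched directly (a minimal Python illustration; the explicit list plays the role of the PDA's unbounded stack):

```python
# PDA-style recognizer for balanced parens: the stack is the only memory,
# and nothing in the formulation bounds its depth.
def pda_accepts(s):
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)  # push on every open paren
        elif ch == ')':
            if not stack:
                return False  # close paren with an empty stack: reject
            stack.pop()       # pop on every close paren
    return not stack          # accept iff the stack is empty at the end
```

(For this one-symbol alphabet an integer counter would suffice, but the explicit stack mirrors the PDA formulation in the comment.)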

The difference between a PDA and a Turing machine is that the stack becomes a
read/writable tape which can move in either direction. Notice how in both
formulations there is no limit on the memory (the PDA stack can be infinite,
as can the Turing machine tape). In contrast, a finite state machine doesn't
have any memory: the "finite" refers to the number of states, not the amount
of memory. Indeed, Turing machines are traditionally formulated with finitely
many states as well.

Thus, an FSM can never match the language of balanced parens, but a Turing
machine can. It will either match or consume all of its memory, if memory is
limited. Memory limitations are not a statement about computational power but
rather a statement about feasibility.

[https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...](https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages#Use_of_lemma)

EDIT: Your second paragraph is of course correct: you could encode every
possible stack as an FSM if there is a bound on the stack. HOWEVER, if there
is a limit on memory but it is essentially unknown (you know memory is finite
but you don't know exactly what the limit is), a PDA or a Turing machine will
of course be able to match things your FSM cannot (because they can take
advantage of memory your FSM cannot assume it has in its states).

Memory limitations are in some sense an implementation detail. We could design
a computer to pause while we buy more RAM. Then it could use as much memory
(in theory) as the human race could produce. That sounds like infinite for
some definition of the word.

~~~
mikeash
I'm deeply puzzled as to how you can say that my second paragraph is "of
course correct" yet continue to argue that there's a difference.

No, the language of balanced parens is not regular and cannot be matched by an
FSM no matter how much memory you have. That's completely true. However, it
also cannot be matched by a Turing machine with finite memory, no matter how
much memory you have. That is because, as I showed and as you agree is "of
course correct", the two machines are equivalent in their capabilities.

~~~
timtadh
Yes, I apologize for the parent comment. I realized I had misread your comment
after I had posted and did the edit to try and rectify the situation. I left
the original because I thought it might clarify some of the differences
between the machines for people who don't know anything about the subject.

I guess I felt that the memory limitation was a misleading way of discussing
the machines. Turing machines and PDAs are usually discussed with infinite
memory. In practice the input really determines how much memory the machine
will use.

Furthermore, I believe the encoding scheme you suggest will use far more space
than the equivalent Turing machine or PDA it encodes, because for every state
in the machine you have to encode every possible memory configuration for that
state. That means an exponential explosion in the number of states. This gets
to the heart of the matter for me: when memory is bounded, using the finite
version of a PDA will let you match a deeper nesting of parens than an FSM,
because it uses memory more efficiently. You have to put the states of the
machine somewhere, and that place is either memory or hardware, which are both
limited.

~~~
mikeash
You're absolutely right that there's a vast _practical_ difference between the
two things. This is, of course, why our actual real-world computers are
modeled on Turing machines with finite memory, not on pure finite state
machines. To model a machine with 1GB of RAM, an FSM would need on the order
of 2^(8 x 1024^3) states (one per possible memory configuration), multiplied
by however many states are needed to model the CPU.

However, the difference is purely in practical terms when it comes time to
actually build one. In the theoretical world they are equally capable and can
recognize the same languages.

