As the original author of this, I was going to put it into a much more presentable state before showing it off here.
For those confused, here's a proper explanation. No real-world thing can actually be Turing complete (able to express basically any computation that we might want to perform of any size). That's because there are finitely many atoms in the universe, so we can only construct machines of finite size.
It's well known that Rule 110 (Google it) is Turing complete.
What I've done is implement Rule 110 in HTML and CSS. Since CSS can't really manipulate state on its own, some user interaction is required to "drive" it. In the one that bgruber linked to, it's clicking.
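For anyone who hasn't seen Rule 110 before, here's the update rule it implements, sketched in a few lines of Python (my own illustration, not the CSS encoding itself): each cell's next value depends on its three-cell neighborhood, and the 8 possible neighborhoods map onto the bits of the number 110.

```python
# Minimal Rule 110 sketch (illustrative; not the author's CSS encoding).
# The next value of each cell is looked up from the bits of 110 (0b01101110),
# indexed by the (left, center, right) neighborhood in the previous row.

RULE = 110

def step(cells):
    """Compute one generation, treating out-of-range neighbors as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        mid = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (mid << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

# A single live cell on the right edge grows the familiar Rule 110 triangles.
row = [0] * 15 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The whole point of the HTML+CSS version is that it evaluates this same lookup table, just with selectors and user-driven state instead of a loop.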
Here's a bigger version that doesn't require the user to know where to put his mouse. The tab-space combo is just as legitimate as requiring that you plug a computer into a wall to power it in order to run a Java program. http://elilies.com/rule110-full.html
Also, I haven't tested it on anything besides the latest Chrome on Mac.
It seems like that's not the case at all.
So no, as others have said, you are completely off base as to what the significance of being Turing complete is. What Turing complete means is that given an infinite amount of memory and time, a language can simulate anything that is computable. (I won't get into what computable means, for that you will need to read and reread the relevant Wikipedia articles, along with a good book on Theory of Computation and probably a good one on Abstract Algebra.)
But the real significance of this is that you cannot write a program that can tell you meaningful facts about the behavior of an arbitrary Turing machine. For any program you write to determine the behavior of HTML + CSS, I could write a piece of HTML + CSS that your program will be unable to say anything about.
However, this particular proof actually only shows HTML + CSS is equivalent to a Linear Bounded Automaton, which unlike a Turing Machine can be predicted. And strictly speaking, even if you have a Turing-machine-equivalent language like C++, you can theoretically make some predictions about programs provided you can bound their memory; e.g., on a machine with less than 4 GB of memory, C++ is equivalent to a Linear Bounded Automaton. Of course, actually writing a second program to do this may be impossible before the heat death of the universe, because Linear Bounded Automata are still very powerful computing objects.
Sorry, but this is false, and for similar reasons it is not reasonably considered Turing complete. The encoding presented here produces, for any particular piece of HTML + CSS, a finite number of cells of an automaton (and not only that, the number of cells is linear in code size). That means that deciding any property about a given piece of code amounts to a search over a finite space.
This is very different from true Turing completeness where a small set of rules might require unbounded space and computation to decide, and in general will not even be decidable. Now, the author of this code says in a parent post that we can't build a real Turing machine. That's true, but doesn't mean that we can't specify a real Turing complete process; it just means that for some executions we have to give up on running them.
In some cases, it makes sense to consider languages that are not actually Turing complete as approximately Turing complete. For example, if you take the C programming language with a 64-bit memory space, then again there's technically only a finite space of possible program states. But in that case, a 100-line C program can generate 2^64 possible program states, so Turing completeness is a reasonable approximation. If you had to allocate every bit of memory explicitly with a unique line of C code, it would no longer be very reasonable to consider C approximately Turing complete.
Change "simulate" to "compute" and I agree fully. I don't think it's necessary to introduce the concept of simulation. If a language (or process, mechanism or whatever) is Turing Complete, then you can use it to express all things computable. Given sufficient resources, that expression could actually do the computation.
edit: I recalled in a sibling post why simulation comes up, and I think it's a source of confusion. The easiest way to prove a formalized process is Turing complete is to simulate a Turing machine. Since a Turing machine can compute all that is computable, and your process can simulate a Turing machine, your process can compute all that is computable. Hence, it is Turing complete.
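To make the "simulate a Turing machine" idea concrete, here's a toy one-tape simulator in Python (my own sketch, with a made-up rule format; any real proof would of course target a universal machine):

```python
# Toy one-tape Turing machine simulator (illustrative sketch only).
# rules maps (state, symbol) -> (new_state, written_symbol, head_move).
from collections import defaultdict

def run_tm(rules, accept, tape, state="q0", max_steps=10_000):
    """Run the machine on `tape`; "_" is the blank symbol."""
    cells = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, cells[head])
        if key not in rules:
            return False  # no applicable rule: reject
        state, cells[head], move = rules[key]
        head += move
    raise RuntimeError("step budget exhausted")

# Tiny example machine: accept exactly the strings made only of 1s.
rules = {
    ("q0", "1"): ("q0", "1", +1),   # scan right over 1s
    ("q0", "_"): ("acc", "_", +1),  # hit the blank: accept
}
print(run_tm(rules, "acc", "111"))  # → True
print(run_tm(rules, "acc", "101"))  # → False
```

If your process can encode the rule table, the tape, and the head movement of an arbitrary machine like this one, the argument in the comment above goes through.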
Aren't practical computers simply Finite State Machines?
There is at least one man out there who isn't:
I doubt I'll be drinking that koolaid to be honest.
So mathematically a laptop is not Turing Complete, but for all practical purposes, it is.
For example, SKI combinators are Turing-equivalent. Regular expressions aren't. The lambda-calculus is. Context-independent grammars probably are. Most data description languages are not.
This is worthy of an HN post because a style-description language would not normally be expected to be Turing-equivalent.
When it became widely known that the C++ template system was Turing-equivalent, it was a shock to many. This meant that the C++ compiler itself could be exploited to compute arbitrary things such as factorials, or shortest paths in graphs using A*.
Actually, as the SKI system demonstrates, the threshold is quite low. The core needed components are just "constant" or "if" (they are often equivalent), and "repeat" (equivalent to both "iterate" and "recurse").
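Just to show how low the threshold is, here are the three SKI combinators written as curried Python functions (a sketch of mine; real SKI is a term-rewriting system, but the functional reading is enough to play with):

```python
# SKI combinators as curried Python lambdas (illustrative sketch).
I = lambda x: x                               # I x     = x
K = lambda x: lambda y: x                     # K x y   = x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # S x y z = x z (y z)

# Classic result: S K K behaves exactly like I, so I is redundant.
print(S(K)(K)(42))  # → 42

# "if" for free: K selects its first argument, K I selects its second,
# so they serve as the booleans true and false.
TRUE, FALSE = K, K(I)
print(TRUE("then")("else"))   # → then
print(FALSE("then")("else"))  # → else
```

The "repeat" half comes from self-application (terms like S I I applied to themselves reduce forever), which is exactly the part Python's eager evaluation makes awkward to demo, so I've left it out.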
There is, however, a correspondence between recognizing things and computing functions. If we encode a function as taking a bit string as input and generating a bit string as output, we can turn it into a series of recognition problems as follows: "given S as input, recognize whether bit i of f(S) is 1".
Given all these recognizers, we can compute the function, and given a method to compute the function, we can implement the recognizers.
That is, you can reformulate the question "is f computable" to a series of language recognition problems.
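Here's that reformulation spelled out in Python for a toy f (my own example, with string reversal standing in for an arbitrary computable function):

```python
# Sketch of the recognizer <-> function correspondence described above.

def f(bits):
    """Example function on bit strings: reverse the string."""
    return bits[::-1]

def make_recognizer(i):
    """Recognizer for the language { S : bit i of f(S) is 1 }."""
    return lambda s: len(f(s)) > i and f(s)[i] == "1"

def compute_via_recognizers(s):
    """Rebuild f(s) by asking each recognizer one yes/no question.

    For this example the output length equals the input length; in
    general you'd also need a recognizer for "is f(S) longer than i".
    """
    n = len(s)
    return "".join("1" if make_recognizer(i)(s) else "0" for i in range(n))

print(compute_via_recognizers("1101"))  # → 1011
```

So computing f and answering all the per-bit membership questions really are interchangeable, which is why computability questions can be phrased either way.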
No, they are not. Recursively enumerable grammars, however, are. A context-free grammar can't be Turing complete because its language can be recognized by a mere PDA.
Wouldn't it? XSLT, the only other style-description language in anything like common use on the web, has been Turing-complete for a while now (possibly since it started existing; I'm not familiar with the history)...
XSL-FO is the formatting part of XSL (i.e. the bit most like CSS) - I'd be pretty surprised if this was shown to be Turing-equivalent.
HTML and CSS are much more "static" in flavor (hence the OP's use of the term "descriptive") and so it might be more unexpected that they can perform universal computation.
What you mean to say is that a laptop is not an actual Turing machine, but for all intents and purposes, it is. On that point, I agree.
and now apparently HTML + CSS
It says nothing about interactivity, which is a function of ability to (directly or indirectly) access I/O hardware.
That version works much better than the one I tried on github! It's also much cooler.
I believe Petzold said this concept was controlled repetition. Conditionals and loops/jumps/recursion. This example, while nifty, doesn't show that HTML+CSS is Turing complete, because the user still has to provide the looping.
Turing completeness is not hand-wavy, or circular.
Or look up SKI-calculus (http://en.wikipedia.org/wiki/SKI_calculus), if you want to be gob-smacked.
That's like saying that my computer isn't a Turing machine (modulo finiteness of memory) because I need to plug it into a socket to run it. Turing machines are a very abstract notion of computation (and one of the most general ones we know of), and a lot of things can be used to simulate one. Turns out HTML+CSS3 is yet another such thing.
I must admit that since HTML+CSS3 requires clicks (at least in this implementation), I don't consider it to be really proven Turing complete.
On a related note, Bell's Theorem (along with a few experimental results) demonstrates that no theory of (local) hidden variables can account for quantum theory. This means that the limit is not just on our ability to measure the information in an atom. It literally doesn't exist for us to measure. Quantum mechanics is confusing =P
Specifically, the information measured in binary bits is bounded by the surface area divided by four. I'm not 100% sure what the unit of surface area is, but I believe it's Planck units.
If space-time does turn out to be physically discrete at the Planck length, and the limits of our measurements actually reflect the universe's true discontinuous nature, then this would imply a limit to how much state you could push into an atom.
Any real physicists please feel free to shoot me down in flames here. This is just my (plainly) limited understanding of the situation.
Perhaps the confusion comes from the fact that the easiest way to prove a formalized process is Turing complete is to express a simulation of a Turing machine.
Seriously, what the fuck? This is just way too exaggerated.