It seems like that's not the case at all.
So no, as others have said, you are completely off base as to what the significance of being Turing complete is. What Turing complete means is that given an infinite amount of memory and time, a language can simulate anything that is computable. (I won't get into what computable means; for that you will need to read and reread the relevant Wikipedia articles, along with a good book on Theory of Computation and probably a good one on Abstract Algebra.)
But the real significance of this is that you cannot write a program that decides meaningful facts about the behavior of an arbitrary Turing machine (this is Rice's theorem). For any program you write to determine the behavior of HTML + CSS, I could write a piece of HTML + CSS whose behavior your program will be unable to say anything about.
However, this particular proof actually only shows HTML + CSS is equivalent to a Linear Bounded Automaton, which, unlike a Turing machine, can be predicted. And strictly speaking, even if you have a Turing-machine-equivalent language like C++, you can theoretically make some predictions about programs provided you can bound their memory; i.e., on a machine with less than 4 GB of memory, C++ is equivalent to a Linear Bounded Automaton. Now, of course, actually writing a second program to do this may be impossible to finish before the heat death of the universe, because Linear Bounded Automata are still very powerful computing objects.
Sorry, but this is false, and for similar reasons it is not reasonably considered Turing complete. The encoding presented here produces, for any particular piece of HTML + CSS, a finite number of cells of an automaton (and not only that, the number of cells is linear in the code size). That means that deciding any property of a given piece of code amounts to a search over a finite space.
This is very different from true Turing completeness, where a small set of rules might require unbounded space and computation to decide, and in general will not even be decidable. Now, the author of this code says in a parent post that we can't build a real Turing machine. That's true, but it doesn't mean that we can't specify a truly Turing-complete process; it just means that for some executions we have to give up on running them.
In some cases, it makes sense to consider languages that are not actually Turing complete as approximately Turing complete. For example, if you take the C programming language with a 64-bit memory space, then again there's technically only a finite space of possible program states. But in that case, a 100-line C program can generate on the order of 2^64 possible program states, so Turing completeness is a reasonable approximation. If you had to allocate every bit of memory explicitly with a unique line of C code, it would no longer be very reasonable to consider C approximately Turing complete.
Change "simulate" to "compute" and I agree fully. I don't think it's necessary to introduce the concept of simulation. If a language (or process, mechanism or whatever) is Turing Complete, then you can use it to express all things computable. Given sufficient resources, that expression could actually do the computation.
edit: I recalled in a sibling post why simulation comes up, and I think it's a source of confusion. The easiest way to prove a formalized process is Turing complete is to simulate a Turing machine. Since a Turing machine can compute all that is computable, and your process can simulate a Turing machine, your process can compute all that is computable. Hence, it is Turing complete.
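The proof route described above is easy to make concrete: write a simulator, then encode a specific machine in your formalism. Here is a minimal sketch in C++ (the transition-table encoding, the `_` blank symbol, and the example machine in the usage note below are my own illustrative assumptions, not anything from the thread):

```cpp
#include <map>
#include <string>
#include <tuple>
#include <utility>

// A minimal single-tape Turing machine simulator (sketch).
// The transition table maps (state, symbol) -> (next state, symbol to write, head move).
struct TM {
    std::map<std::pair<std::string, char>,
             std::tuple<std::string, char, int>> delta;
    std::string accept;  // accepting state

    // Run on an input tape from a start state; returns the final tape.
    // The tape grows on demand -- this is where the "given infinite
    // memory" caveat of Turing completeness lives.
    std::string run(std::string tape, std::string state) {
        long pos = 0;
        while (state != accept) {
            if (pos < 0) { tape.insert(tape.begin(), '_'); pos = 0; }
            if (pos >= (long)tape.size()) tape.push_back('_');
            auto it = delta.find({state, tape[pos]});
            if (it == delta.end()) break;  // halt: no transition defined
            auto [next, write, move] = it->second;
            tape[pos] = write;
            state = next;
            pos += move;
        }
        return tape;
    }
};
```

For instance, a three-rule machine that flips every bit until it hits a blank turns the tape `0110` into `1001_` — and any process that can simulate `run` for an arbitrary table can compute anything computable.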
Aren't practical computers simply Finite State Machines?
There is at least one man out there who isn't:
I doubt I'll be drinking that koolaid to be honest.
So mathematically a laptop is not Turing Complete, but for all practical purposes, it is.
For example, SKI combinators are Turing-equivalent. Regular expressions aren't. The lambda calculus is. Context-free grammars probably are. Most data description languages are not.
This is worthy of an HN post because a style-description language would not normally be expected to be Turing-equivalent.
When it became widely known that the C++ template system was Turing-equivalent, it was a shock to many. This meant that the C++ compiler itself could be exploited to compute arbitrary things, such as factorials, or shortest paths in graphs using A*.
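The factorial example mentioned above is the classic demonstration and takes only a few lines; here is one common way to write it (a sketch, not any particular historical version):

```cpp
// Compile-time factorial via template instantiation: the "program"
// runs inside the compiler, which is what made the discovery of
// template Turing-equivalence so surprising.
template <unsigned N>
struct Factorial {
    static const unsigned long long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {  // explicit specialization: the base case that stops recursion
    static const unsigned long long value = 1;
};

// Evaluated entirely at compile time, before the program ever runs:
static_assert(Factorial<10>::value == 3628800ULL, "10! computed by the compiler");
```

The recursion happens during template instantiation, so an unbounded computation here would (in principle) never finish compiling — the same "given infinite resources" caveat as everywhere else in this thread.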
Actually, as the SKI system demonstrates, the threshold is quite low. The core needed components are just "constant" or "if" (they are often equivalent), and "repeat" (equivalent to both "iterate" and "recurse").
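To make that low threshold concrete, here is a toy SKI reducer in C++. The term representation, the `_`-free printer, and the reduction strategy are my own illustrative choices; the three rewrite rules (I x → x, K x y → x, S x y z → x z (y z)) are the standard ones:

```cpp
#include <memory>
#include <string>

// A tiny SKI-combinator reducer (sketch).
struct Term;
using T = std::shared_ptr<Term>;
struct Term {
    char comb;  // 'S', 'K', 'I', or a variable letter for leaves; 0 for applications
    T f, a;     // function and argument when this node is an application
};
T leaf(char c) { return std::make_shared<Term>(Term{c, nullptr, nullptr}); }
T app(T f, T a) { return std::make_shared<Term>(Term{0, f, a}); }

// Perform one reduction step if any redex exists; returns whether it did.
// Checks the outermost position first, then recurses left, then right.
bool step(T& t) {
    if (!t->f) return false;  // leaves have no redex
    // I x -> x
    if (t->f->comb == 'I') { t = t->a; return true; }
    // K x y -> x
    if (t->f->f && t->f->f->comb == 'K') { t = t->f->a; return true; }
    // S x y z -> x z (y z)
    if (t->f->f && t->f->f->f && t->f->f->f->comb == 'S') {
        T x = t->f->f->a, y = t->f->a, z = t->a;
        t = app(app(x, z), app(y, z));
        return true;
    }
    return step(t->f) || step(t->a);
}

std::string show(const T& t) {
    if (t->comb) return std::string(1, t->comb);
    return "(" + show(t->f) + " " + show(t->a) + ")";
}
```

As a sanity check, `S K K x` reduces to `K x (K x)` and then to `x` — the famous fact that `SKK` behaves like `I`, so even `I` is redundant.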
There is, however, a correspondence between recognizing things and computing functions. If we encode a function as taking a bit string as input and generating a bit string as output, we can turn it into a series of recognition problems as follows: "given S as input, recognize whether bit i of f(S) is 1".
Given all these recognizers, we can compute the function, and given a method to compute the function, we can implement the recognizers.
That is, you can reformulate the question "is f computable" to a series of language recognition problems.
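The reduction above can be sketched directly in code. In this sketch the example function `f` (bitwise NOT over a bit string) and the fixed output length are illustrative assumptions; the point is only the two directions of the correspondence:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// A function from bit strings to bit strings.
using Fn = std::function<std::string(const std::string&)>;

// One recognition problem per output bit: "is bit i of f(S) equal to 1?"
// (Here we cheat by already having f; the point is the interface.)
bool recognize(const Fn& f, const std::string& S, std::size_t i) {
    return f(S)[i] == '1';
}

// The other direction: given the family of recognizers, rebuild f
// bit by bit by asking each recognition question in turn.
std::string compute_from_recognizers(const Fn& f, const std::string& S,
                                     std::size_t out_len) {
    std::string out;
    for (std::size_t i = 0; i < out_len; ++i)
        out.push_back(recognize(f, S, i) ? '1' : '0');
    return out;
}
```

So decidability of each "bit i" language gives computability of f, and computability of f gives deciders for each language, which is exactly the reformulation the comment describes.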
No, they are not. Recursively enumerable grammars are, however. A context-free grammar can't be Turing complete because it can be decided with only a PDA.
Wouldn't it? XSLT, the only other style-description language in anything like common use on the web, has been Turing-complete for a while now (possibly since it started existing; I'm not familiar with the history)...
XSL-FO is the formatting part of XSL (i.e. the bit most like CSS) - I'd be pretty surprised if this was shown to be Turing-equivalent.
HTML and CSS are much more "static" in flavor (hence the OP's use of the term "descriptive") and so it might be more unexpected that they can perform universal computation.
What you mean to say is that a laptop is not an actual Turing machine, but for all intents and purposes, it is. On that point, I agree.
and now apparently HTML + CSS
It says nothing about interactivity, which is a function of the ability to (directly or indirectly) access I/O hardware.