Summary of the Ford paradox (which the post takes forever to actually get to): Quantum mechanics has only periodic orbits, but chaotic systems aren't periodic. How do we reconcile this with the existence of, say, double pendulums?
My guess: because the abstract double pendulum is a model that only approximates real double pendulums. Analogously, computers based on Turing Machines exist despite Turing Machines requiring an infinite amount of space.
---
Also, a minor nitpick: the post seems to equate 'not periodic' or 'chaotic' with 'is random', which I think is misleading. Pseudo-random, I guess, but the output of a simulation of a chaotic system is highly compressible and thus not random in the information theoretic sense.
Thank you; your minor nitpick doesn't seem minor at all. It was keeping me from noticing that I don't actually see what the problem is. The movement here is deterministic, it's just hard to predict. I don't see the paradox at all.
When simulating a chaotic system, using different rounding for the floating point values will affect the simulation by a small amount, which a chaotic system is liable to amplify. Quantisation of reality seems to be analogous to rounding floats, in the sense that it affects the precision of the measurements. I'd be interested to know whether that affects the "running" of the system like with floats in a simulation, or if it only puts a limit on how well the initial conditions can be defined.
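A minimal sketch of that rounding effect (the map and constants are my own illustration, not from the post): run the same chaotic map in 32-bit and 64-bit floats and the two trajectories soon part ways, even though the equation and starting value are identical.

    import numpy as np

    # Same map, same start; only the float width differs.
    x32 = np.float32(0.123456)
    x64 = np.float64(0.123456)
    for i in range(1, 61):
        x32 = np.float32(4) * x32 * (np.float32(1) - x32)
        x64 = 4.0 * x64 * (1.0 - x64)
        if i % 10 == 0:
            print(f"step {i:2d}: float32={x32:.6f}  float64={x64:.6f}")

The initial float32 rounding error is about one part in 10^7, and the map roughly doubles it each step, so by around step 30 the two runs have nothing to do with each other.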
Chaos is deterministic, but I'm always told that some quantum things are NOT deterministic, e.g. a single photon being sent at two slits in a piece of paper. My understanding is that the opening it goes through is random; how can that be reconciled with deterministic models of the universe?
> My guess: because the abstract double pendulum is a model that only approximates real double pendulums.
This subject is somewhat outside my area of knowledge, but if I had to guess, this is what resolves the paradox. An abstract mathematical chaotic system is different from a real-life one, and my guess is that if we performed the experiment, we would find that the orbits differ.
QM has withstood every test thrown at it so far, and while it's always possible that a correction to it will be found, I think it will be a very rare event should it happen.
...the output of a simulation of a chaotic system is highly compressible...
Is that really true? Losslessly? Do you mean the entire phase space, or the sequence generated on any given run? Because I was under the impression that what distinguishes a chaotic system is that the output cannot be predicted in any way other than running the simulation (the article uses the word 'random', but it might be better to say 'unpredictable'). Compressibility relies on exploiting regularities, and regularity is predictability, isn't it?
The program that simulates the chaotic system is shorter than the unbounded amount of output being generated, and counts as a compression of the output.
That seems like another way of wording what I said, which is that there is no way to predict the output other than by running the algorithm that generates it. If you are presented only with the string representing the unbounded output, is there any way to reconstruct the algorithm from it? If not, then I would say that the string itself is incompressible; in that case only the string plus the algorithm is compressible in the way you describe.
I guess in other words, the more interesting question is this: can we generate hard-to-compress pseudorandom sequences with simple algorithms, if the algorithm description by itself cannot be used as "compression"?
No. We can't. If the algorithm that generates the sequence is simple, let's call it algorithm X, then a concise way of expressing the string is "the output of algorithm X". That expression losslessly compresses the output, and it's just a few characters longer than the algorithm itself. Randomness theory and information theory study such things.
If you add a rule such as not using algorithm X, it's trivial to create an algorithm Y that generates the same output and is barely any longer than X, for example by adding a meaningless instruction at the start of algorithm X.
Of course, discovering the shortest expression for a given sequence is not a computable problem. So we can't always know if we've found the best form of compression for a sequence; we can only find an upper bound.
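A small sketch of this point (my own illustration): the generator below is a few lines long, yet gzip barely shrinks its output, so "the output of this program" is a far shorter description than anything a general-purpose compressor will find.

    import gzip

    def chaotic_bytes(x=0.1234567, n_bits=100_000):
        # Pack one bit per iteration of the logistic map at r = 4.
        out = bytearray()
        for _ in range(0, n_bits, 8):
            byte = 0
            for _ in range(8):
                x = 4.0 * x * (1.0 - x)
                byte = (byte << 1) | (x > 0.5)
            out.append(byte)
        return bytes(out)

    data = chaotic_bytes()
    print(len(data), len(gzip.compress(data)))  # compressed size stays near the original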
Absolutely. Consider any number which can be expressed as a patterned continued fraction, like pi or e. pi's digits are very hard to compress, but absolutely trivial to generate. (http://en.wikipedia.org/wiki/Continued_fraction)
>Consider any number which can be expressed as a patterned continued fraction, like pi or e. pi's digits are such that it is very hard to compress, but it is absolutely trivial to generate.
You're confusing theoretical compressibility with the actual compression ratios you get when you put a string through typical compression functions.
Here's a different example. Suppose you have ASCII text encrypted with AES with the AES decryption key appended to it. Run that through gzip and you'll probably get output which is larger than the input. But here's a different compression algorithm that will work much better: Decrypt the encrypted data with the key and then compress the plaintext with gzip.
It isn't that the original string is of the sort that can't be compressed, it's that you need a compression function which is suited to the input. Pi is like that.
Compare this to other strings, like the output of an environmental hardware random number generator, which will almost always be incompressible by any method.
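A toy version of the decrypt-then-compress idea (an assumption made for brevity: a keystream from a seeded PRNG stands in for AES, since real AES would drag in an outside library; the point about choosing the right compression function is unchanged):

    import gzip
    import random

    def stream_cipher(data: bytes, key: int) -> bytes:
        ks = random.Random(key).randbytes(len(data))  # deterministic keystream
        return bytes(a ^ b for a, b in zip(data, ks))

    plaintext = b"A highly compressible ASCII sentence, repeated a lot. " * 200
    key = 123456789
    blob = stream_cipher(plaintext, key) + key.to_bytes(8, "big")

    naive = gzip.compress(blob)  # gzip sees random-looking bytes, gains nothing

    # The smarter "compressor": split off the key, decrypt, then gzip.
    body, k = blob[:-8], int.from_bytes(blob[-8:], "big")
    smart = gzip.compress(stream_cipher(body, k)) + blob[-8:]

    print(len(blob), len(naive), len(smart))  # smart is far smaller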
How is the algorithm that generates the digits of pi a compressed form of pi? It gives me all the information about the number in a finite amount of time.
In randomness theory, the most compressed form of a sequence S is defined, given a computer C, as the shortest program for C whose output is S.
So the program that generates the output of the simulation of the pendulums is a compression of its output. Since it's an infinite sequence that can be compressed into a finite program, it is not random from the perspective of information theory or randomness theory.
The difference between a chaotic and a non-chaotic system is how small perturbations propagate. In a regular system, you get essentially the same result even if you start from slightly different initial conditions. In a chaotic one, small perturbations are amplified, and the system ends up in a very different state from nearly the same initial conditions. (And if there is any experimental error, the final state is essentially random.)
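A quick illustration (my own example): nudge the starting point of a chaotic map by one part in a billion and the two runs soon disagree completely.

    a, b = 0.4, 0.4 + 1e-9
    for i in range(1, 41):
        a = 4.0 * a * (1.0 - a)  # logistic map at r = 4, a chaotic regime
        b = 4.0 * b * (1.0 - b)
        if i % 10 == 0:
            print(f"step {i:2d}: |a - b| = {abs(a - b):.2e}")

The gap roughly doubles each step, so a 10^-9 perturbation saturates to order one after about 30 iterations; a contracting (regular) map would shrink it instead.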
My math and physics is pretty weak, but it seems to me that this assertion is wrong:
> Classically chaotic systems generate information over time.
Really? Simply being chaotic does not preclude a system from being deterministic. Every future state of the system is "present" given the initial conditions, even if it isn't predictable.
The situation is analogous to that of the digits of irrational numbers. I can't tell you a priori what the 2^1000th digit of pi is, but if I calculated for a million years I could find it out. It's not being "generated", it's just as much a part of pi as 3.14 is, just a little harder to access.
There may be true non-determinism in nature, but it isn't necessary for a system to be considered chaotic.
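To make the pi analogy concrete (a standard algorithm, not anything from the post): Gibbons' unbounded spigot produces the decimal digits of pi on demand from a few lines of integer arithmetic, so every digit is "present" in the program from the start, just harder to access the further out you go.

    def pi_digits():
        # Gibbons' unbounded spigot: yields decimal digits of pi forever.
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < m * t:
                yield m
                q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
            else:
                q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                    (q * (7 * k + 2) + r * x) // (t * x), x + 2)

    gen = pi_digits()
    print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]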
Anyone who found the discussion of dynamical systems here interesting would be well served by having a look at http://www.scholarpedia.org/, which has some very good articles in the area.
I'm not a physicist and don't remember QM all that well, so feel somewhat uncomfortable trying to comment on this, but I'm not sure I understand why this is a paradox rather than an intriguing curiosity. There is a lot of work out there on how chaos can be produced from order. Stephen Wolfram was famously obsessed with how simple, deterministic cellular automata can produce complex, chaotic behavior. There is an interesting problem here, but it seems to violate our intuitions rather than actual physical principles.
The answer might be stuffed in the "this system will likely shut down prior to generating more information than the sum of the information in the individual subatomic particles."
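A tiny illustration of the Wolfram point (my own sketch): Rule 30 is a one-line update rule, yet its output looks statistically random.

    cells = [0] * 31 + [1] + [0] * 31  # single live cell on a ring of 63
    for _ in range(20):
        print("".join("#" if c else " " for c in cells))
        # Rule 30: new cell = left XOR (center OR right)
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % len(cells)])
                 for i in range(len(cells))]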
Doesn't this miss something important about entropy and information creation? From what I understand the classical system isn't producing information...
Well, let's say you measured the times at which the bottom pendulum crossed the median. You would get a list of times that were spaced here and there, some near others, some far. This would be one possible set of information that the pendulum produces.
The idea is that, presumably, this list of times "looks" very random and can't be expressed by a much shorter, compressed set of data, not even by providing the pendulum equation and initial conditions with finite precision. (You could if it weren't chaotic, say for just a single pendulum.) This would mean it has high entropy content in the information theory sense. (I don't actually know, since I haven't done the experiment nor read about it being done, but it's a reasonable guess.)
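A rough sketch of that measurement (with the Lorenz system standing in for the double pendulum, an assumption made so the whole thing fits in a few lines): record the times one coordinate crosses zero, then see how little gzip can do with the gaps. Failing to compress only bounds the entropy from one side, of course; gzip is not an oracle for randomness.

    import gzip
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    crossing = lambda t, s: s[0]  # event fires when x crosses zero
    sol = solve_ivp(lorenz, (0.0, 500.0), [1.0, 1.0, 1.0],
                    events=crossing, max_step=0.01)

    gaps = np.diff(sol.t_events[0]).astype(np.float32).tobytes()
    print(len(gaps), len(gzip.compress(gaps)))  # expect little compression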
My thoughts exactly. I don't know a whole lot about information theory or quantum mechanics, but my PhD is in dynamical systems and I've never heard them described as 'random' or 'information producing'.
When I was a senior in college, I went through the math of the double pendulum. The subject is mostly just ordinary differential equations although there is a cute role for a little matrix theory.
Generally it seems to me that the OP gets off into some not very well defined and not very relevant topics, when the 'chaos' he is observing has a fairly easy explanation: The system is unstable. In one step more detail, the system really is an initial value problem for an ordinary differential equation, and, going way back to Bellman's work on stability theory, it has long been well known, just from the equations, that the solution can be 'unstable', that is, small changes in the initial conditions (values) can result in large changes in the solution. And that is just from ordinary differential equations, without considering quantum mechanics. For the 'chaos' in the OP, that's about all the explanation that is needed.
Why? Because there is really no chance that the motion of the system could be periodic or even simple: it nearly never gets back accurately enough to an earlier state. The system often gets back to something 'close' to an earlier state, but it is so unstable that 'close' is not close enough, and the earlier state and the present state near it soon lead to very different solutions for the future.
Something similar happens with pseudorandom number generators, e.g., the usual linear congruential generators, where we set

    R(n + 1) = ( A * R(n) + B ) mod C

for n = 1, 2, ..., with R(1) some positive integer. Then, roughly, the R(n)/C are independent, identically distributed, and uniform on [0,1). One of Knuth's recommendations (in one of the volumes of TAOCP) was A = 5^15, B = 1, C = 2^47. So, a point here is, we get such 'random' numbers without considering phase space or quantum mechanics.
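For what it's worth, the whole generator fits in a few lines (a sketch using the constants quoted above):

    # Linear congruential generator with Knuth's suggested constants.
    A, B, C = 5**15, 1, 2**47

    def lcg(seed, count):
        r = seed
        for _ in range(count):
            r = (A * r + B) % C
            yield r / C  # roughly uniform on [0, 1)

    print(list(lcg(12345, 5)))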
In a sense, this is a very old point: E.g., one dream before about 1900 was that we could observe the present state of the world and then use deterministic physics to predict the future. So, as I recall, it was E. Borel who did a calculation and concluded that moving 1 gram of matter by 1 cm, or some such, on a distant star would invalidate predictions on earth after just milliseconds (presumably starting after the travel time of light from that star to the earth).

We suspect we see much the same in weather prediction: Small changes in initial conditions too soon make changes large enough to switch between rain and sunshine. The usual joke is that a butterfly could flap its wings and convert a clear day to a hurricane. We anticipate that weather prediction is quite stable probabilistically, that is, what is stable is the conditional probability distribution of the variables we use to measure weather, conditioned on the present. In particular, we still believe in the law of conservation of energy.
Also we should notice the classic work on ergodic theory, by Hopf, Poincare, Birkhoff, etc.: The standard illustration is pouring cream into coffee and stirring. The theorem says that if you stir long enough, you can make the cream separate from the coffee, back to as close as you please to the original state. Why? Because if you take the 'volume' of the possible states at some point in time and then let time pass, the 'volume' of those states is still the same; as the system evolves, it is 'measure preserving' in state space. So, if we want to apply this to a frictionless double pendulum, we can get it to return as close as we please to its initial state, but between then and now it is free to do a lot. This stuff goes back to the first half of the last century.
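A discrete toy version of that recurrence (my own illustration, not from the classic literature): the Arnold cat map on an N x N integer grid is measure preserving, and every starting point returns exactly after finitely many steps.

    N = 101  # grid size; any positive integer works

    def cat_map(x, y, n=N):
        # The matrix [[1, 1], [1, 2]] has determinant 1: area preserving.
        return (x + y) % n, (x + 2 * y) % n

    x0, y0 = 3, 7  # arbitrary starting point
    x, y, steps = x0, y0, 0
    while True:
        x, y = cat_map(x, y)
        steps += 1
        if (x, y) == (x0, y0):
            break
    print(f"returned to the initial state after {steps} steps")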
Likely there are some interesting and important questions in chaos theory, but what the OP is saying about the double pendulum seems to have a simple explanation.
There is something I do not get: the behaviour of the macroscopic pendulum also depends on random quantum events (e.g. the emission of a photon, or whatever). These are neither deterministic, nor linear, nor "periodizable".
Is it worth it to try to figure out what this article is about? It's very non-accessible writing and I can't seem to find his point nor do I see a reason to care about it from scanning.