

Complexity and Intelligence by Eliezer Yudkowsky - MikeCapone
http://www.overcomingbias.com/2008/11/complexity-and.html

======
queensnake
I hope that guy goes somewhere with all his talk, eventually.

------
ars
Talk about a straw man argument! He builds up an entire argument, then shoots
it down, and manages to generate pages of text, all based on a flawed premise.
I don't agree that you can ignore the size of the working set when defining
complexity.

Quite frankly, it's idiotic to do so. If you allow that, then why not just
create every single possible permutation of every single possible thing? (Not
randomly: start at 1 and count upward.)

The complexity of that is 1 bit.

So, basically everything, no matter what, has a complexity of 1 bit.

What a pointless way of defining complexity. (Pointless for talking about the
universe anyway, maybe it's useful in more specific math.)

~~~
elijahbuck
Not everything has a complexity of one bit. The object you describe with a
complexity of 1 bit doesn't specify the object you want. Somewhere in the
permutation there may be a string that specifies some universe, but then you
have to specify where in the permutation that string is and how long it is,
which increases the complexity required to describe that particular universe.
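
A minimal sketch of that point in Python (the enumerator and names here are mine, just for illustration):

    from itertools import count, product

    def all_strings():
        # A few lines suffice to eventually print every binary string,
        # in shortlex order: "", "0", "1", "00", "01", ...
        for n in count(0):
            for bits in product("01", repeat=n):
                yield "".join(bits)

    def index_of(target):
        # But to hand someone a particular n-bit string, you must also
        # give its position in the enumeration -- a number around 2^n,
        # i.e. about n bits. The cheap "print everything" program never
        # paid for the bits that single out the object you want.
        for i, s in enumerate(all_strings()):
            if s == target:
                return i

    index_of("101")  # -> 12; writing that index down costs ~len("101") bits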

------
DaniFong
Wouldn't the 'Kolmogorov complexity of the universe' include initial
conditions? How can we know that these aren't significant?

~~~
jerf
The way he defined it for most of that essay, it is more proper to speak of
the "Kolmogorov complexity of the _multiverse_ ". The infinite possible
starting conditions of your brand of physics merely add the K-complexity of
the TM needed to describe the possible initial states, which is likely quite
small. (Eliezer holds that many-worlds is the only possibility for the
universe we live in, I believe.)

The K-complexity of _this specific universe_ is much higher, because it would
necessarily require a description of the multiverse _and_ some sort of
identifier of how to pick out the universe of interest.
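
In symbols, roughly (this is just the standard upper bound from composing programs, not anything from the essay):

    K(this universe) <= K(multiverse program) + |pointer into it| + O(1)

and it's the pointer term that blows up.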

~~~
dhs
I don't understand this. If the multiverse is a system of a certain
(apparently low) complexity, it should not be allowed to have sub-systems,
like the "universe of interest", which have a higher complexity. Also, I don't
see why the information that identifies a certain subsystem/universe should
only be available from the point of view of that subsystem, increasing its
complexity but not the complexity of the system it's a part of.

~~~
jerf
It's the nature of K-complexity. If you don't understand, I'd suggest (in all
humility and seriousness) pondering the nature of TMs and K-complexity
further.

Some time spent surfing through the Mandelbrot set can be enlightening, as a
more concrete example. Remember, no matter _what_ you see in Fractint (or your
choice of fractal generator), the K-complexity of the Mandelbrot set is very,
very low. (Not quite as small as the Mandelbrot-generation process alone
because you also need to describe the coloring algorithm, but that's not that
much either.) _But_ , to uniquely identify the image you are looking at to
someone else, you must transmit not just the Mandelbrot generation routines,
but also the coordinates you are looking at, or a description of the routine
you used to get there. This can easily be larger than the rather small
Mandelbrot algorithm. (It's easy to lose track of how deep you are in the
M-set, without realizing that your computer is chugging away on computations
involving thousands of significant digits...)
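
For concreteness, a sketch of how lopsided this is (the standard escape-time test; the example point is mine):

    def in_mandelbrot(c, max_iter=1000):
        # The entire "generator" for the set is this loop.
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    in_mandelbrot(complex(-1, 0))  # -> True (period-2 cycle: 0, -1, 0, -1, ...)

    # Naming a *view* is another matter: a zoom to scale 10^-k needs
    # roughly k decimal digits (~3.3k bits) per coordinate just to say
    # where you are. The generator stays tiny; the pointer into it
    # grows without bound as you zoom.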

This is exponentially (super-exponentially?) more true when picking out a
particular piece of a particular universe from a TM simulating all possible
string theory universes. String theory may be simple (or may not be), but by
the time you're done identifying which of the 10^120 vacuum states you want to
deal with (~400 bits right there, since log2(10^120) ≈ 398.6), which initial
conditions you want to deal with (no idea what that would take), and where in
space and time you wish to point (many thousands of bits minimum, no known
upper limit), you can easily exceed the size of the part of the TM that
describes the physics itself.

It might be helpful to try to forget everything you know about "complexity";
the English meaning of the word misleads your mathematical intuition.
K-complexity is really something completely different (as is part of Eliezer's
point), and, frankly, it's much less useful than it seems at first blush. It's
part of the wild world of Turing Machines, which cannot be tamed or understood
by any finite being. (And it doesn't really help that you can't prove whether
you have the optimal TM for a given result.)

It may also help to intuitively consider the difficulty of "pointing" at
something, as in the essay. It is easy to gesticulate wildly at the Earth,
from where you sit now. It is far, far harder to unambiguously specify which
protozoan you are talking about right now. The part of the description that
filters through the near-infinite possibilities to uniquely identify the topic
of interest can be very, very large, and can easily exceed the size of the
specification of "all possible topics of interest".

~~~
dhs
I think I do understand most of the technicalities, using Gregory Chaitin's
ideas about the properties of self-delimiting programs as a bridge. What I
don't get is what I perceive as the insistence on "surprises" in the "universe
of interest" that can't be seen "from outside". Zooming through a Mandelbrot
set visualization, while it can be a great experience, ultimately offers you
few surprises, and because there are few surprises, you know that the
complexity - represented by the original equation - must be low. And Eliezer
seems to be saying the opposite, namely that there are lots of surprises in a
particular universe that cannot be known from the system as a whole, because
the system as a whole only contains 400 bits of information. If you were to
write this multiverse program as self-delimiting, the way Chaitin does, where
would all the extra complexity/information/surprises found in the "universe of
interest" come from?

~~~
jerf
You keep saying "complexity". There's a reason I keep writing "K-complexity".
The extra _K-complexity_ comes from the extra bits needed to narrow down the
results. The whole multiverse does have greater _English-complexity_ (the
conventional meaning of the word, not some measure of how many words it takes
to describe something, which would just be K-complexity again) than the
part... but English-complexity is ill-defined.

Look at a word-processing document. A real one, sitting on your hard drive.
The program to output all possible documents is very, very simple. The
specification of how to get to the exact document you are looking at is the
(compressed) size of the document itself. The English-complexity of "the set
of all word processing documents" is high, but the K-complexity is low. The
English-complexity of "one particular document" is low, but the K-complexity
is quite high.
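
A crude way to see this on your own disk, using compressed size as a stand-in for K-complexity (zlib only gives an upper bound, and the filename below is just a placeholder):

    import zlib

    # "All documents" costs no per-document bits: a short enumerator
    # covers them all. Pointing at one *specific* document costs about
    # its compressed size.
    with open("some_document.odt", "rb") as f:
        data = f.read()

    pointer_bits = 8 * len(zlib.compress(data, 9))
    print(pointer_bits)  # roughly the bits needed to single this file out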

You might say, "Well, I simply tell you to simulate the multiverse, then hand
you instructions on how to get to that document", but the instructions will be
of a very non-trivial size. I think you intuitively see the instructions as
very small, but they are actually huge. Starting with just "Simulate the
multiverse" leaves me with, quite literally, a multiverse in hand. Now what?
Now how do I find what you are talking about? I'm worse off than when I had
nothing at all!

When you have a gigantic set, simply the act of indicating a member within it
takes bits. K-complexity measures those bits. English-complexity says you're
lowering the complexity. _Neither is wrong_... it's a definitional matter.
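
Put numerically (a back-of-the-envelope helper, not anything from the essay):

    import math

    def index_cost_bits(set_size):
        # Bits needed just to name one member of a set of this size,
        # no matter how cheap the set itself was to describe.
        return math.ceil(math.log2(set_size))

    index_cost_bits(2**30)    # -> 30
    index_cost_bits(10**120)  # -> 399, the string-landscape figure above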

