

Closures as life - tmsh
http://tmsh.posterous.com/closures-as-life

======
klodolph
This seems like... total nonsense. I mean, I understand evolution, Fourier
series, closures, and electron orbitals. But these words seem strung together in
ways that make no literal sense, and the metaphors are so vague that they
might as well be meaningless.

Once upon a time there were some molecules. Bumping into each other, these
molecules formed into larger molecules. Some of these molecules bumped into
other molecules in such a way that the other molecules formed into copies of
the first molecules.

I don't see how this has anything to do with "periodic functions". And somehow
closures "introduce this idea of 'time'." Time is merely the arrow that points
towards entropy.
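
For the record, a closure is just a function that captures state from its
enclosing scope; the state persists across calls, and that's the whole trick.
A minimal Python sketch (mine, not the article's):

    def make_counter():
        n = 0                  # captured by the closure below
        def counter():
            nonlocal n         # mutate the captured variable
            n += 1
            return n
        return counter

    tick = make_counter()
    print(tick(), tick(), tick())  # 1 2 3 -- persistent state, not "time"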

Conversations like these are why I transferred out of a certain college.

~~~
gukjoon
I don't think this is complete nonsense. At the root of all this, he's weirded
out by the concept of entropy. Entropy exists, yet the complexity on this
infinitesimal speck of dust in the universe is increasing. Why is that?

tmsh, I wrote an equally disparaged blog entry a while back that you might be
interested in: <http://www.jierenchen.com/2009/10/memory-and-evolution.html>

~~~
klodolph
"Why is that?"

Complex configurations have more entropy than simple configurations. Life
causes entropy to increase more quickly. You can think of living beings as
agents of entropy if you like, but that implies that entropy acts with
purpose, which it doesn't.

I remember having an argument at college with someone who claimed that the laws
of thermodynamics conflicted with the theory of evolution. So I asked him what
entropy was. He gave a bunch of different layman's definitions, such as
comparing it to "disorder" or "chaos". I wrote down for him, "S = k log N"
(Boltzmann's constant times the log of the number of microstates), and told him
that if he didn't know Boltzmann's equation, he had no business lecturing me on
thermodynamics. This person also said that there was no natural process that
created information. I wanted to tell him that the laws of thermodynamics,
combined with Boltzmann's equation and Shannon's definition of information,
imply that the amount of information in a closed system must always increase.
Interpreted physically, mutation increases the entropy/information in genetic
code.
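
To make the Boltzmann-Shannon connection concrete, here is a toy Python
calculation of my own: for a uniform distribution over N microstates, Shannon's
entropy is log2(N) bits, which is Boltzmann's S = k log N up to the choice of
constant and base.

    import math

    def shannon_entropy(probs):
        """Shannon entropy of a discrete distribution, in bits."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Uniform distribution over N equally likely microstates:
    N = 16
    print(shannon_entropy([1.0 / N] * N))  # 4.0 bits == log2(16)

    # Boltzmann's S = k log N, with k in joules per kelvin:
    k = 1.380649e-23
    print(k * math.log(N))  # same microstate count, different constant and base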

If you want to know more about entropy, talk to a chemist. Chemists have to
know entropy or they can't do their jobs.

As for the linked blog post, the idea of applying theories from evolutionary
biology to the study of information is usually attributed to Richard Dawkins
in his 1976 book, "The Selfish Gene". This is the book that coined the term
"meme", and I recommend it.

------
pygy_
In this vein, the integrated information theory of consciousness is very
interesting.

I have little time right now, so I'll let the papers speak for themselves. The
abstract of the first paper follows; the second goes deeper into the same
theory. The most interesting part of the theory is that it is quantitative and
testable (see the last two papers). The full text of all papers should be
freely available.

----

## An information integration theory of consciousness ##

Giulio Tononi

_Background_

Consciousness poses two main problems. The first is understanding the
conditions that determine to what extent a system has conscious experience.
For instance, why is our consciousness generated by certain parts of our
brain, such as the thalamocortical system, and not by other parts, such as the
cerebellum? And why are we conscious during wakefulness and much less so
during dreamless sleep? The second problem is understanding the conditions
that determine what kind of consciousness a system has. For example, why do
specific parts of the brain contribute specific qualities to our conscious
experience, such as vision and audition?

_Presentation of the hypothesis_

This paper presents a theory about what consciousness is and how it can be
measured. According to the theory, consciousness corresponds to the capacity
of a system to integrate information. This claim is motivated by two key
phenomenological properties of consciousness: differentiation – the
availability of a very large number of conscious experiences; and integration
– the unity of each such experience. The theory states that the quantity of
consciousness available to a system can be measured as the Φ value of a
complex of elements. Φ is the amount of causally effective information that
can be integrated across the informational weakest link of a subset of
elements. A complex is a subset of elements with Φ>0 that is not part of a
subset of higher Φ. The theory also claims that the quality of consciousness
is determined by the informational relationships among the elements of a
complex, which are specified by the values of effective information among
them. Finally, each particular conscious experience is specified by the value,
at any given time, of the variables mediating informational interactions among
the elements of a complex.

_Testing the hypothesis_

The information integration theory accounts, in a principled manner, for
several neurobiological observations concerning consciousness. As shown here,
these include the association of consciousness with certain neural systems
rather than with others; the fact that neural processes underlying
consciousness can influence or be influenced by neural processes that remain
unconscious; the reduction of consciousness during dreamless sleep and
generalized seizures; and the time requirements on neural interactions that
support consciousness.

_Implications of the hypothesis_

The theory entails that consciousness is a fundamental quantity, that it is
graded, that it is present in infants and animals, and that it should be
possible to build conscious artifacts.

----

[1] <http://www.biomedcentral.com/1471-2202/5/42/>

[2] <http://www.biolbull.org/cgi/content/full/215/3/216?view=long&pmid=19098144&ref=nf>

[3] <http://www.coma.ulg.ac.be/papers/vs/massimini_PBR_coma_science_2009.pdf>

[4] <http://www.coma.ulg.ac.be/papers/vs/boly_PBR_coma_science_2009.pdf>
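
If it helps, here is a crude toy sketch of my own of the "weakest link" idea.
It minimizes plain mutual information over bipartitions of observed binary
states; Tononi's actual Φ uses effective information under perturbations, so
treat this strictly as a cartoon:

    import itertools
    import math
    from collections import Counter

    def mutual_info(samples, part_a, part_b):
        """Mutual information (bits) between two groups of unit indices,
        estimated from joint samples of binary states."""
        n = len(samples)
        pa = Counter(tuple(s[i] for i in part_a) for s in samples)
        pb = Counter(tuple(s[i] for i in part_b) for s in samples)
        pab = Counter((tuple(s[i] for i in part_a),
                       tuple(s[i] for i in part_b)) for s in samples)
        return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
                   for (a, b), c in pab.items())

    def phi_toy(samples, n_units):
        """Minimum mutual information over all bipartitions --
        a cartoon of the 'informational weakest link'."""
        units = range(n_units)
        best = float('inf')
        for size in range(1, n_units // 2 + 1):
            for part_a in itertools.combinations(units, size):
                part_b = tuple(i for i in units if i not in part_a)
                best = min(best, mutual_info(samples, part_a, part_b))
        return best

    # Three-unit XOR system (unit 2 = unit 0 XOR unit 1): every
    # bipartition carries 1 bit, so the toy measure is 1.0.
    states = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)] * 25
    print(phi_toy(states, 3))  # 1.0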

------
vog
This article vastly overstates the role of closures. The connection isn't that
deep - it is a simple consequence of life being super-complex software executed
on biochemical machinery. Thus, you can find almost any programming construct
in the process of life and argue that life wouldn't work without it.

There are lots of things much more important than closures. One example is the
ability to be self-referential at all levels (not to be confused with plain
recursion). This blurs not only the line between code and data, but also the
line between software and hardware. And it does so far more deeply than
anything we can manage today with FPGAs/CPLDs or hardware virtualization.
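
To gesture at the shallow end of "blurring code and data" in today's software,
here is a trivial Python fragment of my own in which the same bytes act first
as data and then as code:

    # The same bytes are first data (a string we could edit or generate at
    # runtime), then code (a function we can call).
    src = "def greet(name):\n    return 'hello, ' + name\n"
    namespace = {}
    exec(src, namespace)                 # turn the data into running code
    print(namespace['greet']('world'))   # hello, world

The biological version does this at every level at once, hardware included,
which is exactly the gap mentioned above.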

If you are really interested in this topic, I recommend the book "Gödel,
Escher, Bach" by Douglas Hofstadter. It is very well written and should be
especially easy for programmers to understand:

<http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach>

~~~
eru
Do we really need 'real' self-references, if we have things like quines? Or do
you think quines are part of the magic? (Like Gödel's theorems are?)
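
For anyone who hasn't seen one: a quine prints its own source by treating code
as data, with no 'real' self-inspection needed. A minimal Python example (kept
comment-free so the output matches the source exactly):

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints exactly its own two lines.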

------
paulnelligan
the idea of equating software to life itself is quite limited ...

life is frequency-based; software can only ever approximate life by
representing it numerically, in binary ...

take sound: real sound has a frequency spectrum, with harmonics ... Fourier
analysis lets us take a snapshot of the wave that approximates it, yet never
captures its full frequency spectrum - because it's impossible to do that ...
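
a quick numpy sketch of my own to make that concrete: however many harmonics
you keep, the truncated Fourier series of a square wave never converges
uniformly near the jumps (the Gibbs phenomenon) ...

    import numpy as np

    t = np.linspace(0, 1, 1000, endpoint=False)
    square = np.sign(np.sin(2 * np.pi * t))  # ideal square wave

    def partial_sum(t, K):
        # first K odd harmonics of the square wave's Fourier series
        return (4 / np.pi) * sum(
            np.sin(2 * np.pi * (2 * k - 1) * t) / (2 * k - 1)
            for k in range(1, K + 1))

    for K in (1, 5, 50):
        err = np.max(np.abs(square - partial_sum(t, K)))
        print(K, round(err, 3))  # worst-case error near the jumps stays O(1)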

to boil life itself down to something which can be represented numerically (or
even verbally for that matter) kinda misses the point ... The essence of life
will always be absent from such an approximation ...

