
At the Far Ends of a New Universal Law - digital55
http://www.simonsfoundation.org/quanta/20141015-at-the-far-ends-of-a-new-universal-law/
======
rdlecler1
There is something intensely interesting here from an information theory
perspective. I need to dig into this more deeply, but I wrote a paper in 2008
(cited 110 times) called Survival of the Sparsest. I showed that when
computational networks were permitted to evolve their connectivity (under a
selective regime), they would evolve toward a kind of minimum-energy state
with minimal network complexity (economical, with no spurious connectivity).
Looking at biological gene networks, I showed that this pattern of sparse
connectivity showed up again and again.
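
For anyone curious, here's a toy sketch of the kind of setup I mean: evolve a
weight matrix against a target input-output map while selection taxes every
extra connection. The fitness function and mutation scheme below are
illustrative placeholders, not the actual procedure from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(W, X, Y, cost=0.05):
        """Reward matching the target map; tax every live connection."""
        err = np.mean((np.tanh(X @ W) - Y) ** 2)
        return -(err + cost * np.count_nonzero(W) / W.size)

    # Hypothetical target: a deliberately sparse 'true' wiring.
    N = 8
    X = rng.standard_normal((200, N))
    W_true = np.zeros((N, N))
    W_true[0, 1] = W_true[1, 2] = 1.0
    Y = np.tanh(X @ W_true)

    # Mutation-selection loop: both weights and connectivity can evolve.
    W = rng.standard_normal((N, N))
    for _ in range(5000):
        W_mut = W.copy()
        i, j = rng.integers(N, size=2)
        if rng.random() < 0.2:
            W_mut[i, j] = 0.0                        # prune a connection
        else:
            W_mut[i, j] += 0.3 * rng.standard_normal()
        if fitness(W_mut, X, Y) >= fitness(W, X, Y):
            W = W_mut                                # selection keeps it

    print("surviving connections:", np.count_nonzero(W))

Run long enough, the connection tax strips out every link the target map
doesn't actually need.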

For a given function, this suggests that there may be only one or two network
topologies that satisfy the conditions of being both efficient and functional.
That makes a good null hypothesis if you can see the input and output states
and need to guess the structure of the network. If you don't see that
structure, it may be a sign that some other confounding variable is lurking
out there. For example, maybe the network carries extra connectivity because
redundancy is a functional requirement and not just nice to have.

I'm guessing that there's some relationship here between √(2N) and the
minimal-complexity networks I was evolving. I haven't had time to digest this,
but does anyone know whether N refers to the number of elements in the system,
or the number of connections in an NxN matrix?
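
In the meantime, here's the quick numerical check I'd run under the first
reading, with N as the matrix dimension. A rough sketch, assuming the standard
GOE-style normalization (off-diagonal variance 1/2); the two printed numbers
should come out close.

    import numpy as np

    rng = np.random.default_rng(0)

    # Largest eigenvalue of a random symmetric matrix. With this
    # normalization (off-diagonal variance 1/2) the top eigenvalue
    # concentrates near sqrt(2N), N being the matrix dimension.
    N = 1000
    tops = []
    for _ in range(20):
        A = rng.standard_normal((N, N))
        H = (A + A.T) / 2                    # symmetrize
        tops.append(np.linalg.eigvalsh(H)[-1])

    print("mean largest eigenvalue:", np.mean(tops))
    print("sqrt(2N)               :", np.sqrt(2 * N))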

Paper
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/)

Lots of meat in the supplementary material:
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/bin/msb2...](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2538912/bin/msb200852-s1.pdf)

~~~
gone35
Thank you for sharing this! Not my field at all [1], but reading through your
result, I can't help but think of L1-regularization in ensemble
optimization/learning algorithms; see for instance Section 6.3 in [2], and the
sketch after the references below. Perhaps it's too leaky a heuristic, but it
definitely struck a chord.

[1] [http://xkcd.com/793/](http://xkcd.com/793/)

[2] [http://face-rec.org/algorithms/Boosting-Ensemble/8574x0tm63n...](http://face-rec.org/algorithms/Boosting-Ensemble/8574x0tm63nvjbem.pdf)
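
Concretely, the connection I have in mind (a toy lasso sketch of my own, not
anything from the reference): the L1 penalty's soft-threshold step zeroes out
weak coefficients, so sparsity falls out of the optimization much as it falls
out of selection in your setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # L1-regularized regression via ISTA: the soft-threshold step kills
    # weak coefficients outright, leaving a sparse model.
    n, p = 100, 20
    X = rng.standard_normal((n, p))
    w_true = np.zeros(p)
    w_true[:3] = [2.0, -1.5, 1.0]            # only 3 of 20 features matter
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    lam = 0.1
    step = n / np.linalg.norm(X, 2) ** 2     # 1/L, with L = ||X||^2 / n
    w = np.zeros(p)
    for _ in range(2000):
        w -= step * X.T @ (X @ w - y) / n                          # gradient
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)   # shrink

    print("surviving coefficients:", np.flatnonzero(w))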

~~~
rdlecler1
Thanks for sharing; I'll need to dive into that more deeply when I have some
relaxation time!

It's funny that you mention that because the mathematics used to describe the
dynamics of artificial neural networks are the same mathematics used to
describe (artificial) gene networks. In effect, a gene network is the 'brain'
of the cell. In fact, it was such an effective system that life re-evolved
this computational architecture with a different substrate: neurons.
Functionalism at its finest! Insofar as cells communicate with one another and
are composed of the exact same gene networks (albeit in different states), you
basically have one giant meta neural network composed of identical
neural-network 'tiles', each tile connected to its nearest neighbors.
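
To make the parallel concrete, here's the kind of shared update rule I mean (a
toy Hopfield/Glass-style sketch, not any specific published model):

    import numpy as np

    rng = np.random.default_rng(0)

    # One update rule, two readings:
    #   neural net:   x = neuron activations, W = synaptic weights
    #   gene network: x = expression levels,  W = regulatory effects
    N = 5
    mask = rng.random((N, N)) < 0.3          # sparse wiring
    W = rng.standard_normal((N, N)) * mask
    x = rng.standard_normal(N)

    for _ in range(50):
        x = np.tanh(W @ x)                   # saturating response to inputs

    print("settled 'expression' pattern:", np.round(x, 3))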

------
gojomo
There's a pruning threshold in some universal-substrate
compression/optimization.

Until a macro-phenomenon repeats enough to reach that threshold, lossy chaos
is used to minimize state. (The sim remains lower-resolution.) Once an
arrangement starts repeating enough to exceed that threshold, extra regularity
is allocated to it, potentially also assisting self-reinforcement
(reproduction). That is, cycles (and attention?) are focused on the
interesting parts.

Compare in speculative fiction: the terrestrial complexity-limit reached in
Greg Bear's _Blood Music_, or the 'zones of thought' in Vernor Vinge's _A
Fire Upon the Deep_ et al. (And if my hunch is right, perhaps also the
'ragtime' in Leonard Richardson's _Constellation Games_.)

~~~
gojirra
There's a naturally destructive boundary which can automatically limit
arbitrarily redundant "information" in a subset of certain types of derived
objects.

Until a visible event recurs frequently, such that it reaches the pre-defined
boundary, an undisclosed entity is able to take advantage of destructive
entropy in order to simplify the situation, when deriving an interpreted
expression for these sorts of objects. (In other words, virtual recreations
are deliberately created with poor quality in seemingly repetitive situations
like this.) When a collection of objects is particularly repetitive for some
reason, beyond the previously described limits of repetitiveness, an
undisclosed entity might permit even more repetitions than usual, which
obviously begets even more replicas of this peculiar set of objects
(continuity). This suggests that repetitive loops (and interest?) become the
appealing part of a pretend virtual model that an undisclosed entity might
choose to observe.

All of this is similar to the circumstances described in the following novels:

\- _Blood Music_ by Greg Bear

\- _A Fire Upon the Deep_ by Vernor Vinge

\- _Constellation Games_ by Leonard Richardson

~~~
jchrisa
Which one of you is the AI? :)

~~~
readwrite
In all seriousness, is this really AI?

------
ap22213
Great article! I only wish I had more articles like it.

~~~
miles932
but not too many more, or they'd be interrelated and go through a phase
change, destroying the degree to which you like them.

~~~
emotionalcode
Obviously that would imply a system with a similar distributive growth, one
that contains the confounding variable and its strong coupling to other
elements in the whole. Then we could redefine the selection of elements and
interrelations and like it again.

------
mturmon
If you don't mind a little math, I found this paper by one of the principals
in the area (Deift) to provide more background:

[http://www.icm2006.org/proceedings/Vol_I/11.pdf](http://www.icm2006.org/proceedings/Vol_I/11.pdf)

