
Brain tunes itself to criticality, maximizing information processing - Lorenz_Duremdes
https://source.wustl.edu/2019/10/brain-tunes-itself-to-criticality-maximizing-information-processing/
======
Agebor
This is a similar view to the emerging theory of Bayesian Brain, which views
the brain as a system that tries to minimise the prediction error (which might
be the same thing as "free-energy" in some related publications) by comparing
expectations with actual information coming from the senses.

[https://towardsdatascience.com/the-bayesian-brain-hypothesis...](https://towardsdatascience.com/the-bayesian-brain-hypothesis-35b98847d331)

So far it seems to explain quite a lot of data, as well as many mental
illnesses (e.g. many disorders can be thought of as the brain under-correcting
or over-correcting for the prediction error).

By under-correcting, the brain does not learn enough from its mistakes, which
may lead to delusions of superiority (e.g. being stuck in usual habits, or an
inability to change one's world-view based on new information). On the other
hand, when over-correcting, the world may seem unpredictable and frightening,
leading to self-doubt, anxiety and negative thoughts.
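
A toy sketch of the under-/over-correcting intuition (my own illustration in
Python, not from the article or the linked posts): a single internal estimate
updated by its prediction error, with the learning rate standing in for how
strongly the brain corrects. Too little correction and it never learns the
true value; too much and the estimate jumps around with every noisy observation.

```python
# Toy model (illustrative only): an internal estimate is nudged toward each
# observation in proportion to the prediction error.
import numpy as np

rng = np.random.default_rng(0)
true_value = 3.0

for lr, label in ((0.02, "under-correcting"), (0.3, "moderate"), (1.8, "over-correcting")):
    estimate = 0.0
    for _ in range(50):
        observation = true_value + rng.normal(scale=0.5)  # noisy sensory input
        error = observation - estimate                    # prediction error
        estimate += lr * error                            # correct the internal model
    print(f"{label} (lr={lr}): final estimate = {estimate:.2f} (truth = {true_value})")
```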

Being wrong around 15% of the time might actually be the optimal rate for
learning... [https://www.independent.co.uk/news/science/failing-study-suc...](https://www.independent.co.uk/news/science/failing-study-success-machine-learning-a9186051.html)

~~~
vstuart
I agree; Karl Friston's work is among the most interesting I have ever read,
period. Interestingly, his 2009 paper on the free-energy principle
([https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20prin...](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20-%20a%20rough%20guide%20to%20the%20brain.pdf))
makes use of reinforcement learning, gradient descent, Markov blankets,
Helmholtz machines, and other foundational ideas of modern machine learning
... In that regard, Geoff Hinton (a foundational figure in modern machine
learning) overlapped with Friston during the period of his career when Hinton
was working in England.

~~~
MrQuincle
Yes, interesting man. I encountered him at a workshop by Bert Kappen on
stochastic optimal control. Kappen's work shows that there are different
control strategies for different noise levels, separated by phase transitions.

I checked Friston again. He now also has this article:
[https://www.frontiersin.org/articles/10.3389/fncom.2012.0004...](https://www.frontiersin.org/articles/10.3389/fncom.2012.00044/full)

CLE = Conditional Lyapunov Exponents.

"In short, free energy minimization will tend to produce local CLE that
fluctuate at near zero values and exhibit self-organized instability or
slowing."

I'll have to study it more to understand what he means by self-organized
instability.

------
knzhou
Any study about "criticality" needs to be taken with a huge grain of salt. The
standard methodology is that criticality is synonymous with power laws. And
power laws are straight lines on log-log plots, with slope equal to the
critical exponent.

So when anybody says "we showed X was critical", they actually mean "we
plotted a fuzzy cloud of data points on a log-log plot and fitted a line
through it", nothing more. But you can fit a line through anything. Even a
normal distribution shows up as a line on a log-log plot if your data has a
small enough range.
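
To make that concrete, here's a minimal sketch (my own, in Python, not from
any of the studies): samples from a lognormal distribution, which has no
power-law tail at all, still produce a convincing straight-line fit on a
log-log plot when the range is narrow.

```python
# Illustration: a lognormal sample fitted with a straight line in log-log space.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Histogram over a limited range of the data.
counts, edges = np.histogram(samples, bins=30, range=(1.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0

# Naive "critical exponent" estimate: a line fit in log-log space.
slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
pred = slope * np.log(centers[mask]) + intercept
resid = np.log(counts[mask]) - pred
r2 = 1 - resid.var() / np.log(counts[mask]).var()
print(f"fitted 'exponent' = {slope:.2f}, R^2 = {r2:.3f}")  # R^2 comes out high
                                                           # despite no power law
```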

Criticality studies trade on the reputation of physics, where the idea came
from, and there it works fantastically. For instance, we can measure critical
exponents for liquid/gas phase transitions to three or four significant
figures, and even predict those numbers from pure theory. Applications outside
of physics usually have barely one significant figure, if they're even
measuring power laws at all, and no predictive theory.

~~~
pfdietz
I'm reminded of a story in Ulam's biography.

John von Neumann was at a talk where the presenter had put up a slide with a
cloud of points, and had optimistically drawn a line through the cloud. Von
Neumann muttered, "at least they lie on a plane."

~~~
hdrujvw-4579
I've been meaning to read that biography, if only for its notes about Johnny
von Neumann, who strangely doesn't have a biography of his own.

[https://www.amazon.com/Adventures-Mathematician-S-M-Ulam/dp/...](https://www.amazon.com/Adventures-Mathematician-S-M-Ulam/dp/0520071549)

------
MrQuincle
This article does not have much info.

What exactly exhibits criticality? As correctly stated, all kinds of
phenomena can exhibit power laws.

Avalanche sizes on a sandpile follow a power law - the typical example of
self-organized criticality (Per Bak).
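
For a concrete picture of that example, here is a minimal sketch (my own, not
from the article) of the Bak-Tang-Wiesenfeld sandpile: drop grains on a grid,
topple every site that reaches four grains onto its neighbours, and record the
number of topplings per dropped grain as the avalanche size.

```python
# Bak-Tang-Wiesenfeld sandpile sketch; on a large enough grid the avalanche-size
# distribution approaches a power law (this small grid only approximates it).
import numpy as np

rng = np.random.default_rng(0)
N = 24
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

for _ in range(10_000):
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1                        # drop one grain at a random site
    size = 0
    while True:
        toppling = (grid >= 4).astype(int)
        n = int(toppling.sum())
        if n == 0:
            break
        size += n
        grid -= 4 * toppling               # each toppling site sheds 4 grains...
        grid[1:, :] += toppling[:-1, :]    # ...one to each neighbour; grains
        grid[:-1, :] += toppling[1:, :]    # pushed past the edge are lost
        grid[:, 1:] += toppling[:, :-1]    # (open boundary conditions)
        grid[:, :-1] += toppling[:, 1:]
    if size > 0:
        avalanche_sizes.append(size)

sizes, counts = np.unique(avalanche_sizes, return_counts=True)
print(list(zip(sizes[:10], counts[:10])))  # counts fall off roughly as a power law
```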

Back in the day I played with renormalization group theory to prove SOC, but
most systems break down if there is loss on a microscopic scale. Intuitively,
you need conservation laws on a microscopic scale or the very large events do
not happen.

This is unlikely to be the case in a biological system, so I wouldn't expect
it to be at a critical state, but only hovering around "an interesting area".

~~~
pas
Hengen said: “Recently, people moved away from measuring simple power laws,
which can pop out of random noise, and have started looking at something
called the exponent relation. So far, that’s the only true signature of
criticality, and it’s the basis of all of our measurements.”

They measure neuron activity and how firing spreads - if I understand
correctly, without having looked at the paper. And they found that, thanks to
the sophisticated inhibitory neuron network, the whole network balances around
this mathematical criticality.

I guess it means that if there were less inhibition, then entropically there
would be too many firing events, leading to a feedback frenzy, which is
obviously not that effective for information processing.

And if there were more inhibition, then information wouldn't be able to spread
to all the special small parts of the brain, thus making them too specialized.

Maybe.

~~~
callesgg
Can that be boiled down to something like "the brain is optimised for maximum
performance without breaking down"?

Always staying just below the activation threshold.

~~~
pas
I think it's better phrased as: it has active feedback control loops. If there
is room for more specialization, or if it can get away with using less energy,
fewer neurons, or some other kind of optimization, then it seems it will do
that.

And there are loops that work against these to prevent breakdown, forgetting
too much, slowing down too much, etc.

Of course it's not really known yet what these control systems are exactly.
(At least I'm not aware we have good data and theories about this aspect of
the brain.)

------
dr_dshiv
When we think of all the different neurotransmitters (dopamine, serotonin,
etc) we sometimes forget that the vast majority of neurons are producing
either excitatory outputs (via glutamate) or inhibitory outputs (via GABA).
The other neurotransmitters are modulators of this basic phenomenon of
excitation/inhibition.

The brain has a tightly balanced feedback loop between excitation and
inhibition. Too much excitation and the brain gets a seizure (positive
feedback). Too little, and you black out.
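
A toy way to picture this balance (my own sketch, not the authors' model) is a
branching process: each active neuron triggers, on average, sigma others.
Above sigma = 1 activity runs away (seizure-like), below it activity dies out,
and sigma = 1 is the critical point this whole thread is about.

```python
# Branching-process toy model of excitation/inhibition balance (illustrative).
import numpy as np

def avalanche_size(sigma, rng, cap=10_000):
    """Total activations started by one seed neuron, capped to avoid runaway."""
    active, total = 1, 1
    while active > 0 and total < cap:
        active = rng.poisson(sigma * active)   # offspring of this generation
        total += active
    return total

rng = np.random.default_rng(1)
for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma, rng) for _ in range(2_000)]
    print(f"sigma = {sigma}: mean avalanche {np.mean(sizes):.1f}, largest {max(sizes)}")
```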

TBH, it's quite a good example of the principle of balance in the concept of
Yin-Yang.

~~~
carapace
Your observation reminds me of the sympathetic and parasympathetic nervous
systems.

------
vstuart
Interesting, but that article -- Ma et al. (2019) "Cortical Circuit Dynamics
Are Homeostatically Tuned to Criticality In Vivo"
[https://www.cell.com/neuron/fulltext/S0896-6273(19)30737-8](https://www.cell.com/neuron/fulltext/S0896-6273\(19\)30737-8)
-- makes no mention of Karl Friston and his work (Friston is also mentioned
elsewhere in this thread: @Agebor, re: 'Bayesian brain'), which seems highly
relevant.

E.g.

* Friston, K. (2009). The Free-Energy Principle: A Rough Guide to the Brain? Trends in Cognitive Sciences, 13(7), 293-301.

[https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20prin...](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20-%20a%20rough%20guide%20to%20the%20brain.pdf)

* Friston, K. (2010). The Free-Energy Principle: A Unified Brain Theory? Nature Reviews Neuroscience, 11(2), 127.

[https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20prin...](https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20A%20unified%20brain%20theory.pdf)

* Solms M (2018) "The Hard Problem of Consciousness and the Free Energy Principle." Front. Psychol. 9: 2714. DOI: 10.3389/fpsyg.2018.02714 | PMCID: PMC6363942 | PMID: 30761057

[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363942/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363942/)

[https://www.frontiersin.org/articles/10.3389/fpsyg.2018.0271...](https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02714/full)

~~~
vstuart
Ma et al. include 72 references yet fail to cite the following, which also
seems relevant.

Thermodynamics and signatures of criticality in a network of neurons

[https://www.pnas.org/content/112/37/11508](https://www.pnas.org/content/112/37/11508)

"The activity of a brain—or even a small region of a brain devoted to a
particular task—cannot be just the summed activity of many independent
neurons. Here we use methods from statistical physics to describe the
collective activity in the retina as it responds to complex inputs such as
those encountered in the natural environment. We find that the distribution of
messages that the retina sends to the brain is very special, mathematically
equivalent to the behavior of a material near a critical point in its phase
diagram."

------
tyingq
Off topic, but seeing "wustl.edu" as the source brought up some memories.

In the early 90s, I used their ftp site at wuarchive.wustl.edu very
frequently. It was a reliable source to download open source software like
Perl, tcl, trn, gcc, and so on.

------
cr4zy
This problem of run-away excitation in wetware reminds me of exploding
gradients in artificial neural nets. We try to handle this with data
normalization, batch normalization, and gradient clipping of different sorts
(although unless the clipping is incorporated into the loss, as in PPO
(Schulman), it's very brittle and dependent on the data and network
architecture). So I wonder if we can glean something from these inhibitory
neurons for artificial nets. The opposite problem of vanishing gradients
results from too much inhibition, which happens in recurrent neural nets - so
it's definitely a balance. Currently PPO does the best job IMO, but it is
specific to reinforcement learning.
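
For concreteness, a rough sketch of the two flavours of clipping mentioned
above (my own, in PyTorch; the model and numbers are just placeholders):
gradient-norm clipping bolted on after the backward pass, versus PPO's clipped
surrogate objective, where the clipping lives inside the loss itself.

```python
import torch

# 1) Gradient-norm clipping in an ordinary training step.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap gradient norm
opt.step()

# 2) PPO's clipped surrogate: ratio = pi_new(a|s) / pi_old(a|s), advantage A.
def ppo_loss(ratio, advantage, eps=0.2):
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```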

------
riazrizvi
The article doesn’t make much sense to me because it doesn’t define
criticality, quiescence or chaos.

~~~
andbberger
They are using criticality in the same sense it is used in physics (in the
context of phase transitions). You may find the wiki article useful in this
regard [1].

[1]
[https://en.wikipedia.org/wiki/Critical_phenomena](https://en.wikipedia.org/wiki/Critical_phenomena)

~~~
riazrizvi
This is a link which offers criticality as a metaphor with multiple meanings.
You suggest phase-transition criticality is close to what they mean - the
characteristic boundaries where matter transitions between solid and liquid,
liquid and gas, etc.?

I won’t spin mental cycles guessing. I’d rather the authors were explicit.

~~~
andbberger
Well frankly I did not read the article. But I am all too familiar with
discussions of criticality in the context of neuroscience and it is always
meant in the same way physicists use the word.

I'm afraid this may be a scenario where some deeper knowledge is needed to
fully appreciate the discussion. You would be well rewarded for putting in the
effort though, it's a fascinating notion. I'd recommend starting with the
Ising model [0], which is the canonical system exhibiting critical phenomena.

[0]
[https://en.wikipedia.org/wiki/Ising_model](https://en.wikipedia.org/wiki/Ising_model)

edit: if by 'this' you are referring to the wiki article on critical
phenomena, you're definitely missing the larger picture. The examples that
wiki article lists aren't metaphorical; they're all essentially 'corollaries'
(in a very loose sense) of the same underlying thing. Start with the Ising
model.
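
For anyone who wants to play with it, here's a minimal sketch of the 2D Ising
model with Metropolis updates (my own, not from the wiki article): below the
critical temperature the lattice stays magnetized, above it it disorders, and
near Tc fluctuations become large.

```python
# 2D Ising model, single-spin-flip Metropolis updates (illustrative sketch).
# The exact critical temperature is Tc ~ 2.269 in units of J/k_B.
import numpy as np

rng = np.random.default_rng(0)
N = 32

def sweep(spins, T):
    """One Metropolis sweep: N*N single-spin-flip attempts."""
    for _ in range(N * N):
        i, j = rng.integers(0, N, size=2)
        # Sum of the four nearest neighbours with periodic boundaries.
        nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
              + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        dE = 2 * spins[i, j] * nb        # energy change if this spin flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for T in (1.5, 2.269, 3.5):
    spins = np.ones((N, N), dtype=int)   # start fully ordered
    mags = []
    for step in range(400):
        sweep(spins, T)
        if step >= 200:                  # discard burn-in sweeps
            mags.append(abs(spins.mean()))
    print(f"T = {T}: mean |magnetization| = {np.mean(mags):.2f}")
```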

------
empath75
I wonder how this relates to anesthesia.

------
audiometry
“Recently, people moved away from measuring simple power laws, which can pop
out of random noise, and have started looking at something called the exponent
relation.“

Sadly, there is no further explanation of this. "Exponent relation" sounds
like a synonym for a power law, too.

------
andbberger
The brain being poised towards criticality is a serious meme in the neuro
community. Not that it's an uninteresting idea or one unworthy of pursuit.
It's just a bit hard to get people to take you seriously.

~~~
longtom
See also: [https://twitter.com/farlkriston](https://twitter.com/farlkriston)

Well, these ideas are very likely accurate, but so general that they are
disconnected from solving practical problems. Fine, the brain is a prediction
machine; it optimizes over some program space by annealing while doing
homeostasis/regulation, and maybe it tends to occupy certain kinds of states
now and then. This, however, _tells us very little_ about what the learning
rules for the synaptic weights should be and how we should wire things up.

In fact, I believe human-relevant problems are best solved by such a special
subregion of program space that one needs _pretty specific_ architectural
priors, as otherwise search will take too long. These are unlikely to be
derived from general concepts, but are more likely evolved, either literally
by evolutionary algorithms or by people doing trial and error. The issue is
that general concepts about prediction errors and program spaces know nothing
about our specific world. E.g. none of these general concepts predict the
usefulness of CNNs. CNNs exploit fairly specialized priors about object
translation invariance and locality in image statistics, which are specific
computations that occur in our universe when parts of it are perceived via
geometric projections of EM rays onto an image plane with sensors.

Hinton's capsules go in the right direction by exploiting some more priors
about spatial reference-point invariance, but we need to go deeper. The brain
disassembles the world into stable episodic chunks and operates on them, and
it manages to backpropagate values through such episodic memories. Currently,
no neural architecture does something like this.

